path: root/arch/x86
2023-02-16  kvm: initialize all of the kvm_debugregs structure before sending it to userspace  (Greg Kroah-Hartman)

When calling the KVM_GET_DEBUGREGS ioctl, on some configurations there might be some uninitialized portions of the kvm_debugregs structure that could be copied to userspace. Prevent this, as is done in the other kvm ioctls, by setting the whole structure to 0 before copying anything into it. As a bonus, this reduces the lines of code, since the explicit flag setting and reserved-space zeroing can be removed.

Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <x86@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: stable <stable@kernel.org>
Reported-by: Xingyuan Mo <hdthky0@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Message-Id: <20230214103304.3689213-1-gregkh@linuxfoundation.org>
Tested-by: Xingyuan Mo <hdthky0@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

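A minimal sketch of the zero-first pattern described above (field names follow the KVM UAPI struct; the surrounding ioctl plumbing is omitted):

  static void get_debugregs(struct kvm_vcpu *vcpu, struct kvm_debugregs *dbgregs)
  {
          /* Zero everything up front: flags, reserved[] and any padding
           * are covered in one go, so no stack garbage can reach
           * userspace via the later copy_to_user(). */
          memset(dbgregs, 0, sizeof(*dbgregs));

          memcpy(dbgregs->db, vcpu->arch.db, sizeof(dbgregs->db));
          dbgregs->dr6 = vcpu->arch.dr6;
          dbgregs->dr7 = vcpu->arch.dr7;
  }
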
2023-02-16  KVM: x86/mmu: Make tdp_mmu_allowed static  (David Matlack)

Make tdp_mmu_allowed static since it is only ever used within arch/x86/kvm/mmu/mmu.c.

Link: https://lore.kernel.org/kvm/202302072055.odjDVd5V-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20230213212844.3062733-1-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2023-02-16  x86/hyperv: Fix hv_get/set_register for nested bringup  (Nuno Das Neves)

hv_get_nested_reg only translates SINT0, resulting in the wrong SINT being registered by the nested vmbus. Fix the issue with a new utility function, hv_is_sint_reg.

While at it, improve the clarity of hv_set_non_nested_register and hv_is_synic_reg.

Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Reviewed-by: Jinank Jain <jinankjain@linux.microsoft.com>
Link: https://lore.kernel.org/r/1675980172-6851-1-git-send-email-nunodasneves@linux.microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>

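A plausible shape for the new predicate (register constants per hyperv-tlfs.h; the exact helper in the patch may differ):

  /* SINT0..SINT15 occupy a contiguous register range, so classifying
   * any of them, not just SINT0, is a simple range check: */
  static inline bool hv_is_sint_reg(unsigned int reg)
  {
          return reg >= HV_REGISTER_SINT0 && reg <= HV_REGISTER_SINT15;
  }
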
2023-02-15  Merge tag 'kvm-riscv-6.3-1' of https://github.com/kvm-riscv/linux into HEAD  (Paolo Bonzini)

KVM/riscv changes for 6.3

- Fix wrong usage of PGDIR_SIZE to check page sizes
- Fix privilege mode setting in kvm_riscv_vcpu_trap_redirect()
- Redirect illegal instruction traps to guest
- SBI PMU support for guest

2023-02-15  Merge tag 'kvm-x86-vmx-6.3' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)

KVM VMX changes for 6.3:

- Handle NMI VM-Exits before leaving the noinstr region
- A few trivial cleanups in the VM-Enter flows
- Stop enabling VMFUNC for L1 purely to document that KVM doesn't support EPTP switching (or any other VM function) for L1
- Fix a crash when using eVMCS's enlightened MSR bitmaps

2023-02-15  Merge tag 'kvm-x86-svm-6.3' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)

KVM SVM changes for 6.3:

- Fix a mostly benign overflow bug in SEV's send|receive_update_data()
- Move the SVM-specific "host flags" into vcpu_svm (extracted from the vNMI enabling series)
- A handful of fixes and cleanups

2023-02-15  Merge branches 'pm-cpuidle', 'pm-core' and 'pm-sleep'  (Rafael J. Wysocki)

Merge cpuidle updates, PM core updates and changes related to system sleep handling for 6.3-rc1:

- Make the TEO cpuidle governor check CPU utilization in order to refine idle state selection (Kajetan Puchalski).
- Make Kconfig select the haltpoll cpuidle governor when the haltpoll cpuidle driver is selected and replace a default_idle() call in that driver with arch_cpu_idle() which allows MWAIT to be used (Li RongQing).
- Add Emerald Rapids Xeon support to the intel_idle driver (Artem Bityutskiy).
- Add ARCH_SUSPEND_POSSIBLE dependencies for ARMv4 cpuidle drivers to avoid randconfig build failures (Arnd Bergmann).
- Make kobj_type structures used in the cpuidle sysfs interface constant (Thomas Weißschuh).
- Make the cpuidle driver registration code update microsecond values of idle state parameters in accordance with their nanosecond values if they are provided (Rafael Wysocki).
- Make the PSCI cpuidle driver prevent topology CPUs from being suspended on PREEMPT_RT (Krzysztof Kozlowski).
- Document that pm_runtime_force_suspend() cannot be used with DPM_FLAG_SMART_SUSPEND (Richard Fitzgerald).
- Add EXPORT macros for exporting PM functions from drivers (Richard Fitzgerald).
- Drop "select SRCU" from system sleep Kconfig (Paul E. McKenney).
- Remove /** from non-kernel-doc comments in hibernation code (Randy Dunlap).

* pm-cpuidle:
  cpuidle: psci: Do not suspend topology CPUs on PREEMPT_RT
  cpuidle: driver: Update microsecond values of state parameters as needed
  cpuidle: sysfs: make kobj_type structures constant
  cpuidle: add ARCH_SUSPEND_POSSIBLE dependencies
  intel_idle: add Emerald Rapids Xeon support
  cpuidle-haltpoll: Replace default_idle() with arch_cpu_idle()
  cpuidle-haltpoll: select haltpoll governor
  cpuidle: teo: Introduce util-awareness
  cpuidle: teo: Optionally skip polling states in teo_find_shallower_state()

* pm-core:
  PM: Add EXPORT macros for exporting PM functions
  PM: runtime: Document that force_suspend() is incompatible with SMART_SUSPEND

* pm-sleep:
  PM: sleep: Remove "select SRCU"
  PM: hibernate: swap: don't use /** for non-kernel-doc comments

2023-02-15  perf/x86: Refuse to export capabilities for hybrid PMUs  (Sean Christopherson)

Now that KVM disables vPMU support on hybrid CPUs, WARN and return zeros if perf_get_x86_pmu_capability() is invoked on a hybrid CPU. The helper doesn't provide an accurate accounting of the PMU capabilities for hybrid CPUs and needs to be enhanced if KVM, or anything else outside of perf, wants to act on the PMU capabilities.

Cc: stable@vger.kernel.org
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: https://lore.kernel.org/all/20220818181530.2355034-1-kan.liang@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230208204230.1360502-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

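A sketch of the resulting guard (assuming the X86_FEATURE_HYBRID_CPU flag; the real function fills in more capability fields):

  void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
  {
          /* Refuse to report capabilities that would only be valid for
           * the core type this call happens to run on: */
          if (!x86_pmu_initialized() ||
              WARN_ON_ONCE(cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))) {
                  memset(cap, 0, sizeof(*cap));
                  return;
          }
          /* ... otherwise populate *cap from x86_pmu ... */
  }
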
2023-02-15  KVM: x86/pmu: Disable vPMU support on hybrid CPUs (host PMUs)  (Sean Christopherson)

Disable KVM support for virtualizing PMUs on hosts with hybrid PMUs until KVM gains a sane way to enumerate the hybrid vPMU to userspace and/or gains a mechanism to let userspace opt in to the dangers of exposing a hybrid vPMU to KVM guests.

Virtualizing a hybrid PMU, or at least part of a hybrid PMU, is possible, but it requires careful, deliberate configuration from userspace. E.g. to expose full functionality, vCPUs need to be pinned to pCPUs to prevent migrating a vCPU between a big core and a little core, userspace must enumerate a reasonable topology to the guest, and guest CPUID must be curated per vCPU to enumerate accurate vPMU capabilities.

The last point is especially problematic, as KVM doesn't control which pCPU it runs on when enumerating KVM's vPMU capabilities to userspace, i.e. userspace can't rely on KVM_GET_SUPPORTED_CPUID in its current form.

Alternatively, userspace could enable vPMU support by enumerating the set of features that are common and coherent across all cores, e.g. by filtering PMU events and restricting guest capabilities. But again, that requires userspace to take action far beyond reflecting KVM's supported feature set into the guest.

For now, simply disable vPMU support on hybrid CPUs to avoid inducing seemingly random #GPs in guests, and punt support for hybrid CPUs to a future enabling effort.

Reported-by: Jianfeng Gao <jianfeng.gao@intel.com>
Cc: stable@vger.kernel.org
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: https://lore.kernel.org/all/20220818181530.2355034-1-kan.liang@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230208204230.1360502-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2023-02-15  Merge tag 'kvm-x86-pmu-6.3' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)

KVM x86 PMU changes for 6.3:

- Add support for creating masked events for the PMU filter, allowing userspace to heavily restrict what events the guest can use without needing to create an absurd number of events
- Clean up KVM's handling of "PMU MSRs to save", especially when vPMU support is disabled
- Add PEBS support for Intel SPR

2023-02-15  Merge tag 'kvm-x86-mmu-6.3' of https://github.com/kvm-x86/linux into HEAD  (Paolo Bonzini)

KVM x86 MMU changes for 6.3:

- Fix and cleanup the range-based TLB flushing code, used when KVM is running on Hyper-V
- A few one-off cleanups

2023-02-15  x86/build: Make 64-bit defconfig the default  (Arnd Bergmann)

Running 'make ARCH=x86 defconfig' on anything other than an x86_64 machine currently results in a 32-bit build, which is rarely what anyone wants these days.

Change the default so that the 64-bit config gets used unless the user asks for i386_defconfig, uses ARCH=i386, or runs on a system that "uname -m" identifies as i386/i486/i586/i686.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230215091706.1623070-1-arnd@kernel.org

2023-02-15  dma-mapping: no need to pass a bus_type into get_arch_dma_ops()  (Greg Kroah-Hartman)

The get_arch_dma_ops() arch-specific function never does anything with the struct bus_type that is passed into it, so remove it entirely as it is not needed.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: linux-alpha@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: linux-mips@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: iommu@lists.linux.dev
Cc: linux-arch@vger.kernel.org
Acked-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20230214140121.131859-1-gregkh@linuxfoundation.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

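The signature change, in sketch form ('arch_dma_ops' here stands in for whatever each architecture actually returns):

  /* Before: every implementation ignored 'bus'. */
  static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
  {
          return &arch_dma_ops;
  }

  /* After: the dead parameter is gone. */
  static inline const struct dma_map_ops *get_arch_dma_ops(void)
  {
          return &arch_dma_ops;
  }
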
2023-02-14  x86/hotplug: Remove incorrect comment about mwait_play_dead()  (Srivatsa S. Bhat (VMware))

The comment that says mwait_play_dead() returns only on failure is a bit misleading, because mwait_play_dead() could actually return for valid reasons (such as mwait not being supported by the platform) that do not indicate a failure of the CPU offline operation. So, remove the comment.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230128003751.141317-1-srivatsa@csail.mit.edu

2023-02-14  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)

Pull kvm fixes from Paolo Bonzini:
 "Certain AMD processors are vulnerable to a cross-thread return address predictions bug. When running in SMT mode and one of the sibling threads transitions out of C0 state, the other thread gets access to twice as many entries in the RSB, but unfortunately the predictions of the now-halted logical processor are not purged. Therefore, the executing processor could speculatively execute from locations that the now-halted processor had trained the RSB on.

  The Spectre v2 mitigations cover the Linux kernel, as it fills the RSB when context switching to the idle thread. However, KVM allows a VMM to prevent exiting guest mode when transitioning out of C0; the KVM_CAP_X86_DISABLE_EXITS capability can be used by a VMM to change this behavior. To mitigate the cross-thread return address predictions bug, a VMM must not be allowed to override the default behavior to intercept C0 transitions.

  These patches introduce a KVM module parameter that, if set, will prevent the user from disabling the HLT, MWAIT and CSTATE exits"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  Documentation/hw-vuln: Add documentation for Cross-Thread Return Predictions
  KVM: x86: Mitigate the cross-thread return address predictions bug
  x86/speculation: Identify processors vulnerable to SMT RSB predictions

2023-02-14  x86/mtrr: Revert 90b926e68f50 ("x86/pat: Fix pat_x_mtrr_type() for MTRR disabled case")  (Juergen Gross)

Commit 90b926e68f50 ("x86/pat: Fix pat_x_mtrr_type() for MTRR disabled case") broke the use case of running Xen dom0 kernels on machines with an external disk enclosure attached via USB, see Link tag.

What this commit was originally fixing - SEV-SNP guests on Hyper-V - is a more specialized situation which has other issues at the moment anyway, so reverting this now and addressing the issue properly later is the prudent thing to do. So revert it in time for the 6.2 proper release.

[ bp: Rewrite commit message. ]

Reported-by: Christian Kujau <lists@nerdbynature.de>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/4fe9541e-4d4c-2b2a-f8c8-2d34a7284930@nerdbynature.de

2023-02-14  crypto: x86/aria-avx - Do not use avx2 instructions  (Taehee Yoo)

vpbroadcastb and vpbroadcastd are not AVX instructions, but the aria-avx assembly code contains them. So, a kernel panic will occur if aria-avx runs on a CPU that does not support AVX2.

vbroadcastss and vpshufb are used to avoid using vpbroadcastb; unfortunately, this change reduces performance by about 5%. Also, vpbroadcastd is simply replaced by vmovdqa.

Fixes: ba3579e6e45c ("crypto: aria-avx - add AES-NI/AVX/x86_64/GFNI assembler implementation of aria cipher")
Reported-by: Herbert Xu <herbert@gondor.apana.org.au>
Reported-by: Erhard F. <erhard_f@mailbox.org>
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-02-13  Merge branch kvm-arm64/misc into kvmarm/next  (Oliver Upton)

* kvm-arm64/misc:
  : Miscellaneous updates
  :
  : - Convert CPACR_EL1_TTA to the new, generated system register definitions.
  :
  : - Serialize toggling CPACR_EL1.SMEN to avoid unexpected exceptions when accessing SVCR in the host.
  :
  : - Avoid quiescing the guest if a vCPU accesses its own redistributor's SGIs/PPIs, eliminating the need to IPI. Largely an optimization for nested virtualization, as the L1 accesses the affected registers rather often.
  :
  : - Conversion to kstrtobool()
  :
  : - Common definition of INVALID_GPA across architectures
  :
  : - Enable CONFIG_USERFAULTFD for CI runs of KVM selftests
  KVM: arm64: Fix non-kerneldoc comments
  KVM: selftests: Enable USERFAULTFD
  KVM: selftests: Remove redundant setbuf()
  arm64/sysreg: clean up some inconsistent indenting
  KVM: MMU: Make the definition of 'INVALID_GPA' common
  KVM: arm64: vgic-v3: Use kstrtobool() instead of strtobool()
  KVM: arm64: vgic-v3: Limit IPI-ing when accessing GICR_{C,S}ACTIVER0
  KVM: arm64: Synchronize SMEN on vcpu schedule out
  KVM: arm64: Kill CPACR_EL1_TTA definition

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2023-02-13  Merge branch kvm/kvm-hw-enable-refactor into kvmarm/next  (Oliver Upton)

Merge the kvm_init() + hardware enable rework to avoid conflicts with kvmarm.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

2023-02-13  char/agp: consolidate {alloc,free}_gatt_pages()  (Mike Rapoport)

There is a copy of alloc_gatt_pages() and free_gatt_pages() in several architectures, in arch/$ARCH/include/asm/agp.h. All the copies do exactly the same: alias alloc_gatt_pages() to __get_free_pages(GFP_KERNEL) and alias free_gatt_pages() to free_pages().

Define alloc_gatt_pages() and free_gatt_pages() in drivers/char/agp/agp.h and drop the per-architecture definitions.

Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

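The consolidated definitions follow directly from the aliasing the message describes, roughly:

  /* drivers/char/agp/agp.h: one copy instead of one per architecture. */
  #define alloc_gatt_pages(order)         \
          ((char *)__get_free_pages(GFP_KERNEL, (order)))
  #define free_gatt_pages(table, order)   \
          free_pages((unsigned long)(table), (order))
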
2023-02-13  x86/uv: Use irq_domain_create_hierarchy()  (Johan Hovold)

Use the irq_domain_create_hierarchy() helper to create the hierarchical domain, which both serves as documentation and avoids poking at irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230213104302.17307-14-johan+linaro@kernel.org

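A sketch of what the converted call site looks like (the domain ops, size and flags here are placeholders, not the exact UV values):

  /* Build the child domain on top of the x86 vector domain through the
   * official helper, instead of open-coding the parent linkage: */
  domain = irq_domain_create_hierarchy(x86_vector_domain, 0, 0, fn,
                                       &uv_domain_ops, NULL);
  if (!domain)
          return -ENOMEM;
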
2023-02-13  x86/ioapic: Use irq_domain_create_hierarchy()  (Johan Hovold)

Use the irq_domain_create_hierarchy() helper to create the hierarchical domain, which both serves as documentation and avoids poking at irqdomain internals.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Tested-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230213104302.17307-13-johan+linaro@kernel.org

2023-02-13  Merge tag 'clocksource.2023.02.06b' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into timers/core  (Thomas Gleixner)

Pull clocksource watchdog changes from Paul McKenney:

 o Improvements to clocksource-watchdog console messages.

 o Loosening of the clocksource-watchdog skew criteria to match those of NTP (500 parts per million, relaxed from 400 parts per million). If it is good enough for NTP, it is good enough for the clocksource watchdog.

 o Suspend clocksource-watchdog checking temporarily when high memory latencies are detected. This avoids the false-positive clock-skew events that have been seen on production systems running memory-intensive workloads.

 o On systems where the TSC is deemed trustworthy, use it as the watchdog time source, but only when specifically requested using the tsc=watchdog kernel boot parameter. This permits clock-skew events to be detected, but avoids forcing workloads to use the slow HPET and ACPI PM timers. These last two timers are slow enough to cause systems to be needlessly marked bad on the one hand, and real skew does sometimes happen on production systems running production workloads on the other. And sometimes it is the fault of the TSC, or at least of the firmware that told the kernel to program the TSC with the wrong frequency.

 o Add a tsc=revalidate kernel boot parameter to allow the kernel to diagnose cases where the TSC hardware works fine, but was told by firmware to tick at the wrong frequency. Such cases are rare, but they really have happened on production systems.

Link: https://lore.kernel.org/r/20230210193640.GA3325193@paulmck-ThinkPad-P17-Gen-1

2023-02-13  um: Support LTO  (Peter Foley)

Only a handful of changes are necessary to get it to work.

Signed-off-by: Peter Foley <pefoley2@pefoley.com>
Signed-off-by: Richard Weinberger <richard@nod.at>

2023-02-13  x86/xen/time: prefer tsc as clocksource when it is invariant  (Krister Johansen)

KVM elects to use tsc instead of kvm-clock when it can detect that the TSC is invariant (as of commit 7539b174aef4 ("x86: kvmguest: use TSC clocksource if invariant TSC is exposed")).

Notable cloud vendors[1] and performance engineers[2] recommend that Xen users preferentially select tsc over xen-clocksource due to the performance penalty incurred by the latter. These articles are persuasive and tailored to specific use cases. In order to understand the tradeoffs around this choice more fully, this author had to reference the documented[3] complexities around the Xen configuration, as well as the kernel's clocksource selection algorithm. Many users may not make this effort, and may not end up with the right clock source configured in their guest. The approach taken in the kvm-clock module spares users this confusion, where possible.

Both the Intel SDM[4] and the Xen tsc documentation explain that marking a tsc as invariant means that it should be considered stable by the OS and is eligible to be used as a wall clock source.

In order to obtain better out-of-the-box performance, and reduce the need for user tuning, follow kvm's approach and decrease the xen clock rating so that tsc is preferable, if it is invariant, stable, and will never be emulated.

[1] https://aws.amazon.com/premiumsupport/knowledge-center/manage-ec2-linux-clock-source/
[2] https://www.brendangregg.com/blog/2021-09-26/the-speed-of-time.html
[3] https://xenbits.xen.org/docs/unstable/man/xen-tscmode.7.html
[4] Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3b: System Programming Guide, Part 2, Section 17.17.1, Invariant TSC

Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
Code-reviewed-by: David Reaver <me@davidreaver.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20221216162118.GB2633@templeofstupid.com
Signed-off-by: Juergen Gross <jgross@suse.com>

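Illustratively (rating value and helper name are placeholders; the clocksource core simply selects the highest-rated registered source):

  /* tsc registers with rating 300 when usable; dropping xen-clocksource
   * just below that lets an invariant TSC win by default, mirroring
   * kvm-clock's behavior: */
  if (xen_tsc_is_safe())          /* invariant, stable, never emulated */
          xen_clocksource.rating = 299;
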
2023-02-13  x86/xen: mark xen_pv_play_dead() as __noreturn  (Juergen Gross)

Mark xen_pv_play_dead() and, related to that, xen_cpu_bringup_again() as "__noreturn".

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221125063248.30256-3-jgross@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>

2023-02-13  x86/xen: don't let xen_pv_play_dead() return  (Juergen Gross)

A function called via the paravirt play_dead() hook should not return to the caller. xen_pv_play_dead() has a problem in this regard, as it currently will return in case an offlined cpu is brought to life again.

This can be changed only by doing basically a longjmp() to cpu_bringup_and_idle(), as the hypercall for bringing down the cpu will just return when the cpu is coming up again. Just re-initializing the cpu isn't possible, as the Xen hypervisor will deny that operation.

So introduce xen_cpu_bringup_again() resetting the stack and calling cpu_bringup_and_idle(), which can be called after HYPERVISOR_vcpu_op() in xen_pv_play_dead().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221125063248.30256-2-jgross@suse.com
Signed-off-by: Juergen Gross <jgross@suse.com>

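Roughly the shape the message describes (argument handling simplified; VCPUOP_down returns when the vCPU is onlined again):

  static void __noreturn xen_pv_play_dead(void)
  {
          play_dead_common();
          HYPERVISOR_vcpu_op(VCPUOP_down, xen_vcpu_nr(smp_processor_id()), NULL);
          /* We only get here once the CPU has been brought back up.
           * Don't return into the dead caller frame; reset the stack
           * and restart the bringup path instead. */
          xen_cpu_bringup_again((unsigned long)task_pt_regs(current));
          BUG();
  }
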
2023-02-11  perf/x86/intel/uncore: Add Meteor Lake support  (Kan Liang)

The uncore subsystem for Meteor Lake is similar to the previous Alder Lake. The main difference is that MTL provides PMU support for different tiles, while ADL only provides PMU support for the whole package. On ADL, there are CBOX, ARB, and clockbox uncore PMON units. On MTL, they are split into CBOX/HAC_CBOX, ARB/HAC_ARB, and CNCU/SNCU, which provide a fixed counter for clockticks. Also, new MSR addresses are introduced on MTL.

The IMC uncore PMON is the same as Alder Lake. Add the new PCI IDs of the IMC for Meteor Lake.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20230210190238.1726237-1-kan.liang@linux.intel.com

2023-02-11  x86/entry: Fix unwinding from kprobe on PUSH/POP instruction  (Josh Poimboeuf)

If a kprobe (INT3) is set on a stack-modifying single-byte instruction, like a single-byte PUSH/POP or a LEAVE, ORC fails to unwind past it:

  Call Trace:
  <TASK>
  dump_stack_lvl+0x57/0x90
  handler_pre+0x33/0x40 [kprobe_example]
  aggr_pre_handler+0x49/0x90
  kprobe_int3_handler+0xe3/0x180
  do_int3+0x3a/0x80
  exc_int3+0x7d/0xc0
  asm_exc_int3+0x35/0x40
  RIP: 0010:kernel_clone+0xe/0x3a0
  Code: cc e8 16 b2 bf 00 66 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 41 57 41 56 41 55 41 54 cc <53> 48 89 fb 48 83 ec 68 4c 8b 27 65 48 8b 04 25 28 00 00 00 48 89
  RSP: 0018:ffffc9000074fda0 EFLAGS: 00000206
  RAX: 0000000000808100 RBX: ffff888109de9d80 RCX: 0000000000000000
  RDX: 0000000000000011 RSI: ffff888109de9d80 RDI: ffffc9000074fdc8
  RBP: ffff8881019543c0 R08: ffffffff81127e30 R09: 00000000e71742a5
  R10: ffff888104764a18 R11: 0000000071742a5e R12: ffff888100078800
  R13: ffff888100126000 R14: 0000000000000000 R15: ffff888100126005
  ? __pfx_call_usermodehelper_exec_async+0x10/0x10
  ? kernel_clone+0xe/0x3a0
  ? user_mode_thread+0x5b/0x80
  ? __pfx_call_usermodehelper_exec_async+0x10/0x10
  ? call_usermodehelper_exec_work+0x77/0xb0
  ? process_one_work+0x299/0x5f0
  ? worker_thread+0x4f/0x3a0
  ? __pfx_worker_thread+0x10/0x10
  ? kthread+0xf2/0x120
  ? __pfx_kthread+0x10/0x10
  ? ret_from_fork+0x29/0x50
  </TASK>

The problem is that #BP saves the pointer to the instruction immediately *after* the INT3, rather than to the INT3 itself. The instruction replaced by the INT3 hasn't actually run, but ORC assumes otherwise and expects the wrong stack layout.

Fix it by annotating the #BP exception as a non-signal stack frame, which tells the ORC unwinder to decrement the instruction pointer before looking up the corresponding ORC entry.

Reported-by: Chen Zhongjin <chenzhongjin@huawei.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/baafcd3cc1abb14cb757fe081fa696012a5265ee.1676068346.git.jpoimboe@kernel.org

2023-02-11  x86/unwind/orc: Add 'signal' field to ORC metadata  (Josh Poimboeuf)

Add a 'signal' field which allows unwind hints to specify whether the instruction pointer should be taken literally (like for most interrupts and exceptions) rather than decremented (like for call stack return addresses) when used to find the next ORC entry.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/d2c5ec4d83a45b513d8fd72fab59f1a8cfa46871.1676068346.git.jpoimboe@kernel.org

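The rule the field encodes, in sketch form:

  /* A call return address points *after* the CALL, so look up (ip - 1)
   * to land inside the calling instruction.  For interrupt/exception
   * frames (signal set), the saved ip is the trapped instruction itself
   * and must be used as-is: */
  orc = orc_find(state->signal ? state->ip : state->ip - 1);
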
2023-02-11  x86/perf/zhaoxin: Add stepping check for ZXC  (silviazhao)

Some Nano series processors will raise a GP fault when accessing the PMC fixed counter. Meanwhile, their hardware support for PMCs has not been announced externally. So exclude Nano CPUs from ZXC by checking stepping information. This is an unambiguous way to differentiate between ZXC and Nano CPUs.

Following is the Nano and ZXC FMS information:

  Nano FMS: Family=6, Model=F, Stepping=[0-A][C-D]
  ZXC FMS:  Family=6, Model=F, Stepping=E-F OR
            Family=6, Model=0x19, Stepping=0-3

Fixes: 3a4ac121c2ca ("x86/perf: Add hardware performance events support for Zhaoxin CPU.")
Reported-by: Arjan <8vvbbqzo567a@nospam.xutrox.com>
Reported-by: Kevin Brace <kevinbrace@gmx.com>
Signed-off-by: silviazhao <silviazhao-oc@zhaoxin.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=212389

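A sketch of the screen, using the FMS values quoted above:

  /* Family 6, model 0xf: steppings below 0xe are Nano parts, which can
   * #GP on the fixed counter; only steppings 0xe-0xf are ZXC. */
  if (boot_cpu_data.x86_model == 0x0f &&
      boot_cpu_data.x86_stepping < 0x0e)
          return -ENODEV;
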
2023-02-11  perf/x86/intel/ds: Fix the conversion from TSC to perf time  (Kan Liang)

The time order is incorrect when the TSC in a PEBS record is used.

  $ perf record -e cycles:upp dd if=/dev/zero of=/dev/null count=10000
  $ perf script --show-task-events
    perf-exec     0     0.000000: PERF_RECORD_COMM: perf-exec:915/915
           dd   915   106.479872: PERF_RECORD_COMM exec: dd:915/915
           dd   915   106.483270: PERF_RECORD_EXIT(915:915):(914:914)
           dd   915   106.512429:  1 cycles:upp:  ffffffff96c011b7 [unknown] ([unknown])
  ...

The perf time is from sched_clock_cpu(). The current PEBS code unconditionally converts the TSC to native_sched_clock(). There is a shift between the two clocks. If the TSC is stable, the shift is consistent, __sched_clock_offset. If the TSC is unstable, the shift has to be calculated at runtime.

This patch doesn't support the conversion when the TSC is unstable. The TSC-unstable case is a corner case and very unlikely to happen. If it happens, the TSC in a PEBS record will be dropped and the code will fall back to perf_event_clock().

Fixes: 47a3aeb39e8d ("perf/x86/intel/pebs: Fix PEBS timestamps overwritten")
Reported-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/CAM9d7cgWDVAq8-11RbJ2uGfwkKD6fA-OMwOKDrNUrU_=8MgEjg@mail.gmail.com/

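A sketch of the stable-TSC conversion (helper names as found in arch/x86/kernel/tsc.c and kernel/sched/clock.c; the fallback path is omitted):

  /* With a stable TSC, perf time equals native_sched_clock() plus a
   * constant offset, so the TSC captured in the PEBS record converts as: */
  if (using_native_sched_clock() && sched_clock_stable())
          data->time = native_sched_clock_from_tsc(tsc) +
                       __sched_clock_offset;
  /* else: drop the PEBS TSC and keep the perf_event_clock() time */
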
2023-02-11  x86/tsc: Do feature check as the very first thing  (Borislav Petkov (AMD))

Do the feature check as the very first thing in the function. Everything else comes after that and is meaningless work if the TSC CPUID bit is not even set. Switch to cpu_feature_enabled() too, while at it.

No functional changes.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/Y5990CUCuWd5jfBH@zn.tnic

2023-02-11  x86/tsc: Make recalibrate_cpu_khz() export GPL only  (Borislav Petkov (AMD))

A quick search doesn't reveal any use outside of the kernel - which would be questionable to begin with anyway - so make the export GPL only.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/Y599miBzWRAuOwhg@zn.tnic

2023-02-11  x86/cacheinfo: Remove unused trace variable  (Borislav Petkov (AMD))

15cd8812ab2c ("x86: Remove the CPU cache size printk's") removed the last use of the trace local var. Remove it too, along with the useless trace cache case.

No functional changes.

Reported-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230210234541.9694-1-bp@alien8.de
Link: http://lore.kernel.org/r/20220705073349.1512-1-jiapeng.chong@linux.alibaba.com

2023-02-10  x86: um: vdso: Add '%rcx' and '%r11' to the syscall clobber list  (Ammar Faizi)

The 'syscall' instruction clobbers '%rcx' and '%r11', but they are not listed in the inline assembly that performs the syscall instruction.

No real bug has been observed; it hasn't been buggy only by luck, because '%rcx' and '%r11' are caller-saved registers, are not used in the functions, and the functions are never inlined.

Add them to the clobber list for code correctness.

Fixes: f1c2bb8b9964ed31de988910f8b1cfb586d30091 ("um: implement a x86_64 vDSO")
Signed-off-by: Ammar Faizi <ammarfaizi2@gnuweeb.org>
Signed-off-by: Richard Weinberger <richard@nod.at>

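A self-contained illustration of the corrected constraint list (a generic one-argument wrapper, not the vDSO code itself):

  static inline long raw_syscall1(long nr, long arg1)
  {
          long ret;

          asm volatile ("syscall"
                        : "=a" (ret)
                        : "0" (nr), "D" (arg1)
                        /* 'syscall' overwrites %rcx (return RIP) and
                         * %r11 (saved RFLAGS), so the compiler must not
                         * keep live values in them across the insn: */
                        : "rcx", "r11", "memory");
          return ret;
  }
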
2023-02-10  rust: arch/um: Disable FP/SIMD instruction to match x86  (David Gow)

The kernel disables all SSE and similar FP/SIMD instructions on x86-based architectures (partly because we shouldn't be using floats in the kernel, and partly to avoid the need for stack alignment, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53383 )

UML does not do the same thing, which isn't in itself a problem, but does add to the list of differences between UML and "normal" x86 builds. In addition, there was a crash bug with LLVM < 15 / rustc < 1.65 when building with SSE, so disabling it fixes rust builds with earlier compiler versions, see: https://github.com/Rust-for-Linux/linux/pull/881

Signed-off-by: David Gow <davidgow@google.com>
Reviewed-by: Sergio González Collado <sergio.collado@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>

2023-02-10  efi: Add mixed-mode thunk recipe for GetMemoryAttributes  (Ard Biesheuvel)

EFI mixed mode on x86 requires a recipe for each protocol method or firmware service that takes u64 arguments by value, or returns pointer or 'native int' (UINTN) values by reference (e.g., through a void ** or unsigned long * parameter), due to the fact that these types cannot be translated 1:1 between the i386 and MS x64 calling conventions.

So add the missing recipe for GetMemoryAttributes, which is not actually being used yet on x86, but the code exists and can be built for x86, so let's make sure it works as it should.

Cc: Evgeniy Baskov <baskov@ispras.ru>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

2023-02-10  KVM: x86: Mitigate the cross-thread return address predictions bug  (Tom Lendacky)

By default, KVM/SVM will intercept attempts by the guest to transition out of C0. However, the KVM_CAP_X86_DISABLE_EXITS capability can be used by a VMM to change this behavior. To mitigate the cross-thread return address predictions bug (X86_BUG_SMT_RSB), a VMM must not be allowed to override the default behavior to intercept C0 transitions.

Use a module parameter to control the mitigation on processors that are vulnerable to X86_BUG_SMT_RSB. If the processor is vulnerable to the X86_BUG_SMT_RSB bug and the module parameter is set to mitigate the bug, KVM will not allow the disabling of the HLT, MWAIT and CSTATE exits.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <4019348b5e07148eb4d593380a5f6713b93c9a16.1675956146.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

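The gate, in sketch form (the parameter name follows the commit description; its exact placement inside the KVM_CAP_X86_DISABLE_EXITS handler may differ):

  static bool __read_mostly mitigate_smt_rsb;
  module_param(mitigate_smt_rsb, bool, 0444);

  /* In the KVM_CAP_X86_DISABLE_EXITS handler: refuse to let userspace
   * turn off the HLT/MWAIT/CSTATE intercepts on vulnerable parts. */
  if (mitigate_smt_rsb && boot_cpu_has_bug(X86_BUG_SMT_RSB))
          break;
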
2023-02-10  x86/speculation: Identify processors vulnerable to SMT RSB predictions  (Tom Lendacky)

Certain AMD processors are vulnerable to a cross-thread return address predictions bug. When running in SMT mode and one of the sibling threads transitions out of C0 state, the other sibling thread could use return target predictions from the sibling thread that transitioned out of C0.

The Spectre v2 mitigations cover the Linux kernel, as it fills the RSB when context switching to the idle thread. However, KVM allows a VMM to prevent exiting guest mode when transitioning out of C0. A guest could act maliciously in this situation, so create a new x86 BUG that can be used to detect if the processor is vulnerable.

Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Message-Id: <91cec885656ca1fcd4f0185ce403a53dd9edecb7.1675956146.git.thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

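Flagging would follow the usual pattern in arch/x86/kernel/cpu/common.c (the vulnerability-table entry itself is model-specific and omitted here):

  if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
          setup_force_cpu_bug(X86_BUG_SMT_RSB);
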
2023-02-10  crypto: x86/blowfish - Eliminate use of SYM_TYPED_FUNC_START in asm  (Peter Lafreniere)

Now that we use the ECB/CBC macros, none of the asm functions in blowfish-x86_64 are called indirectly. So we can safely use SYM_FUNC_START instead of SYM_TYPED_FUNC_START, with no functional effect, allowing us to remove an include.

Signed-off-by: Peter Lafreniere <peter@n8pjl.ca>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-02-10  crypto: x86/blowfish - Convert to use ECB/CBC helpers  (Peter Lafreniere)

We can simplify the blowfish-x86_64 glue code by using the preexisting ECB/CBC helper macros. Additionally, this allows for easier reuse of asm functions in later x86 implementations of blowfish.

This involves:

1 - Modifying blowfish_dec_blk_4way() to xor outputs when a flag is passed.
2 - Renaming blowfish_dec_blk_4way() to __blowfish_dec_blk_4way().
3 - Creating two wrapper functions around __blowfish_dec_blk_4way() for use in the ECB/CBC macros.
4 - Removing the custom ecb_encrypt() and cbc_encrypt() routines in favor of macro-based routines.

Signed-off-by: Peter Lafreniere <peter@n8pjl.ca>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-02-10  crypto: x86/blowfish - Remove unused encode parameter  (Peter Lafreniere)

The blowfish-x86_64 encryption functions have an unused argument. Remove it.

This involves:

1 - Removing the xor_block() macros.
2 - Removing handling of the fourth argument from the __blowfish_enc_blk{,_4way}() functions.
3 - Renaming __blowfish_enc_blk{,_4way}() to blowfish_enc_blk{,_4way}().
4 - Removing the blowfish_enc_blk{,_4way}() wrappers from blowfish_glue.c.
5 - Temporarily using SYM_TYPED_FUNC_START for the now indirectly-callable encode functions.

Signed-off-by: Peter Lafreniere <peter@n8pjl.ca>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

2023-02-09  mm, arch: add generic implementation of pfn_valid() for FLATMEM  (Mike Rapoport (IBM))

Every architecture that supports the FLATMEM memory model defines its own version of pfn_valid() that essentially compares a pfn to max_mapnr.

Use the mips/powerpc version, implemented as a static inline, as a generic implementation of pfn_valid() and drop its per-architecture definitions.

[rppt@kernel.org: fix the generic pfn_valid()]
Link: https://lkml.kernel.org/r/Y9lg7R1Yd931C+y5@kernel.org
Link: https://lkml.kernel.org/r/20230129124235.209895-5-rppt@kernel.org
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Guo Ren <guoren@kernel.org> [csky]
Acked-by: Huacai Chen <chenhuacai@loongson.cn> [LoongArch]
Acked-by: Stafford Horne <shorne@gmail.com> [OpenRISC]
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Reviewed-by: David Hildenbrand <david@redhat.com>
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Cc: Brian Cain <bcain@quicinc.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

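The generic version, modeled on the mips/powerpc implementation the message references (see the lore links above for the final form):

  #ifndef pfn_valid
  static inline int pfn_valid(unsigned long pfn)
  {
          /* avoid <linux/mm.h> include hell */
          extern unsigned long max_mapnr;
          unsigned long pfn_offset = ARCH_PFN_OFFSET;

          return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
  }
  #define pfn_valid pfn_valid
  #endif
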
2023-02-09  mm: introduce __vm_flags_mod and use it in untrack_pfn  (Suren Baghdasaryan)

There are scenarios when vm_flags can be modified without exclusive mmap_lock, such as:

- after the VMA was isolated and mmap_lock was downgraded or dropped
- in exit_mmap when there are no other mm users and locking is unnecessary

Introduce __vm_flags_mod to avoid assertions when the caller takes responsibility for the required locking. Pass a hint to untrack_pfn to conditionally use __vm_flags_mod for flags modification to avoid the assertion.

Link: https://lkml.kernel.org/r/20230126193752.297968-7-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arjun Roy <arjunroy@google.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Minchan Kim <minchan@google.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Oskolkov <posk@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Punit Agrawal <punit.agrawal@bytedance.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Sebastian Reichel <sebastian.reichel@collabora.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

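The locked/lockless split, sketched (signatures inferred from the description above):

  /* Asserts the required mmap_lock/VMA locking: */
  void vm_flags_mod(struct vm_area_struct *vma,
                    vm_flags_t set, vm_flags_t clear);

  /* No assertion: the caller vouches for exclusion, e.g. in exit_mmap()
   * or after the VMA has been isolated: */
  void __vm_flags_mod(struct vm_area_struct *vma,
                      vm_flags_t set, vm_flags_t clear);
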
2023-02-09  mm: replace vma->vm_flags direct modifications with modifier calls  (Suren Baghdasaryan)

Replace direct modifications to vma->vm_flags with calls to modifier functions to be able to track flag changes and to keep vma locking correctness.

[akpm@linux-foundation.org: fix drivers/misc/open-dice.c, per Hyeonggon Yoo]
Link: https://lkml.kernel.org/r/20230126193752.297968-5-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arjun Roy <arjunroy@google.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Minchan Kim <minchan@google.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Peter Oskolkov <posk@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Punit Agrawal <punit.agrawal@bytedance.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

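Typical conversions (modifier names per the series):

  vma->vm_flags |= VM_DONTEXPAND;         /* before */
  vm_flags_set(vma, VM_DONTEXPAND);       /* after  */

  vma->vm_flags &= ~VM_MAYWRITE;          /* before */
  vm_flags_clear(vma, VM_MAYWRITE);       /* after  */

  vma->vm_flags = flags;                  /* before, on a fresh VMA */
  vm_flags_init(vma, flags);              /* after  */
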
2023-02-09  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

net/devlink/leftover.c / net/core/devlink.c:
  565b4824c39f ("devlink: change port event netdev notifier from per-net to global")
  f05bd8ebeb69 ("devlink: move code to a dedicated directory")
  687125b5799c ("devlink: split out core code")
  https://lore.kernel.org/all/20230208094657.379f2b1a@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-02-09  efi: x86: Wire up IBT annotation in memory attributes table  (Ard Biesheuvel)

UEFI v2.10 extends the EFI memory attributes table with a flag that indicates whether or not all RuntimeServicesCode regions were constructed with ENDBR landing pads, permitting the OS to map these regions with IBT restrictions enabled.

So let's take this into account on x86 as well.

Suggested-by: Peter Zijlstra <peterz@infradead.org> # ibt_save() changes
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

2023-02-08  x86/cpu: Add Lunar Lake M  (Kan Liang)

Intel confirmed the existence of this CPU in its Q4'2022 earnings presentation. Add the CPU model number.

[ dhansen: Merging these as soon as possible makes it easier on all the folks developing model-specific features. ]

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/20230208172340.158548-1-tony.luck%40intel.com

2023-02-08  x86/kprobes: Fix 1 byte conditional jump target  (Nadav Amit)

Commit 3bc753c06dd0 ("kbuild: treat char as always unsigned") broke kprobes. Setting a probe-point on a 1-byte conditional jump can cause the kernel to crash when the (signed) relative jump offset gets treated as unsigned.

Fix by replacing the unsigned 'immediate.bytes' (plus a cast) with the signed 'immediate.value' when assigning to the relative jump offset.

[ dhansen: clarified changelog ]

Fixes: 3bc753c06dd0 ("kbuild: treat char as always unsigned")
Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/20230208071708.4048-1-namit%40vmware.com

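A self-contained demonstration of the signedness trap (userspace C, not the kernel code; insn->immediate.value already stores the operand sign-extended):

  #include <stdio.h>

  int main(void)
  {
          /* rel8 operand of a 1-byte conditional jump: -5 */
          unsigned char imm_byte = 0xfb;

          /* Widening the unsigned byte zero-extends: -5 becomes +251,
           * so the computed jump target lands in the wrong place. */
          long wrong = (long)imm_byte;

          /* Widening through a signed type sign-extends correctly: */
          long right = (long)(signed char)imm_byte;

          printf("wrong=%ld right=%ld\n", wrong, right); /* 251 vs -5 */
          return 0;
  }
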