path: root/arch
Age  Commit message  Author
2025-04-17  perf/x86/intel: Add PMU support for Clearwater Forest  (Dapeng Mi)
From the PMU's perspective, Clearwater Forest is similar to the previous generation Sierra Forest. The key differences are the ARCH PEBS feature and the three newly added fixed counters for topdown L1 metrics events. ARCH PEBS support is added in the following patches. This patch provides support for the basic perfmon features and the three new fixed counters. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20250415114428.341182-3-dapeng1.mi@linux.intel.com
2025-04-17  Merge branch 'perf/urgent' into perf/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-04-17  perf/x86/intel: Add Panther Lake support  (Kan Liang)
From the PMU's perspective, Panther Lake is similar to the previous generation Lunar Lake. Both are hybrid platforms with e-cores and p-cores. The key differences are the ARCH PEBS feature and several new events. ARCH PEBS support is added in the following patches; the new events will be supported later in the perf tool. Share the code path with Lunar Lake and only update the name. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20250415114428.341182-2-dapeng1.mi@linux.intel.com
2025-04-17  perf/x86/intel: Allow to update user space GPRs from PEBS records  (Dapeng Mi)
Currently, when a user samples user space GPRs (--user-regs option) with PEBS, the user space GPRs actually always come from the software PMI instead of from the PEBS hardware. This leads to the sampled GPRs possibly being inaccurate in the single-PEBS-record case because of the skid between the counter overflow and GPR sampling on the PMI. The large PEBS case is even worse: if the user sets the exclude_kernel attribute, large PEBS is used to sample user space GPRs, but since the PEBS GPRs group is not really enabled, all samples in the large PEBS record end up sharing the same piece of user space GPRs, as this reproducer shows:
  $ perf record -e branches:pu --user-regs=ip,ax -c 100000 ./foo
  $ perf report -D | grep "AX"
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
  .... AX 0x000000003a0d4ead
So enable the GPRs group for user space GPR sampling and prioritize reading GPRs from PEBS. If the PEBS-sampled GPRs are not user space GPRs (single PEBS record case), perf_sample_regs_user() modifies them to user space GPRs. [ mingo: Clarified the changelog. ] Fixes: c22497f5838c ("perf/x86/intel: Support adaptive PEBS v4") Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250415104135.318169-2-dapeng1.mi@linux.intel.com
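For reference, the same user-GPR sampling setup can also be requested programmatically via perf_event_open(2). The sketch below is illustrative only (it is not part of the patch) and assumes an x86-64 host, with register indices taken from <asm/perf_regs.h>:

  #include <linux/perf_event.h>
  #include <asm/perf_regs.h>
  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <string.h>
  #include <unistd.h>

  /* Open a precise (PEBS) branch counter that samples user-space IP and AX. */
  static int open_user_regs_counter(pid_t pid)
  {
      struct perf_event_attr attr;

      memset(&attr, 0, sizeof(attr));
      attr.size = sizeof(attr);
      attr.type = PERF_TYPE_HARDWARE;
      attr.config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;
      attr.sample_period = 100000;
      attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_REGS_USER;
      attr.sample_regs_user = (1ULL << PERF_REG_X86_IP) |
                              (1ULL << PERF_REG_X86_AX);
      attr.exclude_kernel = 1;   /* user space only, as in the reproducer */
      attr.precise_ip = 2;       /* request PEBS */

      return syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);
  }

Decoding the PERF_SAMPLE_REGS_USER records from the mmap'd ring buffer is unchanged by this fix; only the origin (PEBS versus PMI) of the reported registers differs.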
2025-04-17  perf/x86/intel: Don't clear perf metrics overflow bit unconditionally  (Dapeng Mi)
The code below would always unconditionally clear other status bits, like the perf metrics overflow bit, once the PEBS buffer overflows: status &= intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI; This is incorrect. The perf metrics overflow bit should be cleared only when fixed counter 3 is in the PEBS counter group; otherwise a perf metrics overflow could be missed and never handled. Closes: https://lore.kernel.org/all/20250225110012.GK31462@noisy.programming.kicks-ass.net/ Fixes: 7b2c05a15d29 ("perf/x86/intel: Generic support for hardware TopDown metrics") Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250415104135.318169-1-dapeng1.mi@linux.intel.com
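To make the masking arithmetic concrete, here is a small stand-alone demonstration of why the unconditional mask drops the metrics overflow bit. The bit positions and the fixed3_in_pebs_group condition are assumptions for illustration, not values copied from the kernel headers or from the patch:

  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative bit positions (treat as assumptions for this sketch). */
  #define GLOBAL_STATUS_PERF_METRICS_OVF  (1ULL << 48)
  #define GLOBAL_STATUS_TRACE_TOPAPMI     (1ULL << 55)

  int main(void)
  {
      uint64_t intel_ctrl = 0xffULL;   /* enabled GP/fixed counters */
      uint64_t status = 0x1ULL | GLOBAL_STATUS_PERF_METRICS_OVF;

      /* Old behaviour: the metrics overflow bit is silently dropped. */
      uint64_t old = status & (intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI);

      /* Fixed behaviour (sketch): keep the bit unless fixed counter 3 is
       * in the PEBS counter group, where PEBS already handles it. */
      int fixed3_in_pebs_group = 0;    /* assumed condition */
      uint64_t mask = intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
      if (!fixed3_in_pebs_group)
          mask |= GLOBAL_STATUS_PERF_METRICS_OVF;
      uint64_t fixed = status & mask;

      printf("old=%#llx fixed=%#llx\n",
             (unsigned long long)old, (unsigned long long)fixed);
      return 0;
  }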
2025-04-17  perf/x86/intel/uncore: Fix the scale of IIO free running counters on SPR  (Kan Liang)
The scale of the IIO bandwidth-in free-running counters is inherited from ICX: the counter increments for every 32 bytes rather than 4 bytes. The IIO bandwidth-out free-running counters don't increment by a consistent size; the increment depends on the requested size, so it's impossible to assign a fixed increment. Remove them from the event_descs. Fixes: 0378c93a92e2 ("perf/x86/intel/uncore: Support IIO free-running counters on Sapphire Rapids server") Reported-by: Tang Jun <dukang.tj@alibaba-inc.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250416142426.3933977-3-kan.liang@linux.intel.com
2025-04-17  perf/x86/intel/uncore: Fix the scale of IIO free running counters on ICX  (Kan Liang)
There was a mistake in the ICX uncore spec too: the counter increments for every 32 bytes rather than 4 bytes. As on SNR, there are 1 ioclk and 8 IIO bandwidth-in free-running counters, so reuse snr_uncore_iio_freerunning_events(). Fixes: 2b3b76b5ec67 ("perf/x86/intel/uncore: Add Ice Lake server uncore support") Reported-by: Tang Jun <dukang.tj@alibaba-inc.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250416142426.3933977-2-kan.liang@linux.intel.com
2025-04-17  perf/x86/intel/uncore: Fix the scale of IIO free running counters on SNR  (Kan Liang)
There was a mistake in the SNR uncore spec. The counter increments for every 32 bytes of data sent from the IO agent to the SOC, not 4 bytes as was documented in the spec. The event list has been updated: "EventName": "UNC_IIO_BANDWIDTH_IN.PART0_FREERUN", "BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC", Update the scale of the IIO bandwidth-in free-running counters as well. Fixes: 210cc5f9db7a ("perf/x86/intel/uncore: Add uncore support for Snow Ridge server") Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20250416142426.3933977-1-kan.liang@linux.intel.com
2025-04-17  x86/boot/startup: Disable LTO for the startup code  (Nathan Chancellor)
When building with CONFIG_LTO_CLANG, there is an error in the x86 boot startup code because it builds with a different code model than the rest of the kernel: ld.lld: error: Function Import: link error: linking module flags 'Code Model': IDs have conflicting values: 'i32 2' from vmlinux.a(head64.o at 1302448), and 'i32 1' from vmlinux.a(map_kernel.o at 1314208) ld.lld: error: Function Import: link error: linking module flags 'Code Model': IDs have conflicting values: 'i32 2' from vmlinux.a(common.o at 1306108), and 'i32 1' from vmlinux.a(gdt_idt.o at 1314148) As this directory is for code that only runs during early system initialization, LTO is not very important, so filter out the LTO flags from KBUILD_CFLAGS for arch/x86/boot/startup to resolve the build error. Fixes: 4cecebf200ef ("x86/boot: Move the early GDT/IDT setup code into startup/") Reported-by: Linux Kernel Functional Testing <lkft@linaro.org> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Ard Biesheuvel <ardb@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: llvm@lists.linux.dev Link: https://lore.kernel.org/r/20250414-x86-boot-startup-lto-error-v1-1-7c8bed7c131c@kernel.org Closes: https://lore.kernel.org/CA+G9fYvnun+bhYgtt425LWxzOmj+8Jf3ruKeYxQSx-F6U7aisg@mail.gmail.com/
2025-04-17  powerpc/pseries: Add a char driver for physical-attestation RTAS  (Haren Myneni)
The RTAS call ibm,physical-attestation is used to retrieve information about the trusted boot state of the firmware and hypervisor on the system, as well as Trusted Platform Module (TPM) data if the system is TCG 2.0 compliant. This RTAS interface expects the caller to define command structs such as RetrieveTPMLog, RetrievePlatformCertificate, etc. in a work area with a maximum size of 4K bytes; the response buffer is returned in the same work area. The current implementation of this RTAS function is in user space, but allocation of the work area is restricted under system lockdown. So this patch implements the RTAS function in the kernel and exposes it to user space with open/ioctl/read interfaces. PAPR (2.13+ 21.3 ibm,physical-attestation) defines the RTAS function as follows: - Pass the command struct to obtain the response buffer for the specific command. - This is a sequence-based RTAS call that has to be issued multiple times to get the complete response buffer (max 64K). The hypervisor expects the first RTAS call with sequence 1 and the subsequent calls with the sequence number returned from the previous call. Expose these interfaces to user space with a /dev/papr-physical-attestation character device using the following programming model: int devfd = open("/dev/papr-physical-attestation"); int fd = ioctl(devfd, PAPR_PHY_ATTEST_IOC_HANDLE, struct papr_phy_attest_io_block); - User space defines the command struct and requests the response for any command. - The kernel obtains the complete response buffer and returns it as a blob via the command-specific fd. size = read(fd, buf, len); - The response buffer can be retrieved with one or multiple reads until the end of the blob is reached. Corresponding support for this new kernel ABI has been implemented in the librtas library for system lockdown. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-8-haren@linux.ibm.com
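A minimal user-space sketch of this programming model is shown below. The UAPI header name and the contents of struct papr_phy_attest_io_block are not spelled out in this log, so they are assumptions; only the device path, ioctl name, and open/ioctl/read flow come from the changelog:

  /* Sketch only: header name and struct initialization are assumptions. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <asm/papr-physical-attestation.h>   /* assumed UAPI header name */

  int main(void)
  {
      struct papr_phy_attest_io_block io = { 0 };  /* fill in the command struct here */
      char buf[4096];
      ssize_t n;

      int devfd = open("/dev/papr-physical-attestation", O_RDWR);
      if (devfd < 0) { perror("open"); return 1; }

      /* The kernel runs the ibm,physical-attestation sequence and returns an
       * fd backed by the complete response blob. */
      int fd = ioctl(devfd, PAPR_PHY_ATTEST_IOC_HANDLE, &io);
      if (fd < 0) { perror("ioctl"); return 1; }

      while ((n = read(fd, buf, sizeof(buf))) > 0)
          fwrite(buf, 1, n, stdout);               /* consume the response blob */

      close(fd);
      close(devfd);
      return 0;
  }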
2025-04-17  powerpc/pseries: Add papr-platform-dump character driver for dump retrieval  (Haren Myneni)
The ibm,platform-dump RTAS call, in combination with a writable mapping of /dev/mem, is used to collect a platform dump from the hypervisor and may need multiple calls to get the complete dump. The current implementation uses the rtas_platform_dump() API provided by the librtas library to issue these RTAS calls, but /dev/mem access by user space is prohibited under system lockdown. The solution is to restrict access to the RTAS function in user space and provide kernel interfaces to collect the dump. This patch adds a papr-platform-dump character driver and exposes standard interfaces such as open/ioctl/read to user space in ways that are compatible with lockdown. PAPR (7.3.3.4.1 ibm,platform-dump) provides a method to obtain the complete dump: - Each dump is identified by an ID called the dump tag. - A sequence of RTAS calls has to be issued until the complete dump is retrieved. The hypervisor expects the first RTAS call with sequence 0 and the subsequent calls with the sequence number returned from the previous call. - The hypervisor returns "dump complete" status once the complete dump has been retrieved, but expects one more RTAS call from the partition with a NULL buffer to invalidate the dump, which means the dump will be removed in the hypervisor. - Sequences of calls are allowed with different dump IDs at the same time, but not with the same dump ID. Expose these interfaces to user space with a /dev/papr-platform-dump character device using the following programming model: int devfd = open("/dev/papr-platform-dump", O_RDONLY); int fd = ioctl(devfd, PAPR_PLATFORM_DUMP_IOC_CREATE_HANDLE, &dump_id) - Restricts user space to a single handle per dump ID; typically user space is not expected to request the dump again for the same dump ID. char *buf = malloc(size); length = read(fd, buf, size); - size should be a minimum of 1K based on PAPR and <= 4K based on the RTAS work area size; it is restricted to the RTAS work area size. A 4K work area is used, matching the current implementation in the librtas library. - Each read call issues an RTAS call to get the data based on the requested size and returns the number of bytes returned by the hypervisor. - If the previous call returned "dump complete" status, the next read returns 0, like EOF. ret = ioctl(PAPR_PLATFORM_DUMP_IOC_INVALIDATE, &dump_id) - Issues the RTAS call with a NULL buffer to invalidate the dump. The read API should use the file descriptor obtained from the ioctl for the given dump ID, so that it gets the dump contents for the corresponding dump ID. Support for this new ABI has been implemented in librtas (rtas_platform_dump()) to support system lockdown. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Tested-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-7-haren@linux.ibm.com
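The programming model above can be exercised with a short user-space program along these lines. The UAPI header name and the 64-bit type of dump_id are assumptions, and the log does not say which descriptor takes the invalidate ioctl (the dump fd is assumed here):

  /* Sketch only: assumes the series' UAPI header exposes the ioctl numbers. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <asm/papr-platform-dump.h>          /* assumed UAPI header name */

  int main(int argc, char **argv)
  {
      if (argc < 2)
          return 1;
      uint64_t dump_id = strtoull(argv[1], NULL, 0);  /* assumed 64-bit dump tag */
      char buf[4096];                           /* RTAS work-area sized chunks */
      ssize_t n;

      int devfd = open("/dev/papr-platform-dump", O_RDONLY);
      int fd = ioctl(devfd, PAPR_PLATFORM_DUMP_IOC_CREATE_HANDLE, &dump_id);

      /* read() returns work-area-sized chunks; 0 means "dump complete". */
      while ((n = read(fd, buf, sizeof(buf))) > 0)
          fwrite(buf, 1, n, stdout);

      /* Tell the hypervisor the dump can be invalidated (removed). */
      ioctl(fd, PAPR_PLATFORM_DUMP_IOC_INVALIDATE, &dump_id);

      close(fd);
      close(devfd);
      return 0;
  }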
2025-04-17  powerpc/pseries: Add ibm,get-dynamic-sensor-state RTAS call support  (Haren Myneni)
The RTAS call ibm,get-dynamic-sensor-state is used to get the state of a sensor identified by its location code and sensor token. The librtas library provides an API, rtas_get_dynamic_sensor(), which uses /dev/mem access for work area allocation, but that access is restricted under system lockdown. This patch provides an interface with the new ioctl PAPR_DYNAMIC_SENSOR_IOC_GET in the papr-indices character driver, which executes this RTAS call and copies the sensor state into the user-specified ioctl buffer. Refer to PAPR 7.3.19 ibm,get-dynamic-sensor-state for more information on this RTAS call. - User input parameters to the RTAS call: location code string and the sensor token. Expose these interfaces to user space with a /dev/papr-indices character device using the following programming model: int fd = open("/dev/papr-indices", O_RDWR); int ret = ioctl(fd, PAPR_DYNAMIC_SENSOR_IOC_GET, struct papr_indices_io_block) - User space specifies the input parameters in the papr_indices_io_block struct. - The returned state for the specified sensor is copied to papr_indices_io_block.dynamic_param.state. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Tested-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-6-haren@linux.ibm.com
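A user-space sketch of the sensor query follows. Only dynamic_param.state and the ioctl/device names come from the changelog; the UAPI header name and the token/location-code field names are placeholders:

  /* Sketch only: field names other than dynamic_param.state are placeholders. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <asm/papr-indices.h>                 /* assumed UAPI header name */

  int main(void)
  {
      struct papr_indices_io_block io = { 0 };

      /* Placeholder field names for the sensor token and location code. */
      io.dynamic_param.token = 9001;
      strncpy(io.dynamic_param.location_code_str, "U78DA.ND1.ABC1234-P1",
              sizeof(io.dynamic_param.location_code_str) - 1);

      int fd = open("/dev/papr-indices", O_RDWR);
      if (ioctl(fd, PAPR_DYNAMIC_SENSOR_IOC_GET, &io) == 0)
          printf("sensor state: %u\n", (unsigned int)io.dynamic_param.state);

      close(fd);
      return 0;
  }

The ibm,set-dynamic-indicator ioctl in the next entry follows the same pattern, with PAPR_DYNAMIC_INDICATOR_IOC_SET and the new state as an input instead of an output.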
2025-04-17  powerpc/pseries: Add ibm,set-dynamic-indicator RTAS call support  (Haren Myneni)
The RTAS call ibm,set-dynamic-indicator is used to set a new indicator state identified by a location code. The current implementation uses the rtas_set_dynamic_indicator() API provided by the librtas library, which allocates an RMO buffer and issues this RTAS call from user space, but /dev/mem access by user space is prohibited under system lockdown. This patch provides an interface with the new ioctl PAPR_DYNAMIC_INDICATOR_IOC_SET in the papr-indices character driver and exposes it to user space in a way that is compatible with lockdown. Refer to PAPR 7.3.18 ibm,set-dynamic-indicator for more information on this RTAS call. - User input parameters to the RTAS call: location code string, indicator token and new state. Expose these interfaces to user space with a /dev/papr-indices character device using the following programming model: int fd = open("/dev/papr-indices", O_RDWR); int ret = ioctl(fd, PAPR_DYNAMIC_INDICATOR_IOC_SET, struct papr_indices_io_block) - User space passes the input parameters in the papr_indices_io_block struct. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Tested-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-5-haren@linux.ibm.com
2025-04-17  powerpc/pseries: Add papr-indices char driver for ibm,get-indices  (Haren Myneni)
The RTAS call ibm,get-indices is used to obtain indices and location codes for a specified indicator or sensor token. The current implementation uses the rtas_get_indices() API provided by the librtas library, which allocates an RMO buffer and issues this RTAS call from user space, but a writable mapping of /dev/mem by user space is prohibited under system lockdown. To overcome the restricted access in user space, the kernel provides interfaces to collect the indices data from the hypervisor. This patch adds a papr-indices character driver and exposes standard interfaces such as open/ioctl/read to user space in ways that are compatible with lockdown. PAPR (2.13 7.3.17 ibm,get-indices RTAS Call) describes the following steps to retrieve all indices data: - User input parameters to the RTAS call: sensor or indicator, and indice type. - ibm,get-indices is a sequence-based RTAS call, which means it has to be issued multiple times to get the entire list of indicators or sensors of a particular type. The hypervisor expects the first RTAS call with sequence 1 and the subsequent calls with the sequence number returned from the previous call. - The OS may not interleave calls to ibm,get-indices for different indicator or sensor types, meaning RTAS calls with a different type must not be issued while the previous type's sequence is in progress. So collect the entire list of indices into a buffer BLOB during ioctl() and expose this buffer to user space via a file descriptor. - The hypervisor fills the work area in a specific format but does not return the number of bytes written to the buffer. Instead of parsing the data for each call to determine the data length, copy the work area size (RTAS_GET_INDICES_BUF_SIZE) of data to the buffer and return a work-area-sized chunk to user space for each read() call. Expose these interfaces to user space with a /dev/papr-indices character device using the following programming model: int devfd = open("/dev/papr-indices", O_RDONLY); int fd = ioctl(devfd, PAPR_INDICES_IOC_GET, struct papr_indices_io_block) - Collects all indices data for the specified token into the buffer. char *buf = malloc(RTAS_GET_INDICES_BUF_SIZE); length = read(fd, buf, RTAS_GET_INDICES_BUF_SIZE) - RTAS_GET_INDICES_BUF_SIZE of data is returned to user space. - User space retrieves the indices and their location codes from the buffer. - Multiple read() calls should be issued until the end of the BLOB buffer is reached. The read() should use the file descriptor obtained from the ioctl to get the data exposed by that file descriptor. Support for this new ABI has been implemented in librtas (rtas_get_indices()) for system lockdown. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Tested-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-4-haren@linux.ibm.com
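A sketch of the ioctl-then-read loop described above. The device path, ioctl name, and RTAS_GET_INDICES_BUF_SIZE come from the changelog; the UAPI header name, the fallback buffer size, and the members used to select sensor vs. indicator and the indice type are placeholders:

  /* Sketch only: the io.indices.* member names are placeholders. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <asm/papr-indices.h>                  /* assumed UAPI header name */

  #ifndef RTAS_GET_INDICES_BUF_SIZE
  #define RTAS_GET_INDICES_BUF_SIZE 4096         /* assumed work-area size */
  #endif

  int main(void)
  {
      struct papr_indices_io_block io = { 0 };
      char *buf = malloc(RTAS_GET_INDICES_BUF_SIZE);
      ssize_t n;

      /* Placeholder members: ask for sensor indices of a given type. */
      io.indices.is_sensor = 1;
      io.indices.indice_type = 3;

      int devfd = open("/dev/papr-indices", O_RDONLY);
      int fd = ioctl(devfd, PAPR_INDICES_IOC_GET, &io);  /* collects the whole blob */

      /* Each read() returns a work-area-sized chunk until the blob ends. */
      while ((n = read(fd, buf, RTAS_GET_INDICES_BUF_SIZE)) > 0)
          fwrite(buf, 1, n, stdout);

      free(buf);
      close(fd);
      close(devfd);
      return 0;
  }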
2025-04-17  powerpc/pseries: Define papr_indices_io_block for papr-indices ioctls  (Haren Myneni)
To issue ibm,get-indices, ibm,set-dynamic-indicator and ibm,get-dynamic-sensor-state from user space, an RMO buffer has to be allocated for the work area, which is restricted under system lockdown. So, instead of user space execution, the kernel will provide a /dev/papr-indices interface to execute these RTAS calls. User space fills in the papr_indices_io_block struct depending on the specific RTAS call and passes it to the following ioctls: PAPR_INDICES_IOC_GET: Used for ibm,get-indices. Returns a get-indices handle fd to read data. PAPR_DYNAMIC_SENSOR_IOC_GET: Used for ibm,get-dynamic-sensor-state. Updates the sensor state in papr_indices_io_block.dynamic_param.state. PAPR_DYNAMIC_INDICATOR_IOC_SET: Used for ibm,set-dynamic-indicator. Sets the new state for the input indicator. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Tested-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-3-haren@linux.ibm.com
2025-04-17  powerpc/pseries: Define common functions for RTAS sequence calls  (Haren Myneni)
An RTAS call can be normal, where it retrieves the data from the hypervisor once, or sequence-based, where it has to be issued multiple times until the complete data is obtained. For some of these sequence-based RTAS calls, the OS should not interleave calls with different input until the sequence is completed. The data is collected for each call and copied to a buffer for the entire sequence during the ioctl() handler, and this buffer is then exposed to user space with the read() handler. One such sequence-based RTAS call is ibm,get-vpd, and its support is already included in the current code. To add similar support for other sequence-based calls, move the common functions into a separate file and update the papr_rtas_sequence struct with the following callbacks so that RTAS-call-specific code can be defined and executed to complete the sequence. struct papr_rtas_sequence { int error; void *params; void (*begin) (struct papr_rtas_sequence *); void (*end) (struct papr_rtas_sequence *); const char * (*work) (struct papr_rtas_sequence *, size_t *); }; params: input parameters used for the RTAS call. begin: RTAS-call-specific function to initialize data, including work area allocation. end: RTAS-call-specific function to free up resources (free the work area) after the sequence is completed. work: the actual RTAS-call-specific function which collects the data from the hypervisor. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Tested-by: Sathvika Vasireddy <sv@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416225743.596462-2-haren@linux.ibm.com
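To illustrate how a new sequence-based RTAS call would plug into this struct, here is a hypothetical set of callbacks. The struct definition is copied from the changelog above; the "ibm,foo" call and every function name are invented for the sketch and do not come from the patch:

  #include <stddef.h>

  /* Struct as described in the commit message above. */
  struct papr_rtas_sequence {
      int error;
      void *params;
      void (*begin)(struct papr_rtas_sequence *);
      void (*end)(struct papr_rtas_sequence *);
      const char *(*work)(struct papr_rtas_sequence *, size_t *);
  };

  static void foo_seq_begin(struct papr_rtas_sequence *seq)
  {
      /* Allocate the RTAS work area here; record failures in seq->error. */
      seq->error = 0;
  }

  static void foo_seq_end(struct papr_rtas_sequence *seq)
  {
      /* Free whatever foo_seq_begin() allocated. */
      (void)seq;
  }

  static const char *foo_seq_work(struct papr_rtas_sequence *seq, size_t *len)
  {
      /* Issue one ibm,foo RTAS call for the current sequence number and
       * return a pointer/length for the chunk to append to the blob;
       * return NULL once the hypervisor reports the sequence complete. */
      (void)seq;
      *len = 0;
      return NULL;
  }

  /* Wiring the callbacks together the way a new caller would. */
  static struct papr_rtas_sequence foo_seq = {
      .begin = foo_seq_begin,
      .end   = foo_seq_end,
      .work  = foo_seq_work,
  };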
2025-04-17  powerpc/crc: Include uaccess.h and others  (Herbert Xu)
The powerpc crc code was relying on pagefault_disable() being pulled in by random header files. Fix this by explicitly including uaccess.h. Also add other missing header files to prevent similar problems in future. Reported-by: Eric Biggers <ebiggers@kernel.org> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Fixes: 7ba8df47810f ("asm-generic: Make simd.h more resilient") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16  arm64: dts: qcom: sdm670: add camss and cci  (Richard Acayan)
Add the camera subsystem and CCI used to interface with cameras on the Snapdragon 670. Signed-off-by: Richard Acayan <mailingradian@gmail.com> Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org> Reviewed-by: Vladimir Zapolskiy <vladimir.zapolskiy@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Link: https://lore.kernel.org/r/20250205035013.206890-8-mailingradian@gmail.com Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2025-04-16  Merge tag 'riscv-fixes-6.15-rc3' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/alexghiti/linux into fixes  (Palmer Dabbelt)
riscv fixes for 6.15-rc3:
  - A couple of fixes regarding module relocations
  - Fix a build error by implementing missing alternative macros
  - Another fix for kexec by fixing /proc/iomem
* tag 'riscv-fixes-6.15-rc3' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/alexghiti/linux:
  riscv: Avoid fortify warning in syscall_get_arguments()
  riscv: Provide all alternative macros all the time
  riscv: module: Allocate PLT entries for R_RISCV_PLT32
  riscv: module: Fix out-of-bounds relocation access
  riscv: Properly export reserved regions in /proc/iomem
  riscv: Fix unaligned access info messages
2025-04-16  x86/bugs: Rename mmio_stale_data_clear to cpu_buf_vm_clear  (Pawan Gupta)
The static key mmio_stale_data_clear controls the KVM-only mitigation for MMIO Stale Data vulnerability. Rename it to reflect its purpose. No functional change. Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/20250416-mmio-rename-v2-1-ad1f5488767c@linux.intel.com
2025-04-16  powerpc: enable dynamic preemption  (Shrikanth Hegde)
Once lazy preemption is supported, it is desirable to be able to change the preemption model at runtime. So add support for dynamic preemption using DYNAMIC_KEY. Tested lightly on a Power10 LPAR. Performance numbers indicate that preempt=none (no dynamic) and preempt=none (dynamic) are close:
  cat /sys/kernel/debug/sched/preempt
  (none) voluntary full lazy
  perf stat -e probe:__cond_resched -a sleep 1
  Performance counter stats for 'system wide':
  1,253 probe:__cond_resched
  echo full > /sys/kernel/debug/sched/preempt
  cat /sys/kernel/debug/sched/preempt
  none voluntary (full) lazy
  perf stat -e probe:__cond_resched -a sleep 1
  Performance counter stats for 'system wide':
  0 probe:__cond_resched
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250210184334.567383-2-sshegde@linux.ibm.com
2025-04-16  fadump: Use str_yes_no() helper in fadump_show_config()  (Thorsten Blum)
Remove hard-coded strings by using the str_yes_no() helper function. Reviewed-by: Sourabh Jain <sourabhjain@linux.ibm.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250209081704.2758-2-thorsten.blum@linux.dev
2025-04-16  KVM: powerpc: Enable commented out BUILD_BUG_ON() assertion  (Thorsten Blum)
The BUILD_BUG_ON() assertion was commented out in commit 38634e676992 ("powerpc/kvm: Remove problematic BUILD_BUG_ON statement") and fixed in commit c0a187e12d48 ("KVM: powerpc: Fix BUILD_BUG_ON condition"), but not enabled. Enable it now that this no longer breaks and remove the comment. Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250411084222.6916-1-thorsten.blum@linux.dev
2025-04-16  powerpc: mpic: Use str_enabled_disabled() helper function  (Thorsten Blum)
Remove hard-coded strings by using the str_enabled_disabled() helper function. Use pr_debug() instead of printk(KERN_DEBUG) to silence a checkpatch warning. Reviewed-by: Ricardo B. Marlière <ricardo@marliere.net> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250219112053.3352-2-thorsten.blum@linux.dev
2025-04-16  powerpc/ps3: Use str_write_read() in ps3_notification_read_write()  (Thorsten Blum)
Remove hard-coded strings by using the str_write_read() helper function. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250219111445.2875-2-thorsten.blum@linux.dev
2025-04-16  powerpc/kvm-hv-pmu: Add perf-events for Hostwide counters  (Vaibhav Jain)
Update 'kvm-hv-pmu.c' to add five new perf events mapped to the five Hostwide counters. Since these newly introduced perf events have system-wide scope and can be read from any L1-LPAR CPU, the 'kvmppc_pmu' scope and capabilities are updated appropriately. Also introduce two new helpers. The first is kvmppc_update_l0_stats(), which uses the infrastructure introduced in previous patches to issue the H_GUEST_GET_STATE hcall to L0-PowerVM to fetch the guest-state-buffer holding the latest values of these counters, which is then parsed and the 'l0_stats' variable updated. The second helper is kvmppc_pmu_event_update(), which is called from the 'kvmppc_pmu' callbacks and uses kvmppc_update_l0_stats() to update 'l0_stats' and then update the 'struct perf_event' event counter. Also make some minor updates to kvmppc_pmu_{add, del, read}() to remove debug scaffolding code. Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Reviewed-by: Athira Rajeev <atrajeev@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-7-vaibhav@linux.ibm.com
2025-04-16  powerpc/kvm-hv-pmu: Implement GSB message-ops for hostwide counters  (Vaibhav Jain)
Implement and set up the necessary structures to send a prepopulated Guest-State-Buffer (GSB) requesting hostwide counters to L0-PowerVM and have the returned GSB holding the values of these counters parsed. This is done via the existing GSB implementation and the newly added support for Hostwide elements in the GSB. The request to L0-PowerVM to return Hostwide counters is done using a pre-allocated GSB named 'gsb_l0_stats'. To be able to populate this GSB with the needed Guest-State-Elements (GSIDs), an instance of 'struct kvmppc_gs_msg' named 'gsm_l0_stats' is introduced. The 'gsm_l0_stats' is tied to an instance of 'struct kvmppc_gs_msg_ops' named 'gsb_ops_l0_stats', which holds various callbacks to compute the size ( hostwide_get_size() ), populate the GSB ( hostwide_fill_info() ) and refresh ( hostwide_refresh_info() ) the contents of 'l0_stats', which holds the Hostwide counters returned from L0-PowerVM. To protect these structures from simultaneous access, a spinlock 'lock_l0_stats' has been introduced. The allocation and initialization of the above structures is done in the newly introduced kvmppc_init_hostwide(), and similarly the cleanup is performed in the newly introduced kvmppc_cleanup_hostwide(). Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-6-vaibhav@linux.ibm.com
2025-04-16  kvm powerpc/book3s-apiv2: Introduce kvm-hv specific PMU  (Vaibhav Jain)
Introduce a new PMU named 'kvm-hv' inside a new module named 'kvm-hv-pmu' to report Book3s kvm-hv specific performance counters. This will expose KVM-HV specific performance attributes to user space via the kernel's PMU infrastructure and will enable users to monitor active kvm-hv based guests. The patch creates the necessary scaffolding for the new PMU callbacks and introduces the new kernel module named 'kvm-hv-pmu', which is built with CONFIG_KVM_BOOK3S_HV_PMU. The patch doesn't introduce any perf events yet; those will be introduced in later patches. Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-5-vaibhav@linux.ibm.com
2025-04-16  kvm powerpc/book3s-apiv2: Add kunit tests for Hostwide GSB elements  (Vaibhav Jain)
Update 'test-guest-state-buffer.c' to add two new KUNIT test cases for validating the correctness of changes to the Guest-state-buffer management infrastructure adding support for Hostwide GSB elements. The newly introduced test test_gs_hostwide_msg() checks if the Hostwide elements can be set and parsed from a Guest-state-buffer. The second kunit test test_gs_hostwide_counters() checks if the Hostwide GSB elements can be sent to the L0-PowerVM hypervisor via the H_GUEST_SET_STATE hcall and ensures that the returned guest-state-buffer has all 5 Hostwide stat counters present. Below is the KTAP test report with the newly added KUNIT tests:
  KTAP version 1
  # Subtest: guest_state_buffer_test
  # module: test_guest_state_buffer
  1..7
  ok 1 test_creating_buffer
  ok 2 test_adding_element
  ok 3 test_gs_bitmap
  ok 4 test_gs_parsing
  ok 5 test_gs_msg
  ok 6 test_gs_hostwide_msg
  # test_gs_hostwide_counters: Guest Heap Size=0 bytes
  # test_gs_hostwide_counters: Guest Heap Size Max=10995367936 bytes
  # test_gs_hostwide_counters: Guest Page-table Size=2178304 bytes
  # test_gs_hostwide_counters: Guest Page-table Size Max=2147483648 bytes
  # test_gs_hostwide_counters: Guest Page-table Reclaim Size=0 bytes
  ok 7 test_gs_hostwide_counters
  # guest_state_buffer_test: pass:7 fail:0 skip:0 total:7
  # Totals: pass:7 fail:0 skip:0 total:7
  ok 1 guest_state_buffer_test
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-4-vaibhav@linux.ibm.com
2025-04-16  kvm powerpc/book3s-apiv2: Add support for Hostwide GSB elements  (Vaibhav Jain)
Add support for adding and parsing Hostwide elements to the Guest-state-buffer data structure used in apiv2. These elements are used to share meta-information pertaining to the entire L1-LPAR, and this meta-information is maintained by the L0-PowerVM hypervisor. An example of this is the amount of page-table memory currently used by L0-PowerVM for hosting the Shadow-Pagetables of all active L2-Guests. More of these are documented in the kernel documentation at [1]. The Hostwide GSB elements are currently only supported with the H_GUEST_SET_STATE hcall with a special flag, namely 'KVMPPC_GS_FLAGS_HOST_WIDE'. The patch introduces new defs for the 5 new Hostwide GSB elements, including their GSIDs, as well as a new class of GSB elements, namely 'KVMPPC_GS_CLASS_HOSTWIDE', to indicate them to the GSB construction/parsing infrastructure in 'kvm/guest-state-buffer.c'. Also gs_msg_ops_vcpu_get_size(), kvmppc_gsid_type() and kvmppc_gse_{flatten,unflatten}_iden() are updated to appropriately indicate the needed size for these Hostwide GSB elements as well as how to flatten/unflatten their GSIDs so that they can be marked as available in the GSB bitmap. [1] Documentation/arch/powerpc/kvm-nested.rst Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250416162740.93143-3-vaibhav@linux.ibm.com
2025-04-16  ARM: davinci: remove support for da830  (Bartosz Golaszewski)
We no longer support any boards with the da830 SoC in mainline linux. Let's remove all bits and pieces related to it. Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Reviewed-by: Kevin Hilman <khilman@baylibre.com> Reviewed-by: David Lechner <dlechner@baylibre.com> Link: https://lore.kernel.org/r/20250407-davinci-remove-da830-v1-1-39f803dd5a14@linaro.org Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
2025-04-16  ARM: 9447/1: arm/memremap: fix arch_memremap_can_ram_remap()  (Ross Stutterheim)
arm/memremap: fix arch_memremap_can_ram_remap() commit 260364d112bc ("arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map") added the definition of arch_memremap_can_ram_remap() for arm[64] specific filtering of what pages can be used from the linear mapping. memblock_is_map_memory() was called with the pfn of the address given to arch_memremap_can_ram_remap(); however, memblock_is_map_memory() expects to be given an address for arm, not a pfn. This results in calls to memremap() returning a newly mapped area when it should return an address in the existing linear mapping. Fix this by removing the address to pfn translation and pass the address directly. Fixes: 260364d112bc ("arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map") Signed-off-by: Ross Stutterheim <ross.stutterheim@garmin.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: stable@vger.kernel.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2025-04-16  riscv: defconfig: Remove EXPERT  (Joel Stanley)
The setting of EXPERT is a leftover from when the riscv defconfig was first added. As mentioned in the EXPERT Kconfig help text it is not intended to be set in the usual case. Upon removal a bunch of intrusive debug-related kernel options are no longer set, which is good. A few may want to come back in the future but let those be advocated for on a case by case basis. NAMESPACES, SYSFS_SYSCALL and MEDIA_SUPPORT_FILTER default on and thus fall out of the defconfig. Set VIDEO_CADENCE_CSI2RX=y to ensure VIDEO_CADENCE_CSI2RX stays enabled. Set DEBUG_KERNEL=y in line with other arch defconfigs. This turns on tracing. Signed-off-by: Joel Stanley <joel@jms.id.au> Link: https://lore.kernel.org/r/20250415122832.982610-1-joel@jms.id.au Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  Merge patch series "riscv: Rework the arch_kgdb_breakpoint() implementation"  (Palmer Dabbelt)
WangYuli <wangyuli@uniontech.com> says: 1. The arch_kgdb_breakpoint() function defines the kgdb_compiled_break symbol using inline assembly. There's a potential issue where the compiler might inline arch_kgdb_breakpoint(), which would then define the kgdb_compiled_break symbol multiple times, causing vmlinux.o to fail to link. This isn't merely a potential compilation problem. The intent here is to determine the global symbol address of kgdb_compiled_break, and if this function is inlined multiple times, it would logically be a grave error. 2. Remove ".option norvc/.option rvc" to fix a bug where the C extension would be unconditionally enabled even if the kernel is being built with CONFIG_RISCV_ISA_C=n.
* b4-shazam-merge:
  riscv: KGDB: Remove ".option norvc/.option rvc" for kgdb_compiled_break
  riscv: KGDB: Do not inline arch_kgdb_breakpoint()
Link: https://lore.kernel.org/r/D5A83DF3A06E1DF9+20250411072905.55134-1-wangyuli@uniontech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  riscv: KGDB: Remove ".option norvc/.option rvc" for kgdb_compiled_break  (WangYuli)
[ Quoting Samuel Holland: ] This is a separate issue, but using ".option rvc" here is a bug. It will unconditionally enable the C extension for the rest of the file, even if the kernel is being built with CONFIG_RISCV_ISA_C=n. [ Quoting Palmer Dabbelt: ] We're just looking at the address of kgdb_compiled_break, so it's fine if it ends up as a c.ebreak. [ Quoting Alexandre Ghiti: ] .option norvc is used to prevent the assembler from using compressed instructions, but it's generally used when we need to ensure the size of the instructions that are used, which is not the case here as noted by Palmer since we only care about the address. So yes it will work fine with C enabled :) So let's just remove them all. Link: https://lore.kernel.org/all/4b4187c1-77e5-44b7-885f-d6826723dd9a@sifive.com/ Link: https://lore.kernel.org/all/mhng-69513841-5068-441d-be8f-2aeebdc56a08@palmer-ri-x1c9a/ Link: https://lore.kernel.org/all/23693e7f-4fff-40f3-a437-e06d827278a5@ghiti.fr/ Fixes: fe89bd2be866 ("riscv: Add KGDB support") Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Alexandre Ghiti <alex@ghiti.fr> Signed-off-by: WangYuli <wangyuli@uniontech.com> Link: https://lore.kernel.org/r/8B431C6A4626225C+20250411073222.56820-2-wangyuli@uniontech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  riscv: KGDB: Do not inline arch_kgdb_breakpoint()  (WangYuli)
The arch_kgdb_breakpoint() function defines the kgdb_compiled_break symbol using inline assembly. There's a potential issue where the compiler might inline arch_kgdb_breakpoint(), which would then define the kgdb_compiled_break symbol multiple times, causing vmlinux.o to fail to link. This isn't merely a potential compilation problem. The intent here is to determine the global symbol address of kgdb_compiled_break, and if this function is inlined multiple times, it would logically be a grave error. Link: https://lore.kernel.org/all/4b4187c1-77e5-44b7-885f-d6826723dd9a@sifive.com/ Link: https://lore.kernel.org/all/5b0adf9b-2b22-43fe-ab74-68df94115b9a@ghiti.fr/ Link: https://lore.kernel.org/all/23693e7f-4fff-40f3-a437-e06d827278a5@ghiti.fr/ Fixes: fe89bd2be866 ("riscv: Add KGDB support") Co-developed-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> Signed-off-by: WangYuli <wangyuli@uniontech.com> Link: https://lore.kernel.org/r/F22359AFB6FF9FD8+20250411073222.56820-1-wangyuli@uniontech.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  RISC-V: vDSO: Wire up getrandom() vDSO implementation  (Xi Ruoyao)
Hook up the generic vDSO implementation to the generic vDSO getrandom implementation by providing the required __arch_chacha20_blocks_nostack and getrandom_syscall implementations. Also wire up the selftests. The benchmark result: vdso: 25000000 times in 2.466341333 seconds libc: 25000000 times in 41.447720005 seconds syscall: 25000000 times in 41.043926672 seconds vdso: 25000000 x 256 times in 162.286219353 seconds libc: 25000000 x 256 times in 2953.855018685 seconds syscall: 25000000 x 256 times in 2796.268546000 seconds Signed-off-by: Xi Ruoyao <xry111@xry111.site> Link: https://lore.kernel.org/r/20250411024600.16045-1-xry111@xry111.site Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2025-04-16  powerpc/boot: Check for ld-option support  (Madhavan Srinivasan)
Commit 579aee9fc594 ("powerpc: suppress some linker warnings in recent linker versions") enabled support to add the linker option "--no-warn-rwx-segments" if the linker version is greater than 2.39. Similar build warnings were recently reported with linker version 2.35.2: ld: warning: arch/powerpc/boot/zImage.epapr has a LOAD segment with RWX permissions ld: warning: arch/powerpc/boot/zImage.pseries has a LOAD segment with RWX permissions Fix the warnings by checking for "--no-warn-rwx-segments" option support in the linker to enable it, instead of checking for the version range. Fixes: 579aee9fc594 ("powerpc: suppress some linker warnings in recent linker versions") Reported-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Closes: https://lore.kernel.org/linuxppc-dev/61cf556c-4947-4bd6-af63-892fc0966dad@linux.ibm.com/ Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250401004218.24869-1-maddy@linux.ibm.com
2025-04-16  riscv: Avoid fortify warning in syscall_get_arguments()  (Nathan Chancellor)
When building with CONFIG_FORTIFY_SOURCE=y and W=1, there is a warning because of the memcpy() in syscall_get_arguments(): In file included from include/linux/string.h:392, from include/linux/bitmap.h:13, from include/linux/cpumask.h:12, from arch/riscv/include/asm/processor.h:55, from include/linux/sched.h:13, from kernel/ptrace.c:13: In function 'fortify_memcpy_chk', inlined from 'syscall_get_arguments.isra' at arch/riscv/include/asm/syscall.h:66:2: include/linux/fortify-string.h:580:25: error: call to '__read_overflow2_field' declared with attribute warning: detected read beyond size of field (2nd parameter); maybe use struct_group()? [-Werror=attribute-warning] 580 | __read_overflow2_field(q_size_field, size); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ cc1: all warnings being treated as errors The fortified memcpy() routine enforces that the source is not overread and the destination is not overwritten if the size of either field and the size of the copy are known at compile time. The memcpy() in syscall_get_arguments() intentionally overreads from a1 to a5 in 'struct pt_regs' but this is bigger than the size of a1. Normally, this could be solved by wrapping a1 through a5 with struct_group() but there was already a struct_group() applied to these members in commit bba547810c66 ("riscv: tracing: Fix __write_overflow_field in ftrace_partial_regs()"). Just avoid memcpy() altogether and write the copying of args from regs manually, which clears up the warning at the expense of three extra lines of code. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by: Dmitry V. Levin <ldv@strace.io> Link: https://lore.kernel.org/r/20250409-riscv-avoid-fortify-warning-syscall_get_arguments-v1-1-7853436d4755@kernel.org Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
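Based on the description, the manual copy that replaces the memcpy() presumably looks something like the fragment below. This is a sketch reconstructed from the changelog, not the verbatim patch:

  /* Sketch of syscall_get_arguments() without memcpy(): copy orig_a0 plus
   * a1-a5 individually so fortify-string has exact field sizes to check. */
  static inline void syscall_get_arguments(struct task_struct *task,
                                           struct pt_regs *regs,
                                           unsigned long *args)
  {
      args[0] = regs->orig_a0;
      args[1] = regs->a1;
      args[2] = regs->a2;
      args[3] = regs->a3;
      args[4] = regs->a4;
      args[5] = regs->a5;
  }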
2025-04-16  arm64: dts: mediatek: mt8196: Add pinmux macro header file  (Cathy Xu)
Add the pinctrl header file on MediaTek mt8196. Signed-off-by: Guodong Liu <guodong.liu@mediatek.com> Signed-off-by: Cathy Xu <ot_cathy.xu@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Link: https://lore.kernel.org/r/20250414090215.16091-3-ot_cathy.xu@mediatek.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2025-04-16  arm64: dts: mediatek: Add MT6893 pinmux macro header file  (AngeloGioacchino Del Regno)
Add the required macros for the pinmux nodes of the MT6893 Dimensity 1200 SoC. Link: https://lore.kernel.org/r/20250410144044.476060-4-angelogioacchino.delregno@collabora.com Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
2025-04-16  x86/fpu: Rename fpu_reset_fpregs() to fpu_reset_fpstate_regs()  (Chang S. Bae)
The original function name came from an overly compressed form of 'fpstate_regs' by commit: e61d6310a0f8 ("x86/fpu: Reset permission and fpstate on exec()") However, the term 'fpregs' typically refers to physical FPU registers. In contrast, this function copies the init values to fpu->fpstate->regs, not hardware registers. Rename the function to better reflect what it actually does. No functional change. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-11-chang.seok.bae@intel.com
2025-04-16  x86/fpu: Remove export of mxcsr_feature_mask  (Chang S. Bae)
The variable was previously referenced in KVM code but the last usage was removed by: ea4d6938d4c0 ("x86/fpu: Replace KVMs home brewed FPU copy from user") Remove its export symbol. Reviewed-by: Nikolay Borisov <nik.borisov@suse.com> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-10-chang.seok.bae@intel.com
2025-04-16  x86/pkeys: Simplify PKRU update in signal frame  (Chang S. Bae)
The signal delivery logic was modified to always set the PKRU bit in xregs_state->header->xfeatures by this commit: ae6012d72fa6 ("x86/pkeys: Ensure updated PKRU value is XRSTOR'd") However, the change derives the bitmask value using XGETBV(1), rather than simply updating the buffer that already holds the value. Thus, this approach induces an unnecessary dependency on XGETBV1 for PKRU handling. Eliminate the dependency by using the established helper function. Subsequently, remove the now-unused 'mask' argument. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Aruna Ramakrishna <aruna.ramakrishna@oracle.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Tony W Wang-oc <TonyWWang-oc@zhaoxin.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20250416021720.12305-9-chang.seok.bae@intel.com
2025-04-16  x86/fpu: Refactor xfeature bitmask update code for sigframe XSAVE  (Chang S. Bae)
Currently, when saving register state in the signal frame, the legacy feature bits are always set in xregs_state->header->xfeatures. This code sequence can be generalized for reuse in similar cases. Refactor the logic to ensure a consistent approach across similar usages. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-8-chang.seok.bae@intel.com
2025-04-16  x86/fpu: Log XSAVE disablement consistently  (Chang S. Bae)
Not all paths that lead to fpu__init_disable_system_xstate() currently emit a message indicating that XSAVE has been disabled. Move the print statement into the function to ensure the message is printed in all cases. Suggested-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250416021720.12305-7-chang.seok.bae@intel.com
2025-04-16  x86/fpu/apx: Enable APX state support  (Chang S. Bae)
With APX now secured against a conflicting MPX presence, it is ready to be enabled. Include APX in the enabled xfeature set. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-5-chang.seok.bae@intel.com
2025-04-16  x86/fpu/apx: Disallow conflicting MPX presence  (Chang S. Bae)
XSTATE components are architecturally independent. There is no rule requiring their offsets in the non-compacted format to be strictly ascending or mutually non-overlapping. However, in practice, such overlaps have not occurred -- until now. APX is introduced as xstate component 19, following AMX. In the non-compacted XSAVE format, its offset overlaps with the space previously occupied by the now-deprecated MPX feature: 45fc24e89b7c ("x86/mpx: remove MPX from arch/x86") To prevent conflicts, the kernel must ensure the CPU never exposes both features at the same time. If it does, the hardware is unreliable, and in that case XSAVE should be disabled entirely as a precautionary measure. Add a sanity check to detect this condition and disable XSAVE if an invalid hardware configuration is identified. Note: MPX state components remain enabled on legacy systems solely for KVM guest support. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-4-chang.seok.bae@intel.com
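A sketch of what such a sanity check could look like is shown below. The mask name XFEATURE_MASK_APX, the helper name, and the fpu__init_disable_system_xstate() signature are assumptions here; only the "both present means unreliable, so disable XSAVE" rule comes from the changelog:

  /* Kernel-side sketch (assumed names): refuse XSAVE if the CPU claims both
   * APX (component 19) and the deprecated MPX components, whose
   * non-compacted offsets overlap. */
  static void check_apx_mpx_conflict(u64 xfeatures_supported)
  {
      const u64 mpx_mask = XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR;

      if ((xfeatures_supported & XFEATURE_MASK_APX) &&
          (xfeatures_supported & mpx_mask)) {
          pr_err("x86/fpu: Both APX and MPX enumerated, disabling XSAVE\n");
          fpu__init_disable_system_xstate(0);   /* assumed signature */
      }
  }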
2025-04-16  x86/fpu/apx: Define APX state component  (Chang S. Bae)
Advanced Performance Extensions (APX) is associated with a new state component, number 19. To support saving and restoring of the corresponding registers via the XSAVE mechanism, introduce the component definition along with the necessary sanity checks. Define the new component number, state name, and the register data types. Then, extend the size checker to validate the register data types, and explicitly list the APX feature flag as a dependency for the new component in xsave_cpuid_features[]. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-3-chang.seok.bae@intel.com
2025-04-16  x86/cpufeatures: Add X86_FEATURE_APX  (Chang S. Bae)
Intel Advanced Performance Extensions (APX) introduce a new set of general-purpose registers, managed as an extended state component via the xstate management facility. Before enabling this new xstate, define a feature flag to clarify the dependency in xsave_cpuid_features[]. APX is enumerated under CPUID leaf 7, subleaf 1 (in EDX). Since this CPUID leaf is not yet allocated, place the flag in a scattered feature word. While this feature is intended only for userspace, exposing it via /proc/cpuinfo is unnecessary. Instead, the existing arch_prctl(2) mechanism with the ARCH_GET_XCOMP_SUPP option can be used to query the feature availability. Finally, clarify that APX depends on XSAVE. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20250416021720.12305-2-chang.seok.bae@intel.com
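The query path mentioned above can be exercised from user space along these lines. The APX component number (19) is taken from this series; testing it by bit position in the ARCH_GET_XCOMP_SUPP mask is an assumption of the sketch:

  /* Query the kernel's supported xstate components via arch_prctl(2) and
   * test the APX component bit (19, per this series). */
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef ARCH_GET_XCOMP_SUPP
  #define ARCH_GET_XCOMP_SUPP 0x1021
  #endif

  int main(void)
  {
      uint64_t supported = 0;

      if (syscall(SYS_arch_prctl, ARCH_GET_XCOMP_SUPP, &supported)) {
          perror("arch_prctl(ARCH_GET_XCOMP_SUPP)");
          return 1;
      }

      printf("xstate components: %#llx, APX %ssupported\n",
             (unsigned long long)supported,
             (supported & (1ULL << 19)) ? "" : "not ");
      return 0;
  }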