path: root/arch/x86/kernel/cpu/intel.c
Age    Commit message    Author
2025-03-25Merge tag 'crc-for-linus' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux Pull CRC updates from Eric Biggers: "Another set of improvements to the kernel's CRC (cyclic redundancy check) code: - Rework the CRC64 library functions to be directly optimized, like what I did last cycle for the CRC32 and CRC-T10DIF library functions - Rewrite the x86 PCLMULQDQ-optimized CRC code, and add VPCLMULQDQ support and acceleration for crc64_be and crc64_nvme - Rewrite the riscv Zbc-optimized CRC code, and add acceleration for crc_t10dif, crc64_be, and crc64_nvme - Remove crc_t10dif and crc64_rocksoft from the crypto API, since they are no longer needed there - Rename crc64_rocksoft to crc64_nvme, as the old name was incorrect - Add kunit test cases for crc64_nvme and crc7 - Eliminate redundant functions for calculating the Castagnoli CRC32, settling on just crc32c() - Remove unnecessary prompts from some of the CRC kconfig options - Further optimize the x86 crc32c code" * tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (36 commits) x86/crc: drop the avx10_256 functions and rename avx10_512 to avx512 lib/crc: remove unnecessary prompt for CONFIG_CRC64 lib/crc: remove unnecessary prompt for CONFIG_LIBCRC32C lib/crc: remove unnecessary prompt for CONFIG_CRC8 lib/crc: remove unnecessary prompt for CONFIG_CRC7 lib/crc: remove unnecessary prompt for CONFIG_CRC4 lib/crc7: unexport crc7_be_syndrome_table lib/crc_kunit.c: update comment in crc_benchmark() lib/crc_kunit.c: add test and benchmark for crc7_be() x86/crc32: optimize tail handling for crc32c short inputs riscv/crc64: add Zbc optimized CRC64 functions riscv/crc-t10dif: add Zbc optimized CRC-T10DIF function riscv/crc32: reimplement the CRC32 functions using new template riscv/crc: add "template" for Zbc optimized CRC functions x86/crc: add ANNOTATE_NOENDBR to suppress objtool warnings x86/crc32: improve crc32c_arch() code generation with clang x86/crc64: implement crc64_be and crc64_nvme using new template x86/crc-t10dif: implement crc_t10dif using new template x86/crc32: implement crc32_le using new template x86/crc: add "template" for [V]PCLMULQDQ based CRC functions ...
2025-03-19x86/cpu/intel: Limit the non-architectural constant_tsc model checksSohil Mehta
X86_FEATURE_CONSTANT_TSC is a Linux-defined, synthesized feature flag. It is used across several vendors. Intel CPUs will set the feature when the architectural CPUID.80000007.EDX[1] bit is set. There are also some Intel CPUs that have the X86_FEATURE_CONSTANT_TSC behavior but don't enumerate it with the architectural bit. Those currently have a model range check. Today, virtually all of the CPUs that have the CPUID bit *also* match the "model >= 0x0e" check. This is confusing. Instead of an open-ended check, pick some models (INTEL_IVYBRIDGE and P4_WILLAMETTE) as the end of the goofy CPUs that should enumerate the bit but don't. These models are a relatively arbitrary but conservative pick. This makes it obvious that later CPUs (like Family 18+) no longer need to synthesize X86_FEATURE_CONSTANT_TSC. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20250219184133.816753-14-sohil.mehta@intel.com
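A minimal sketch of the bounded check for the Family 6 side (the upper bound is named in the commit message; the exact in-tree code and the analogous Family 15 / P4_WILLAMETTE range may differ):

    /*
     * Sketch only, not the literal patch: synthesize the flag for a
     * bounded range of older Family 6 CPUs instead of the open-ended
     * "model >= 0x0e" check.
     */
    if (c->x86_vfm >= INTEL_CORE_YONAH && c->x86_vfm <= INTEL_IVYBRIDGE)
        set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);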
2025-03-19x86/cpu/intel: Fix fast string initialization for extended FamiliesSohil Mehta
X86_FEATURE_REP_GOOD is a Linux-defined feature flag to track whether fast string operations should be used for copy_page(). It is also used as a second alternative for clear_page() if enhanced fast string operations (ERMS) are not available. X86_FEATURE_ERMS is an Intel-specific hardware-defined feature flag that tracks hardware support for enhanced fast strings. It is used to track whether fast strings should be used for similar memory copy and memory clearing operations. On top of these, there is a FAST_STRING enable bit in the IA32_MISC_ENABLE MSR. It is typically controlled by the BIOS to provide a hint to the hardware and the OS on whether fast string operations are preferred. Commit: 161ec53c702c ("x86, mem, intel: Initialize Enhanced REP MOVSB/STOSB") introduced a mechanism to honor the BIOS preference for fast string operations and clear the above feature flags if needed. Unfortunately, the current initialization code for Intel to set and clear these bits is confusing at best and likely incorrect. X86_FEATURE_REP_GOOD is cleared in early_init_intel() if MISC_ENABLE.FAST_STRING is 0. But it gets set later on unconditionally for all Family 6 processors in init_intel(). This not only overrides the BIOS preference but also contradicts the earlier check. Fix this by combining the related checks and always relying on the BIOS-provided preference for fast string operations. This simplification makes sure the upcoming Intel Family 18 and 19 models are covered as well. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250219184133.816753-12-sohil.mehta@intel.com
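A hedged sketch of the combined check (the structure is assumed here, not copied from the tree): read the BIOS hint once and either keep or clear both flags accordingly:

    u64 misc_enable;

    rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
    if (!(misc_enable & MSR_IA32_MISC_ENABLE_FAST_STRING)) {
        /* BIOS says fast strings are not preferred: drop both flags. */
        pr_info("Disabled fast string operations\n");
        setup_clear_cpu_cap(X86_FEATURE_REP_GOOD);
        setup_clear_cpu_cap(X86_FEATURE_ERMS);
    } else {
        /* Honor the BIOS preference for all families, 18 and 19 included. */
        set_cpu_cap(c, X86_FEATURE_REP_GOOD);
    }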
2025-03-19x86/cpu/intel: Replace Family 5 model checks with VFM onesSohil Mehta
Introduce names for some Family 5 models and convert some of the checks to be VFM based. Also, to keep the file sorted by family, move Family 5 to the top of the header file. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20250219184133.816753-8-sohil.mehta@intel.com
2025-03-19x86/cpu/intel: Replace Family 15 checks with VFM onesSohil Mehta
Introduce names for some old Pentium 4 models and replace the x86_model checks with VFM ones. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20250219184133.816753-7-sohil.mehta@intel.com
2025-03-19x86/cpu/intel: Replace early Family 6 checks with VFM onesSohil Mehta
Introduce names for some old Pentium models and replace the x86_model checks with VFM ones. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20250219184133.816753-6-sohil.mehta@intel.com
2025-03-19x86/cpu/intel: Fix the MOVSL alignment preference for extended FamiliesSohil Mehta
The alignment preference for 32-bit MOVSL-based bulk memory moves has been 8 bytes for a long time. However, this preference is only set for Family 6 and 15 processors. Use the same preference for the upcoming Family numbers 18 and 19. Also, use a simpler VFM-based check instead of switching based on Family numbers. Refresh the comment to reflect the new check. Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250219184133.816753-3-sohil.mehta@intel.com
2025-03-04x86/cpu: Remove unused TLB stringsThomas Gleixner
Commit: e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU") added the TLB table for parsing CPUID(0x4), including strings describing them. The string entry in the table was never used. Convert them to comments. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250304085152.51092-10-darwi@linutronix.de
2025-03-04x86/cpu: Simplify TLB entry count storageAhmed S. Darwish
Commit: e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU") introduced u16 "info" arrays for each TLB type back in 2012, and each array stores just one type of information: the number of TLB entries for its respective TLB type. Replace such arrays with simple variables. Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250304085152.51092-8-darwi@linutronix.de
2025-03-04x86/cpu: Use max() for CPUID leaf 0x2 TLB descriptors parsingAhmed S. Darwish
The conditional statement "if (x < y) { x = y; }" appears 22 times in the Intel leaf 0x2 descriptor parsing logic. Replace each such instance with a max() expression to simplify the code. Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250304085152.51092-7-darwi@linutronix.de
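For illustration (variable names are placeholders, not the exact ones in intel.c), the transformation looks like:

    /* Before: open-coded "keep the larger value" update */
    if (tlb_lld_4k < entries)
        tlb_lld_4k = entries;

    /* After: same behavior via max() from <linux/minmax.h> */
    tlb_lld_4k = max(tlb_lld_4k, entries);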
2025-03-04x86/cpu: Remove unnecessary headers and reorder the restAhmed S. Darwish
Remove the headers in intel.c that are no longer required. Alphabetically reorder what remains, since more headers will be included in further commits. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250304085152.51092-6-darwi@linutronix.de
2025-03-04Merge branch 'x86/urgent' into x86/cpu, to pick up dependent commitsIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-03-04x86/cpu: Properly parse CPUID leaf 0x2 TLB descriptor 0x63Ahmed S. Darwish
CPUID leaf 0x2's one-byte TLB descriptors report the number of entries for specific TLB types, among other properties. Typically, each emitted descriptor implies the same number of entries for its respective TLB type(s). An emitted 0x63 descriptor is an exception: it implies 4 data TLB entries for 1GB pages and 32 data TLB entries for 2MB or 4MB pages. For the TLB descriptors parsing code, the entry count for 1GB pages is encoded in the intel_tlb_table[] mapping, but the 2MB/4MB entry count is totally ignored. Update leaf 0x2's parsing logic to account for the 32 data TLB entries for 2MB/4MB pages implied by the 0x63 descriptor. Fixes: e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU") Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@kernel.org Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250304085152.51092-4-darwi@linutronix.de
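An illustrative sketch of the intended handling (the helper and variable names are made up here; the real change lives in the leaf 0x2 descriptor parsing in intel.c):

    /*
     * Descriptor 0x63 implies two separate data-TLB properties, so both
     * must be recorded rather than only the 1GB-page entry count.
     */
    static void note_tlb_descriptor_0x63(u16 *lld_1g, u16 *lld_2m, u16 *lld_4m)
    {
        if (*lld_1g < 4)
            *lld_1g = 4;    /* 4 data TLB entries for 1GB pages */
        if (*lld_2m < 32)
            *lld_2m = 32;   /* 32 data TLB entries for 2MB/4MB pages */
        if (*lld_4m < 32)
            *lld_4m = 32;
    }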
2025-03-04x86/cpu: Validate CPUID leaf 0x2 EDX outputAhmed S. Darwish
CPUID leaf 0x2 emits one-byte descriptors in its four output registers EAX, EBX, ECX, and EDX. For these descriptors to be valid, the most significant bit (MSB) of each register must be clear. Leaf 0x2 parsing at intel.c only validated the MSBs of EAX, EBX, and ECX, but left EDX unchecked. Validate EDX's most-significant bit as well. Fixes: e0ba94f14f74 ("x86/tlb_info: get last level TLB entry number of CPU") Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: stable@kernel.org Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20250304085152.51092-3-darwi@linutronix.de
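A sketch of the validity check applied to all four output registers (the shape is assumed; the in-tree code may differ):

    u32 regs[4];
    int i;

    cpuid(0x2, &regs[0], &regs[1], &regs[2], &regs[3]);

    for (i = 0; i < 4; i++) {
        /* A set MSB means the register carries no valid descriptors. */
        if (regs[i] & BIT(31))
            regs[i] = 0;
    }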
2025-02-27x86/cpu: Remove get_this_hybrid_cpu_*()Pawan Gupta
Calls to get_this_hybrid_cpu_type() and get_this_hybrid_cpu_native_id() are no longer required: cpu-type and native-model-id are now cached at boot in the per-CPU struct cpuinfo_topology. Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/r/20241211-add-cpu-type-v5-4-2ae010f50370@linux.intel.com
2025-02-10x86: move ZMM exclusion list into CPU feature flagEric Biggers
Lift zmm_exclusion_list in aesni-intel_glue.c into the x86 CPU setup code, and add a new x86 CPU feature flag X86_FEATURE_PREFER_YMM that is set when the CPU is on this list. This allows other code in arch/x86/, such as the CRC library code, to apply the same exclusion list when deciding whether to execute 256-bit or 512-bit optimized functions. Note that full AVX512 support including ZMM registers is still exposed to userspace and is still supported for in-kernel use. This flag just indicates whether in-kernel code should prefer to use YMM registers. Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Keith Busch <kbusch@kernel.org> Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com> Link: https://lore.kernel.org/r/20250210174540.161705-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com>
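A sketch of how a consumer (e.g. the CRC code) might use the new flag; the helper below is illustrative, only the feature flag name comes from the commit:

    static bool prefer_zmm(void)
    {
        /* Use 512-bit registers only when the CPU is not on the exclusion list. */
        return boot_cpu_has(X86_FEATURE_AVX512F) &&
               !boot_cpu_has(X86_FEATURE_PREFER_YMM);
    }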
2025-01-21Merge tag 'x86_cpu_for_v6.14_rc1' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 cpuid updates from Borislav Petkov: - Remove the less generic CPU matching infra around struct x86_cpu_desc and use the generic struct x86_cpu_id thing - Remove magic naked numbers for CPUID functions and use proper defines of the prefix CPUID_LEAF_*. Consolidate some of the crazy use around the tree - Smaller cleanups and improvements * tag 'x86_cpu_for_v6.14_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/cpu: Make all all CPUID leaf names consistent x86/fpu: Remove unnecessary CPUID level check x86/fpu: Move CPUID leaf definitions to common code x86/tsc: Remove CPUID "frequency" leaf magic numbers. x86/tsc: Move away from TSC leaf magic numbers x86/cpu: Move TSC CPUID leaf definition x86/cpu: Refresh DCA leaf reading code x86/cpu: Remove unnecessary MwAIT leaf checks x86/cpu: Use MWAIT leaf definition x86/cpu: Move MWAIT leaf definition to common header x86/cpu: Remove 'x86_cpu_desc' infrastructure x86/cpu: Move AMD erratum 1386 table over to 'x86_cpu_id' x86/cpu: Replace PEBS use of 'x86_cpu_desc' use with 'x86_cpu_id' x86/cpu: Expose only stepping min/max interface x86/cpu: Introduce new microcode matching helper x86/cpufeature: Document cpu_feature_enabled() as the default to use x86/paravirt: Remove the WBINVD callback x86/cpufeatures: Free up unused feature bits
2024-12-06x86/cpufeatures: Free up unused feature bitsSohil Mehta
Linux defined feature bits X86_FEATURE_P3 and X86_FEATURE_P4 are not used anywhere. Commit f31d731e4467 ("x86: use X86_FEATURE_NOPL in alternatives") got rid of the last usage in 2008. Remove the related mappings and code. Just like all X86_FEATURE bits, the raw bit numbers can be exposed to userspace via MODULE_DEVICE_TABLE(). There is a very small theoretical chance of userspace getting confused if these bits got reassigned and changed logical meaning. But these bits were never used for a device table, so it's highly unlikely this will ever happen in practice. [ dhansen: clarify userspace visibility of these bits ] Signed-off-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/all/20241107233000.2742619-1-sohil.mehta%40intel.com
2024-12-04x86/cpu: Add Lunar Lake to list of CPUs with a broken MONITOR implementationLen Brown
Under some conditions, MONITOR wakeups on Lunar Lake processors can be lost, resulting in significant user-visible delays. Add Lunar Lake to X86_BUG_MONITOR so that wake_up_idle_cpu() always sends an IPI, avoiding this potential delay. Reported originally here: https://bugzilla.kernel.org/show_bug.cgi?id=219364 [ dhansen: tweak subject ] Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc:stable@vger.kernel.org Link: https://lore.kernel.org/all/a4aa8842a3c3bfdb7fe9807710eef159cbf0e705.1731463305.git.len.brown%40intel.com
2024-11-19Merge tag 'x86-splitlock-2024-11-18' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 splitlock updates from Ingo Molnar: - Move Split and Bus lock code to a dedicated file (Ravi Bangoria) - Add split/bus lock support for AMD (Ravi Bangoria) * tag 'x86-splitlock-2024-11-18' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/bus_lock: Add support for AMD x86/split_lock: Move Split and Bus lock code to a dedicated file
2024-10-07x86/cpu/intel: Define helper to get CPU core native IDDapeng Mi
Define the helper get_this_hybrid_cpu_native_id() to return the CPU core native ID. Combined with the core type, this native ID can be used to uniquely identify the CPU core uarch. Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Tested-by: Yongwei Ma <yongwei.ma@intel.com> Link: https://lkml.kernel.org/r/20240820073853.1974746-3-dapeng1.mi@linux.intel.com
2024-09-03x86/cpu/intel: Replace PAT erratum model/family magic numbers with symbolic IFM referencesDave Hansen
There's an erratum that prevents the PAT from working correctly: https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/pentium-dual-core-specification-update.pdf # Document 316515 Version 010 The kernel currently disables PAT support on those CPUs, but it does it with some magic numbers. Replace the magic numbers with the new "IFM" macros. Make the check refer to the last affected CPU (INTEL_CORE_YONAH) rather than the first fixed one. This makes it easier to find the documentation of the erratum since Intel documents where it is broken and not where it is fixed. I don't think the Pentium Pro (or Pentium II) is actually affected. But the old check included them, so it can't hurt to keep doing the same. I'm also not completely sure about the "Pentium M" CPUs (models 0x9 and 0xd). But, again, they were included in the old checks and were close Pentium III derivatives, so are likely affected. While we're at it, revise the comment to refer to the erratum name and make sure it quotes the language from the actual errata doc. That should make it easier to find in the future when the URL inevitably changes. Why bother with this in the first place? It actually gets rid of one of the very few remaining direct references to c->x86{,_model}. No change in functionality intended. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Len Brown <len.brown@intel.com> Link: https://lore.kernel.org/r/20240829220042.1007820-1-dave.hansen@linux.intel.com
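A sketch of the resulting VFM-based range check (the upper bound is named in the commit message; the lower bound shown here is an assumption):

    /* Erratum workaround: PAT is unreliable up to and including Core Yonah. */
    if (c->x86_vfm >= INTEL_PENTIUM_PRO && c->x86_vfm <= INTEL_CORE_YONAH)
        clear_cpu_cap(c, X86_FEATURE_PAT);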
2024-08-08x86/bus_lock: Add support for AMDRavi Bangoria
Add Bus Lock Detect (called Bus Lock Trap in AMD docs) support for AMD platforms. Bus Lock Detect is enumerated with CPUID Fn0000_0007_ECX_x0 bit [24 / BUSLOCKTRAP]. It can be enabled through MSR_IA32_DEBUGCTLMSR. When enabled, hardware clears DR6[11] and raises a #DB exception on occurrence of Bus Lock if CPL > 0. More detail about the feature can be found in AMD APM[1]. [1]: AMD64 Architecture Programmer's Manual Pub. 40332, Rev. 4.07 - June 2023, Vol 2, 13.1.3.6 Bus Lock Trap https://bugzilla.kernel.org/attachment.cgi?id=304653 Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/all/20240808062937.1149-3-ravi.bangoria@amd.com
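A sketch of the enable path (the helper name is illustrative; the MSR and bit names are from msr-index.h):

    static void enable_bus_lock_detect(void)
    {
        u64 val;

        if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
            return;

        rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
        val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
        wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
        /* From now on, a bus lock at CPL > 0 clears DR6[11] and raises #DB. */
    }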
2024-08-08x86/split_lock: Move Split and Bus lock code to a dedicated fileRavi Bangoria
Bus Lock Detect functionality on AMD platforms works identically to Intel's. Move the split_lock and bus_lock specific code from intel.c to a dedicated file so that it can be compiled and supported on non-Intel platforms. Also, introduce CONFIG_X86_BUS_LOCK_DETECT, make it dependent on CONFIG_CPU_SUP_INTEL, and add a compilation dependency of the new bus_lock.c file on CONFIG_X86_BUS_LOCK_DETECT. Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/all/20240808062937.1149-2-ravi.bangoria@amd.com
2024-07-15Merge tag 'x86_cpu_for_v6.11_rc1' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 cpu model updates from Borislav Petkov: - Flip the logic to add feature names to /proc/cpuinfo to having to explicitly specify the flag if there's a valid reason to show it in /proc/cpuinfo - Switch a bunch of Intel x86 model checking code to the new CPU model defines - Fixes and cleanups * tag 'x86_cpu_for_v6.11_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/cpu/intel: Drop stray FAM6 check with new Intel CPU model defines x86/cpufeatures: Flip the /proc/cpuinfo appearance logic x86/CPU/AMD: Always inline amd_clear_divider() x86/mce/inject: Add missing MODULE_DESCRIPTION() line perf/x86/rapl: Switch to new Intel CPU model defines x86/boot: Switch to new Intel CPU model defines x86/cpu: Switch to new Intel CPU model defines perf/x86/intel: Switch to new Intel CPU model defines x86/virt/tdx: Switch to new Intel CPU model defines x86/PCI: Switch to new Intel CPU model defines x86/cpu/intel: Switch to new Intel CPU model defines x86/platform/intel-mid: Switch to new Intel CPU model defines x86/pconfig: Remove unused MKTME pconfig code x86/cpu: Remove useless work in detect_tme_early()
2024-06-29x86/cpu/intel: Drop stray FAM6 check with new Intel CPU model definesAndrew Cooper
The outer if () should have been dropped when switching to c->x86_vfm. Fixes: 6568fc18c2f6 ("x86/cpu/intel: Switch to new Intel CPU model defines") Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20240529183605.17520-1-andrew.cooper3@citrix.com
2024-05-31x86/topology/intel: Unlock CPUID before evaluating anythingThomas Gleixner
Intel CPUs have a MSR bit to limit CPUID enumeration to leaf two. If this bit is set by the BIOS then CPUID evaluation including topology enumeration does not work correctly as the evaluation code does not try to analyze any leaf greater than two. This went unnoticed before because the original topology code just repeated evaluation several times and managed to overwrite the initial limited information with the correct one later. The new evaluation code does it once and therefore ends up with the limited and wrong information. Cure this by unlocking CPUID right before evaluating anything which depends on the maximum CPUID leaf being greater than two instead of rereading stuff after unlock. Fixes: 22d63660c35e ("x86/cpu: Use common topology code for Intel") Reported-by: Peter Schneider <pschneider1968@googlemail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Peter Schneider <pschneider1968@googlemail.com> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/fd3f73dc-a86f-4bcf-9c60-43556a21eb42@googlemail.com
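A sketch of the unlock step (close in spirit to the existing MISC_ENABLE handling; the point of the fix is where it is called, not the mechanics):

    /*
     * Clear the BIOS-set "limit CPUID to leaf 2" bit before anything that
     * evaluates leaves > 2, then re-read the maximum leaf.
     */
    if (msr_clear_bit(MSR_IA32_MISC_ENABLE,
                      MSR_IA32_MISC_ENABLE_LIMIT_CPUID_BIT) > 0) {
        c->cpuid_level = cpuid_eax(0);
        get_cpu_cap(c);
    }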
2024-05-28x86/cpu/intel: Switch to new Intel CPU model definesTony Luck
New CPU #defines encode vendor and family as well as model. Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20240520224620.9480-29-tony.luck%40intel.com
2024-05-28x86/cpu: Remove useless work in detect_tme_early()Alison Schofield
TME (Total Memory Encryption) and MKTME (Multi-Key Total Memory Encryption) BIOS detection were introduced together here [1] and are loosely coupled in the Intel CPU init code. TME is a hardware only feature and its BIOS status is all that needs to be shared with the kernel user: enabled or disabled. The TME algorithm the BIOS is using and whether or not the kernel recognizes that algorithm is useless to the kernel user. MKTME is a hardware feature that requires kernel support. MKTME detection code was added in advance of broader kernel support for MKTME that never followed. So, rather than continuing to spew needless and confusing messages about BIOS MKTME status, remove most of the MKTME pieces from detect_tme_early(). Keep one useful message: alert the user when BIOS enabled MKTME reduces the available physical address bits. Recovery of the MKTME consumed bits requires a reboot with MKTME disabled in BIOS. There is no functional change for the user, only a change in boot messages. Below is one example when both TME and MKTME are enabled in BIOS with AES_XTS_256 which is unknown to the detect tme code. Before: [] x86/tme: enabled by BIOS [] x86/tme: Unknown policy is active: 0x2 [] x86/mktme: No known encryption algorithm is supported: 0x4 [] x86/mktme: enabled by BIOS [] x86/mktme: 127 KeyIDs available After: [] x86/tme: enabled by BIOS [] x86/mktme: BIOS enable: x86_phys_bits reduced by 8 [1] commit cb06d8e3d020 ("x86/tme: Detect if TME and MKTME is activated by BIOS") Signed-off-by: Alison Schofield <alison.schofield@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Link: https://lore.kernel.org/all/86dfdf6ced8c9b790f9376bf6c7e22b5608f47c2.1715054189.git.alison.schofield%40intel.com
2024-03-26x86/cpu: Clear TME feature flag if TME is not enabled by BIOSBingsong Si
When TME is disabled by BIOS, the dmesg output is: x86/tme: not enabled by BIOS ... and TME functionality is not enabled by the kernel, but the TME feature is still shown in /proc/cpuinfo. Clear it. [ mingo: Clarified changelog ] Signed-off-by: Bingsong Si <sibs@chinatelecom.cn> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: "Huang, Kai" <kai.huang@intel.com> Link: https://lore.kernel.org/r/20240311071938.13247-1-sibs@chinatelecom.cn
2024-02-27Merge branch 'x86/urgent' into x86/apic, to resolve conflictsIngo Molnar
Conflicts: arch/x86/kernel/cpu/common.c arch/x86/kernel/cpu/intel.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-02-26x86/cpu/intel: Detect TME keyid bits before setting MTRR mask registersPaolo Bonzini
MKTME repurposes the high bits of the physical address as a key ID for the encryption key and, even though MAXPHYADDR in CPUID[0x80000008] remains the same, the valid bits in the MTRR mask register are based on the reduced number of physical address bits. detect_tme() in arch/x86/kernel/cpu/intel.c detects TME and subtracts it from the total usable physical bits, but it is called too late. Move the call to early_init_intel() so that it is called in setup_arch(), before MTRRs are set up. This fixes boot on TDX-enabled systems, which until now only worked with "disable_mtrr_cleanup". Without the patch, the values written to the MTRR mask registers were 52-bit wide (e.g. 0x000fffff_80000800) and the writes failed; with the patch, the values are 46-bit wide, which matches the reduced MAXPHYADDR that is shown in /proc/cpuinfo. Reported-by: Zixi Chen <zixchen@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/all/20240131230902.1867092-3-pbonzini%40redhat.com
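A sketch of the address-width adjustment that has to happen before MTRR setup (the function name is illustrative; the real logic lives in the TME detection code):

    static void tme_reduce_phys_bits(struct cpuinfo_x86 *c, int keyid_bits)
    {
        /*
         * The MKTME key ID occupies the top physical address bits, so the
         * usable physical address width shrinks accordingly.  Doing this
         * from early_init_intel() makes the reduced width visible before
         * the MTRR mask registers are programmed.
         */
        if (keyid_bits)
            c->x86_phys_bits -= keyid_bits;
    }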
2024-02-15x86/cpu: Use common topology code for IntelThomas Gleixner
Intel CPUs use either topology leaf 0xb/0x1f evaluation or the legacy SMP/HT evaluation based on CPUID leaf 0x1/0x4. Move it over to the consolidated topology code and remove the random topology hacks which are sprinkled into the Intel and the common code. No functional change intended. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mhklinux@outlook.com> Tested-by: Zhang Rui <rui.zhang@intel.com> Tested-by: Wang Wendy <wendy.wang@intel.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lore.kernel.org/r/20240212153624.893644349@linutronix.de
2023-11-01Merge tag 'sysctl-6.7-rc1' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux Pull sysctl updates from Luis Chamberlain: "To help make the move of sysctls out of kernel/sysctl.c not incur a size penalty sysctl has been changed to allow us to not require the sentinel, the final empty element on the sysctl array. Joel Granados has been doing all this work. On the v6.6 kernel we got the major infrastructure changes required to support this. For v6.7-rc1 we have all arch/ and drivers/ modified to remove the sentinel. Both arch and driver changes have been on linux-next for a bit less than a month. It is worth re-iterating the value: - this helps reduce the overall build time size of the kernel and run time memory consumed by the kernel by about ~64 bytes per array - the extra 64-byte penalty is no longer inncurred now when we move sysctls out from kernel/sysctl.c to their own files For v6.8-rc1 expect removal of all the sentinels and also then the unneeded check for procname == NULL. The last two patches are fixes recently merged by Krister Johansen which allow us again to use softlockup_panic early on boot. This used to work but the alias work broke it. This is useful for folks who want to detect softlockups super early rather than wait and spend money on cloud solutions with nothing but an eventual hung kernel. Although this hadn't gone through linux-next it's also a stable fix, so we might as well roll through the fixes now" * tag 'sysctl-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux: (23 commits) watchdog: move softlockup_panic back to early_param proc: sysctl: prevent aliased sysctls from getting passed to init intel drm: Remove now superfluous sentinel element from ctl_table array Drivers: hv: Remove now superfluous sentinel element from ctl_table array raid: Remove now superfluous sentinel element from ctl_table array fw loader: Remove the now superfluous sentinel element from ctl_table array sgi-xp: Remove the now superfluous sentinel element from ctl_table array vrf: Remove the now superfluous sentinel element from ctl_table array char-misc: Remove the now superfluous sentinel element from ctl_table array infiniband: Remove the now superfluous sentinel element from ctl_table array macintosh: Remove the now superfluous sentinel element from ctl_table array parport: Remove the now superfluous sentinel element from ctl_table array scsi: Remove now superfluous sentinel element from ctl_table array tty: Remove now superfluous sentinel element from ctl_table array xen: Remove now superfluous sentinel element from ctl_table array hpet: Remove now superfluous sentinel element from ctl_table array c-sky: Remove now superfluous sentinel element from ctl_talbe array powerpc: Remove now superfluous sentinel element from ctl_table arrays riscv: Remove now superfluous sentinel element from ctl_table array x86/vdso: Remove now superfluous sentinel element from ctl_table array ...
2023-10-10arch/x86: Remove now superfluous sentinel elem from ctl_table arraysJoel Granados
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels) which will reduce the overall build time size of the kernel and run time memory bloat by ~64 bytes per sentinel (further information Link : https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/) Remove sentinel element from sld_sysctl and itmt_kern_table. This removal is safe because register_sysctl_init and register_sysctl implicitly use the array size in addition to checking for the sentinel. Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> # for x86 Signed-off-by: Joel Granados <j.granados@samsung.com> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2023-10-10x86/cpu: Remove pointless evaluation of x86_coreid_bitsThomas Gleixner
cpuinfo_x86::x86_coreid_bits is only used by the AMD numa topology code. No point in evaluating it on non AMD systems. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Juergen Gross <jgross@suse.com> Tested-by: Sohil Mehta <sohil.mehta@intel.com> Tested-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Zhang Rui <rui.zhang@intel.com> Reviewed-by: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230814085112.687588373@linutronix.de
2023-08-13x86/microcode: Include vendor headers into microcode.hAshok Raj
Currently vendor specific headers are included explicitly when used in common code. Instead, include the vendor specific headers in microcode.h, and include that in all usages. No functional change. Suggested-by: Boris Petkov <bp@alien8.de> Signed-off-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230812195727.776541545@linutronix.de
2023-08-13x86/microcode/intel: Move microcode functions out of cpu/intel.cThomas Gleixner
There is really no point to have that in the CPUID evaluation code. Move it into the Intel-specific microcode handling along with the data structures, defines and helpers required by it. The exports need to stay for IFS. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230812195727.719202319@linutronix.de
2023-04-27Merge tag 'modules-6.4-rc1' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux Pull module updates from Luis Chamberlain: "The summary of the changes for this pull requests is: - Song Liu's new struct module_memory replacement - Nick Alcock's MODULE_LICENSE() removal for non-modules - My cleanups and enhancements to reduce the areas where we vmalloc module memory for duplicates, and the respective debug code which proves the remaining vmalloc pressure comes from userspace. Most of the changes have been in linux-next for quite some time except the minor fixes I made to check if a module was already loaded prior to allocating the final module memory with vmalloc and the respective debug code it introduces to help clarify the issue. Although the functional change is small it is rather safe as it can only *help* reduce vmalloc space for duplicates and is confirmed to fix a bootup issue with over 400 CPUs with KASAN enabled. I don't expect stable kernels to pick up that fix as the cleanups would have also had to have been picked up. Folks on larger CPU systems with modules will want to just upgrade if vmalloc space has been an issue on bootup. Given the size of this request, here's some more elaborate details: The functional change change in this pull request is the very first patch from Song Liu which replaces the 'struct module_layout' with a new 'struct module_memory'. The old data structure tried to put together all types of supported module memory types in one data structure, the new one abstracts the differences in memory types in a module to allow each one to provide their own set of details. This paves the way in the future so we can deal with them in a cleaner way. If you look at changes they also provide a nice cleanup of how we handle these different memory areas in a module. This change has been in linux-next since before the merge window opened for v6.3 so to provide more than a full kernel cycle of testing. It's a good thing as quite a bit of fixes have been found for it. Jason Baron then made dynamic debug a first class citizen module user by using module notifier callbacks to allocate / remove module specific dynamic debug information. Nick Alcock has done quite a bit of work cross-tree to remove module license tags from things which cannot possibly be module at my request so to: a) help him with his longer term tooling goals which require a deterministic evaluation if a piece a symbol code could ever be part of a module or not. But quite recently it is has been made clear that tooling is not the only one that would benefit. Disambiguating symbols also helps efforts such as live patching, kprobes and BPF, but for other reasons and R&D on this area is active with no clear solution in sight. b) help us inch closer to the now generally accepted long term goal of automating all the MODULE_LICENSE() tags from SPDX license tags In so far as a) is concerned, although module license tags are a no-op for non-modules, tools which would want create a mapping of possible modules can only rely on the module license tag after the commit 8b41fc4454e ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"). Nick has been working on this *for years* and AFAICT I was the only one to suggest two alternatives to this approach for tooling. 
The complexity in one of my suggested approaches lies in that we'd need a possible-obj-m and a could-be-module which would check if the object being built is part of any kconfig build which could ever lead to it being part of a module, and if so define a new define -DPOSSIBLE_MODULE [0]. A more obvious yet theoretical approach I've suggested would be to have a tristate in kconfig imply the same new -DPOSSIBLE_MODULE as well but that means getting kconfig symbol names mapping to modules always, and I don't think that's the case today. I am not aware of Nick or anyone exploring either of these options. Quite recently Josh Poimboeuf has pointed out that live patching, kprobes and BPF would benefit from resolving some part of the disambiguation as well but for other reasons. The function granularity KASLR (fgkaslr) patches were mentioned but Joe Lawrence has clarified this effort has been dropped with no clear solution in sight [1]. In the meantime removing module license tags from code which could never be modules is welcomed for both objectives mentioned above. Some developers have also welcomed these changes as it has helped clarify when a module was never possible and they forgot to clean this up, and so you'll see quite a bit of Nick's patches in other pull requests for this merge window. I just picked up the stragglers after rc3. LWN has good coverage on the motivation behind this work [2] and the typical cross-tree issues he ran into along the way. The only concrete blocker issue he ran into was that we should not remove the MODULE_LICENSE() tags from files which have no SPDX tags yet, even if they can never be modules. Nick ended up giving up on his efforts due to having to do this vetting and backlash he ran into from folks who really did *not understand* the core of the issue nor were providing any alternative / guidance. I've gone through his changes and dropped the patches which dropped the module license tags where an SPDX license tag was missing, it only consisted of 11 drivers. To see if a pull request deals with a file which lacks SPDX tags you can just use: ./scripts/spdxcheck.py -f \ $(git diff --name-only commid-id | xargs echo) You'll see a core module file in this pull request for the above, but that's not related to his changes. WE just need to add the SPDX license tag for the kernel/module/kmod.c file in the future but it demonstrates the effectiveness of the script. Most of Nick's changes were spread out through different trees, and I just picked up the slack after rc3 for the last kernel was out. Those changes have been in linux-next for over two weeks. The cleanups, debug code I added and final fix I added for modules were motivated by David Hildenbrand's report of boot failing on a systems with over 400 CPUs when KASAN was enabled due to running out of virtual memory space. Although the functional change only consists of 3 lines in the patch "module: avoid allocation if module is already present and ready", proving that this was the best we can do on the modules side took quite a bit of effort and new debug code. The initial cleanups I did on the modules side of things has been in linux-next since around rc3 of the last kernel, the actual final fix for and debug code however have only been in linux-next for about a week or so but I think it is worth getting that code in for this merge window as it does help fix / prove / evaluate the issues reported with larger number of CPUs. 
Userspace is not yet fixed as it is taking a bit of time for folks to understand the crux of the issue and find a proper resolution. Worst come to worst, I have a kludge-of-concept [3] of how to make kernel_read*() calls for modules unique / converge them, but I'm currently inclined to just see if userspace can fix this instead" Link: https://lore.kernel.org/all/Y/kXDqW+7d71C4wz@bombadil.infradead.org/ [0] Link: https://lkml.kernel.org/r/025f2151-ce7c-5630-9b90-98742c97ac65@redhat.com [1] Link: https://lwn.net/Articles/927569/ [2] Link: https://lkml.kernel.org/r/20230414052840.1994456-3-mcgrof@kernel.org [3] * tag 'modules-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux: (121 commits) module: add debugging auto-load duplicate module support module: stats: fix invalid_mod_bytes typo module: remove use of uninitialized variable len module: fix building stats for 32-bit targets module: stats: include uapi/linux/module.h module: avoid allocation if module is already present and ready module: add debug stats to help identify memory pressure module: extract patient module check into helper modules/kmod: replace implementation with a semaphore Change DEFINE_SEMAPHORE() to take a number argument module: fix kmemleak annotations for non init ELF sections module: Ignore L0 and rename is_arm_mapping_symbol() module: Move is_arm_mapping_symbol() to module_symbol.h module: Sync code of is_arm_mapping_symbol() scripts/gdb: use mem instead of core_layout to get the module address interconnect: remove module-related code interconnect: remove MODULE_LICENSE in non-modules zswap: remove MODULE_LICENSE in non-modules zpool: remove MODULE_LICENSE in non-modules x86/mm/dump_pagetables: remove MODULE_LICENSE in non-modules ...
2023-04-18Change DEFINE_SEMAPHORE() to take a number argumentPeter Zijlstra
Fundamentally semaphores are a counted primitive, but DEFINE_SEMAPHORE() does not expose this and explicitly creates a binary semaphore. Change DEFINE_SEMAPHORE() to take a number argument and use that in the few places that open-coded it using __SEMAPHORE_INITIALIZER(). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> [mcgrof: add some tribal knowledge about why some folks prefer binary semaphores over mutexes] Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
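For illustration ("foo_sem" is a placeholder name):

    #include <linux/semaphore.h>

    /* Previously: DEFINE_SEMAPHORE(foo_sem);  -- implicitly binary.     */
    /* Now the count is explicit; a count of 1 keeps the old behaviour.  */
    static DEFINE_SEMAPHORE(foo_sem, 1);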
2023-03-16x86/split_lock: Enumerate architectural split lock disable bitFenghua Yu
The December 2022 edition of the Intel Instruction Set Extensions manual defined that the split lock disable bit in the IA32_CORE_CAPABILITIES MSR is (and retrospectively always has been) architectural. Remove all the model specific checks except for Ice Lake variants which are still needed because these CPU models do not enumerate presence of the IA32_CORE_CAPABILITIES MSR. Originally-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/lkml/20220701131958.687066-1-fenghua.yu@intel.com/t/#mada243bee0915532a6adef6a9e32d244d1a9aef4
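A sketch of the architectural enumeration path (the Ice Lake model fallbacks mentioned above are omitted; the helper name is illustrative):

    static bool split_lock_detect_supported(void)
    {
        u64 ia32_core_caps;

        if (!boot_cpu_has(X86_FEATURE_CORE_CAPABILITIES))
            return false;

        rdmsrl(MSR_IA32_CORE_CAPS, ia32_core_caps);
        return ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;
    }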
2022-12-13Merge tag 'x86_microcode_for_v6.2' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 microcode and IFS updates from Borislav Petkov: "The IFS (In-Field Scan) stuff goes through tip because the IFS driver uses the same structures and similar functionality as the microcode loader and it made sense to route it all through this branch so that there are no conflicts. - Add support for multiple testing sequences to the Intel In-Field Scan driver in order to be able to run multiple different test patterns. Rework things and remove the BROKEN dependency so that the driver can be enabled (Jithu Joseph) - Remove the subsys interface usage in the microcode loader because it is not really needed - A couple of smaller fixes and cleanups" * tag 'x86_microcode_for_v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits) x86/microcode/intel: Do not retry microcode reloading on the APs x86/microcode/intel: Do not print microcode revision and processor flags platform/x86/intel/ifs: Add missing kernel-doc entry Revert "platform/x86/intel/ifs: Mark as BROKEN" Documentation/ABI: Update IFS ABI doc platform/x86/intel/ifs: Add current_batch sysfs entry platform/x86/intel/ifs: Remove reload sysfs entry platform/x86/intel/ifs: Add metadata validation platform/x86/intel/ifs: Use generic microcode headers and functions platform/x86/intel/ifs: Add metadata support x86/microcode/intel: Use a reserved field for metasize x86/microcode/intel: Add hdr_type to intel_microcode_sanity_check() x86/microcode/intel: Reuse microcode_sanity_check() x86/microcode/intel: Use appropriate type in microcode_sanity_check() x86/microcode/intel: Reuse find_matching_signature() platform/x86/intel/ifs: Remove memory allocation from load path platform/x86/intel/ifs: Remove image loading during init platform/x86/intel/ifs: Return a more appropriate error code platform/x86/intel/ifs: Remove unused selection x86/microcode: Drop struct ucode_cpu_info.valid ...
2022-11-18x86/microcode/intel: Add hdr_type to intel_microcode_sanity_check()Jithu Joseph
IFS test images and microcode blobs use the same header format. Microcode blobs use header type of 1, whereas IFS test images will use header type of 2. In preparation for IFS reusing intel_microcode_sanity_check(), add header type as a parameter for sanity check. [ bp: Touchups. ] Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Link: https://lore.kernel.org/r/20221117035935.4136738-9-jithu.joseph@intel.com
2022-11-18x86/microcode/intel: Reuse microcode_sanity_check()Jithu Joseph
IFS test image carries the same microcode header as regular Intel microcode blobs. Reuse microcode_sanity_check() in the IFS driver to perform sanity check of the IFS test images too. Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20221117035935.4136738-8-jithu.joseph@intel.com
2022-11-18x86/microcode/intel: Reuse find_matching_signature()Jithu Joseph
IFS uses test images provided by Intel that can be regarded as firmware. An IFS test image carries microcode header with an extended signature table. Reuse find_matching_signature() for verifying if the test image header or the extended signature table indicate whether that image is fit to run on a system. No functional changes. Signed-off-by: Jithu Joseph <jithu.joseph@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20221117035935.4136738-6-jithu.joseph@intel.com
2022-11-10x86/split_lock: Add sysctl to control the misery modeGuilherme G. Piccoli
Commit b041b525dab9 ("x86/split_lock: Make life miserable for split lockers") changed the way the split lock detector works when in "warn" mode; basically, it not only shows the warn message, but also intentionally introduces a slowdown through a sleeping-plus-serialization mechanism on such tasks. Based on discussions in [0], it seems the warning alone wasn't enough motivation for userspace developers to fix their applications. This slowdown is enough to totally break some proprietary (aka. unfixable) userspace[1]. It happens that originally the proposal in [0] was to add a new mode which would warn + slow down the "split locking" task, keeping the old warn mode untouched. In the end, that idea was discarded and the regular/default "warn" mode now slows down the applications. This is quite aggressive with regard to proprietary/legacy programs that basically are unable to run properly on a kernel with this change. While it is understandable that a malicious application could DoS by split locking, it seems unacceptable to regress old/proprietary userspace programs through a default configuration that previously worked. An example of such breakage was reported in [1]. Add a sysctl to allow controlling the "misery mode" behavior, as per Thomas' suggestion in [2]. This way, users running legacy and/or proprietary software are still allowed to execute it with decent performance while still observing the warning messages in the kernel log. [0] https://lore.kernel.org/lkml/20220217012721.9694-1-tony.luck@intel.com/ [1] https://github.com/doitsujin/dxvk/issues/2938 [2] https://lore.kernel.org/lkml/87pmf4bter.ffs@tglx/ [ dhansen: minor changelog tweaks, including clarifying the actual problem ] Fixes: b041b525dab9 ("x86/split_lock: Make life miserable for split lockers") Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Tony Luck <tony.luck@intel.com> Tested-by: Andre Almeida <andrealmeid@igalia.com> Link: https://lore.kernel.org/all/20221024200254.635256-1-gpiccoli%40igalia.com
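A sketch of the knob (the entry contents follow the commit description; handler details are assumptions), plus how it would be toggled from userspace:

    static unsigned int sysctl_sld_mitigate = 1;    /* 1 = warn + slow down, 0 = warn only */

    static struct ctl_table sld_sysctl[] = {
        {
            .procname       = "split_lock_mitigate",
            .data           = &sysctl_sld_mitigate,
            .maxlen         = sizeof(unsigned int),
            .mode           = 0644,
            .proc_handler   = proc_douintvec_minmax,
            .extra1         = SYSCTL_ZERO,
            .extra2         = SYSCTL_ONE,
        },
        {}
    };

    /* Userspace: sysctl kernel.split_lock_mitigate=0 */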
2022-11-02x86/microcode: Drop struct ucode_cpu_info.validBorislav Petkov
It is not needed anymore. Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Link: https://lore.kernel.org/r/20221028142638.28498-6-bp@alien8.de
2022-08-02x86/bus_lock: Don't assume the init value of DEBUGCTLMSR.BUS_LOCK_DETECT to be zeroChenyi Qiang
It's possible that this kernel has been kexec'd from a kernel that enabled bus lock detection, or (hypothetically) BIOS/firmware has set DEBUGCTLMSR_BUS_LOCK_DETECT. Disable bus lock detection explicitly if not wanted. Fixes: ebb1064e7c2e ("x86/traps: Handle #DB for bus lock") Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20220802033206.21333-1-chenyi.qiang@intel.com
2022-08-01Merge tag 'x86_cpu_for_v6.0_rc1' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 cpu updates from Borislav Petkov: - Remove the vendor check when selecting MWAIT as the default idle state - Respect idle=nomwait when supplied on the kernel cmdline - Two small cleanups * tag 'x86_cpu_for_v6.0_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/cpu: Use MSR_IA32_MISC_ENABLE constants x86: Fix comment for X86_FEATURE_ZEN x86: Remove vendor checks from prefer_mwait_c1_over_halt x86: Handle idle=nomwait cmdline properly for x86_idle
2022-07-19x86/cpu: Use MSR_IA32_MISC_ENABLE constantsPaolo Bonzini
Instead of the magic numbers 1<<11 and 1<<12 use the constants from msr-index.h. This makes it obvious where those bits of MSR_IA32_MISC_ENABLE are consumed (and in fact that Linux consumes them at all) to simple minds that grep for MSR_IA32_MISC_ENABLE_.*_UNAVAIL. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220719174714.2410374-1-pbonzini@redhat.com
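For illustration, the kind of replacement this enables (the feature-flag consumers shown are assumptions based on the *_UNAVAIL constants in msr-index.h):

    unsigned int l1, l2;

    rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
    if (!(l1 & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL))     /* was: 1 << 11 */
        set_cpu_cap(c, X86_FEATURE_BTS);
    if (!(l1 & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL))    /* was: 1 << 12 */
        set_cpu_cap(c, X86_FEATURE_PEBS);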