Age  Commit message  Author
2025-05-04x86/boot: Add a bunch of PIC aliasesArd Biesheuvel
Add aliases for all the data objects that the startup code references - this is needed so that this code can be moved into its own confined area where it can only access symbols that have a __pi_ prefix. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250504095230.2932860-39-ardb+git@google.com
2025-05-04x86/linkage: Add SYM_PIC_ALIAS() macro helper to emit symbol aliasesArd Biesheuvel
Startup code that may execute from the early 1:1 mapping of memory will be confined into its own address space, and only be permitted to access ordinary kernel symbols if this is known to be safe. Introduce a macro helper SYM_PIC_ALIAS() that emits a __pi_ prefixed alias for a symbol, which allows startup code to access it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250504095230.2932860-38-ardb+git@google.com
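For illustration, a minimal sketch of what such a helper could look like, assuming a generic SYM_ALIAS(alias, name, linkage) primitive and the SYM_L_GLOBAL linkage class are already available in the linkage headers (the in-tree definition may differ):

    /* Sketch only: emit a global __pi_ prefixed alias for <sym>. */
    #define SYM_PIC_ALIAS(sym) \
            SYM_ALIAS(__pi_ ## sym, sym, SYM_L_GLOBAL)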
2025-05-04x86/sev: Move instruction decoder into separate source fileArd Biesheuvel
As a first step towards disentangling the SEV #VC handling code -which is shared between the decompressor and the core kernel- from the SEV startup code, move the decompressor's copy of the instruction decoder into a separate source file. Code movement only - no functional change intended. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250504095230.2932860-30-ardb+git@google.com
2025-05-04x86/sev: Make sev_snp_enabled() a static functionArd Biesheuvel
sev_snp_enabled() is no longer used outside of the source file that defines it, so make it static and drop the extern declarations. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250504095230.2932860-29-ardb+git@google.com
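The change itself is mechanical: drop the extern declaration from the header and mark the definition static. A rough sketch (the body shown here is an assumption, not the literal patch):

    /* Was declared extern in a header; now file-local. */
    static bool sev_snp_enabled(void)
    {
            return sev_status & MSR_AMD64_SEV_SNP_ENABLED;
    }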
2025-05-04x86/boot: Disregard __supported_pte_mask in __startup_64()Ard Biesheuvel
__supported_pte_mask is statically initialized to U64_MAX and never assigned until long after the startup code executes that creates the initial page tables. So applying the mask is unnecessary, and can be avoided. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250504095230.2932860-27-ardb+git@google.com
2025-05-04x86/boot: Move early_setup_gdt() back into head64.cArd Biesheuvel
Move early_setup_gdt() out of the startup code that is callable from the 1:1 mapping - this is not needed, and instead, it is better to expose the helper that does reside in __head directly. This reduces the amount of code that needs special checks for 1:1 execution suitability. In particular, it avoids dealing with the GHCB page (and its physical address) in startup code, which runs from the 1:1 mapping, making physical to virtual translations ambiguous. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kees Cook <keescook@chromium.org> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: Len Brown <len.brown@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250504095230.2932860-26-ardb+git@google.com
2025-05-04Merge branch 'x86/urgent' into x86/boot, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-05-04x86/fpu: Shift fpregs_assert_state_consistent() from arch_exit_work() to its callerOleg Nesterov
If CONFIG_X86_DEBUG_FPU=Y, arch_exit_to_user_mode_prepare() calls arch_exit_work() even if ti_work == 0. The only reason is that we want to call fpregs_assert_state_consistent() if TIF_NEED_FPU_LOAD is not set. This looks confusing. arch_exit_to_user_mode_prepare() can just call fpregs_assert_state_consistent() unconditionally; it depends on CONFIG_X86_DEBUG_FPU and checks TIF_NEED_FPU_LOAD itself. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chang S . Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250503143902.GA9012@redhat.com
2025-05-04x86/fpu: Check TIF_NEED_FPU_LOAD instead of PF_KTHREAD|PF_USER_WORKER in fpu__drop()Oleg Nesterov
PF_KTHREAD|PF_USER_WORKER tasks should never clear TIF_NEED_FPU_LOAD, so the TIF_NEED_FPU_LOAD check should equally filter them out. And this way an exiting userspace task can avoid the unnecessary "fwait" if it does context_switch() at least once on its way to exit_thread(). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chang S . Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250503143856.GA9009@redhat.com
2025-05-04x86/fpu: Always use memcpy_and_pad() in arch_dup_task_struct()Oleg Nesterov
It makes no sense to copy the bytes after sizeof(struct task_struct), FPU state will be initialized in fpu_clone(). A plain memcpy(dst, src, sizeof(struct task_struct)) should work too, but "_and_pad" looks safer. [ mingo: Simplify it a bit more. ] Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Chang S . Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250503143850.GA8997@redhat.com
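A hedged sketch of the resulting copy, assuming the destination allocation is sized by arch_task_struct_size and omitting any other per-arch fixups the real function performs:

    int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
    {
            /* Copy the task_struct proper and zero-pad the trailing FPU area;
             * fpu_clone() will initialize the FPU state later. */
            memcpy_and_pad(dst, arch_task_struct_size, src,
                           sizeof(struct task_struct), 0);
            return 0;
    }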
2025-05-04x86/fpu: Remove DEFINE_EVENT(x86_fpu, x86_fpu_copy_src)Oleg Nesterov
trace_x86_fpu_copy_src() has no users after: 22aafe3bcb67 ("x86/fpu: Remove init_task FPU state dependencies, add debugging warning for PF_KTHREAD tasks") Remove the event. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chang S . Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250503143843.GA8989@redhat.com
2025-05-04x86/fpu: Remove x86_init_fpuOleg Nesterov
It is not actually used after: 55bc30f2e34d ("x86/fpu: Remove the thread::fpu pointer") Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chang S . Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250503143837.GA8985@redhat.com
2025-05-04x86/fpu: Simplify the switch_fpu_prepare() + switch_fpu_finish() logicOleg Nesterov
Now that switch_fpu_finish() doesn't load the FPU state, it makes more sense to fold it into switch_fpu_prepare() renamed to switch_fpu(), and more importantly, use the "prev_p" task as a target for TIF_NEED_FPU_LOAD. It doesn't make any sense to delay set_tsk_thread_flag(TIF_NEED_FPU_LOAD) until "prev_p" is scheduled again. There is no worry about the very first context switch, fpu_clone() must always set TIF_NEED_FPU_LOAD. Also, shift the test_tsk_thread_flag(TIF_NEED_FPU_LOAD) from the callers to switch_fpu(). Note that the "PF_KTHREAD | PF_USER_WORKER" check can be removed but this deserves a separate patch which can change more functions, say, kernel_fpu_begin_mask(). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chang S . Bae <chang.seok.bae@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Brian Gerst <brgerst@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20250503143830.GA8982@redhat.com
2025-05-04Merge tag 'v6.15-rc4' into x86/fpu, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-05-04x86/boot/sev: Support memory acceptance in the EFI stub under SVSMArd Biesheuvel
Commit: d54d610243a4 ("x86/boot/sev: Avoid shared GHCB page for early memory acceptance") provided a fix for SEV-SNP memory acceptance from the EFI stub when running at VMPL #0. However, that fix was insufficient for SVSM SEV-SNP guests running at VMPL >0, as those rely on a SVSM calling area, which is a shared buffer whose address is programmed into a SEV-SNP MSR, and the SEV init code that sets up this calling area executes much later during the boot. Given that booting via the EFI stub at VMPL >0 implies that the firmware has configured this calling area already, reuse it for performing memory acceptance in the EFI stub. Fixes: fcd042e86422 ("x86/sev: Perform PVALIDATE using the SVSM when not at VMPL0") Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Co-developed-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: <stable@vger.kernel.org> Cc: Dionna Amalie Glaze <dionnaglaze@google.com> Cc: Kevin Loughlin <kevinloughlin@google.com> Cc: linux-efi@vger.kernel.org Link: https://lore.kernel.org/r/20250428174322.2780170-2-ardb+git@google.com
2025-05-04powerpc/pseries/htmdump: Add documentation for H_HTM debugfs interfaceAthira Rajeev
Add documentation for the HTM (Hardware Trace Macro) debugfs interface and how it can be used to configure/control HTM operations. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-10-atrajeev@linux.ibm.com
2025-05-04powerpc/pseries/htmdump: Add htm capabilities support to htmdump moduleAthira Rajeev
Support dumping HTM capabilities information from the Hardware Trace Macro (HTM) function via the debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmcaps". The interface allows only reads of this file, which present the content of the HTM buffer returned by the hcall. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-9-atrajeev@linux.ibm.com
2025-05-04powerpc/pseries/htmdump: Add htm flags support to htmdump moduleAthira Rajeev
Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmflags". The currently supported flag value enables/disables HTM buffer wrap; wrap is used along with "configure" to prevent the HTM buffer from wrapping. Writing 1 will set noWrap while configuring HTM. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-8-atrajeev@linux.ibm.com
2025-05-04powerpc/pseries/htmdump: Add htm setup support to htmdump moduleAthira Rajeev
Add htm setup support to the htmdump module. To use the HTM (Hardware Trace Macro), an HTM buffer has to be allocated. Support setup of HTM buffers via the debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmsetup". The interface allows setup of the HTM buffer by writing its size, as a power of 2, to the "htmsetup" file. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-7-atrajeev@linux.ibm.com
2025-05-04powerpc/pseries/htmdump: Add htm info support to htmdump moduleAthira Rajeev
Support dumping the system processor configuration from the Hardware Trace Macro (HTM) function via the debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htminfo". The interface allows only reads of this file, which present the content of the HTM buffer returned by the hcall. The 16th offset of the HTM buffer holds the number of entries in the array of processors; use this information to copy the data to the debugfs file. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-6-atrajeev@linux.ibm.com
2025-05-04powerpc/pseries/htmdump: Add htm status support to htmdump moduleAthira Rajeev
Support dumping the status of the Hardware Trace Macro (HTM) function via the debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmstatus". The interface allows only reads of this file, which present the content of the HTM status buffer returned by the hcall. The 16th offset of the HTM status buffer holds the number of HTM entries in the status buffer. Each nest HTM status entry is 0x6 bytes, whereas a core HTM status entry is 0x8 bytes; calculate the number of bytes to read based on this detail. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-5-atrajeev@linux.ibm.com
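As a rough sketch of the size calculation described above (the field width at offset 16 and the helper name are assumptions for illustration, not the module's actual code):

    /* Entry count sits at offset 16 of the status buffer; nest entries
     * are 0x6 bytes each, core entries 0x8 bytes each. */
    static size_t htm_status_data_size(const u8 *buf, bool nest)
    {
            u32 num_entries = *(const u32 *)(buf + 16);

            return num_entries * (nest ? 0x6 : 0x8);
    }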
2025-05-04powerpc/pseries/htmdump: Add htm start support to htmdump moduleAthira Rajeev
Support starting of the Hardware Trace Macro (HTM) function via the debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmstart". The interface allows starting of HTM via this file by writing the value "1", and stopping of HTM tracing by writing the value "0". Any other value returns -EINVAL. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-4-atrajeev@linux.ibm.com
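A hypothetical sketch of how such a debugfs write handler could parse those values; the function names used here (htmstart_write, htm_start, htm_stop) are made up for illustration and are not the actual module code:

    static ssize_t htmstart_write(struct file *filp, const char __user *ubuf,
                                  size_t count, loff_t *ppos)
    {
            unsigned long val;
            int ret;

            ret = kstrtoul_from_user(ubuf, count, 10, &val);
            if (ret)
                    return ret;

            if (val == 1)
                    ret = htm_start();      /* issue the H_HTM start operation */
            else if (val == 0)
                    ret = htm_stop();       /* issue the H_HTM stop operation */
            else
                    return -EINVAL;

            return ret ? ret : count;
    }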
2025-05-04powerpc/pseries/htmdump: Add htm configure support to htmdump moduleAthira Rajeev
Support configuring of the Hardware Trace Macro (HTM) function via the debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmconfigure". The interface allows configuring of HTM via this file by writing the value "1", and deconfiguring of HTM by writing the value "0". Any other value returns -EINVAL. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-3-atrajeev@linux.ibm.com
2025-05-04powerpc/pseries/htmdump: Add htm_hcall_wrapper to integrate other htm operationsAthira Rajeev
H_HTM (Hardware Trace Macro) is a hypervisor call (hcall) to export data from the Hardware Trace Macro (HTM) function. The debugfs interface to export HTM function data in an LPAR currently supports only dumping of HTM data. To add support for setup, configuration and control of the HTM function via the debugfs interface, update the hcall wrapper function. Rename and update htm_get_dump_hardware to htm_hcall_wrapper() so that it can be used for other HTM operations as well; additionally include the parameter "htm_op". Update the htmdump module to check the return code of the hcall in a separate function so that it can be reused for other options too. Add a check to disable the interface in a guest environment. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250420180844.53128-2-atrajeev@linux.ibm.com
2025-05-04powerpc: 8xx/gpio: use new line value setter callbacksBartosz Golaszewski
struct gpio_chip now has callbacks for setting line values that return an integer, allowing them to indicate failures. Convert the driver to using them. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Christophe Leroy <christophe.leroy@csgroup.eu> # powerpc 8xx Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250502-gpiochip-set-rv-powerpc-v2-5-488e43e325bf@linaro.org
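The conversion pattern is roughly the sketch below, assuming the new integer-returning member is named set_rv; the driver-local names here are hypothetical:

    /* The old callback returned void; the new one reports success/failure. */
    static int my_gpio_set(struct gpio_chip *gc, unsigned int offset, int value)
    {
            /* ... write the level to the port data register ... */
            return 0;
    }

    static void my_gpio_chip_init(struct gpio_chip *gc)
    {
            gc->set_rv = my_gpio_set;       /* was: gc->set (void-returning) */
    }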
2025-05-04powerpc: 52xx/gpio: use new line value setter callbacksBartosz Golaszewski
struct gpio_chip now has callbacks for setting line values that return an integer, allowing them to indicate failures. Convert the driver to using them. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250502-gpiochip-set-rv-powerpc-v2-4-488e43e325bf@linaro.org
2025-05-04powerpc: 44x/gpio: use new line value setter callbacksBartosz Golaszewski
struct gpio_chip now has callbacks for setting line values that return an integer, allowing them to indicate failures. Convert the driver to using them. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250502-gpiochip-set-rv-powerpc-v2-3-488e43e325bf@linaro.org
2025-05-04powerpc: 83xx/gpio: use new line value setter callbacksBartosz Golaszewski
struct gpio_chip now has callbacks for setting line values that return an integer, allowing them to indicate failures. Convert the driver to using them. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250502-gpiochip-set-rv-powerpc-v2-2-488e43e325bf@linaro.org
2025-05-04powerpc: sysdev/gpio: use new line value setter callbacksBartosz Golaszewski
struct gpio_chip now has callbacks for setting line values that return an integer, allowing them to indicate failures. Convert the driver to using them. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250502-gpiochip-set-rv-powerpc-v2-1-488e43e325bf@linaro.org
2025-05-03Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linuxLinus Torvalds
Pull arm64 fix from Catalin Marinas: "Add missing sentinels to the arm64 Spectre-BHB MIDR arrays, otherwise is_midr_in_range_list() reads beyond the end of these arrays" * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: errata: Add missing sentinels to Spectre-BHB MIDR arrays
2025-05-03Merge tag 'i2c-for-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linuxLinus Torvalds
Pull i2c fix from Wolfram Sang: - imx-lpi2c: fix clock error handling sequence in probe * tag 'i2c-for-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux: i2c: imx-lpi2c: Fix clock count when probe defers
2025-05-03Merge tag 'sound-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/soundLinus Torvalds
Pull sound fixes from Takashi Iwai: "A bunch of small fixes. Mostly driver specific. - An OOB access fix in core UMP rawmidi conversion code - Fix for ASoC DAPM hw_params widget sequence - Make retry of usb_set_interface() errors for flaky devices - Fix redundant USB MIDI name strings - Quirks for various HP and ASUS models with HD-audio, and Jabra Evolve 65 USB-audio - Cirrus Kunit test fixes - Various fixes for ASoC Intel, stm32, renesas, imx-card, and simple-card" * tag 'sound-6.15-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (30 commits) ASoC: amd: ps: fix for irq handler return status ASoC: simple-card-utils: Fix pointer check in graph_util_parse_link_direction ASoC: intel/sdw_utils: Add volume limit to cs35l56 speakers ASoC: intel/sdw_utils: Add volume limit to cs42l43 speakers ASoC: stm32: sai: add a check on minimal kernel frequency ASoC: stm32: sai: skip useless iterations on kernel rate loop ALSA: hda/realtek - Add more HP laptops which need mute led fixup ALSA: hda/realtek: Fix built-mic regression on other ASUS models ASoC: Intel: catpt: avoid type mismatch in dev_dbg() format ALSA: usb-audio: Fix duplicated name in MIDI substream names ALSA: ump: Fix buffer overflow at UMP SysEx message conversion ALSA: usb-audio: Add second USB ID for Jabra Evolve 65 headset ALSA: hda/realtek: Add quirk for HP Spectre x360 15-df1xxx ALSA: hda: Apply volume control on speaker+lineout for HP EliteStudio AIO ASoC: Intel: bytcr_rt5640: Add DMI quirk for Acer Aspire SW3-013 ASoC: amd: acp: Fix devm_snd_soc_register_card(acp-pdm-mach) failure ASoC: amd: acp: Fix NULL pointer deref in acp_i2s_set_tdm_slot ASoC: amd: acp: Fix NULL pointer deref on acp resume path ASoC: renesas: rz-ssi: Use NOIRQ_SYSTEM_SLEEP_PM_OPS() ASoC: soc-acpi-intel-ptl-match: add empty item to ptl_cs42l43_l3[] ...
2025-05-03futex,selftests: Add another FUTEX2_NUMA selftestPeter Zijlstra
Implement a simple NUMA aware spinlock for testing and howto purposes. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2025-05-03selftests/futex: Add futex_numa_mpolSebastian Andrzej Siewior
Test the basic functionality for the NUMA and MPOL flags: - FUTEX2_NUMA should take the NUMA node which is after the uaddr and use it. - Only update the node if FUTEX_NO_NODE was set by the user - FUTEX2_MPOL should use the memory based on the policy. I attempted to set the node with mbind() and then use this with MPOL but this fails and futex falls back to the default node for the current CPU. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-22-bigeasy@linutronix.de
2025-05-03selftests/futex: Add futex_priv_hashSebastian Andrzej Siewior
Test the basic functionality of the private hash: - Upon start, with no threads there is no private hash. - The first thread initializes the private hash. - More than four threads will increase the size of the private hash if the system has more than 16 CPUs online. - Once the user sets the size of the private hash, auto scaling is disabled. - The user is only allowed to use numbers that are a power of two. - The user may request the global hash or make the hash immutable. - Once the global hash has been set or the hash has been made immutable, further changes are not allowed. - Futex operations should work the whole time. It must be possible to hold a lock, such as a PI-initialised mutex, during the resize operation. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-21-bigeasy@linutronix.de
2025-05-03selftests/futex: Build without headers nonsensePeter Zijlstra
Make it build without relying on recent headers. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2025-05-03tools/perf: Allow to select the number of hash bucketsSebastian Andrzej Siewior
Add the -b/--buckets argument to specify the number of hash buckets for the private futex hash. This is passed directly to prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, buckets, immutable), which must return without an error if specified. The `immutable' argument is 0 by default and can be set to 1 via the -I/--immutable argument. The size of the private hash is verified with PR_FUTEX_HASH_GET_SLOTS. If PR_FUTEX_HASH_GET_SLOTS fails, it is assumed that an older kernel without the support was used and that the global hash is in use. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-20-bigeasy@linutronix.de
2025-05-03tools headers: Synchronize prctl.h ABI headerSebastian Andrzej Siewior
Synchronize prctl.h with current uapi version after adding PR_FUTEX_HASH. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-19-bigeasy@linutronix.de
2025-05-03futex: Implement FUTEX2_MPOLPeter Zijlstra
Extend the futex2 interface to be aware of mempolicy. When FUTEX2_MPOL is specified and there is a MPOL_PREFERRED or home_node specified covering the futex address, use that hash-map. Notably, in this case the futex will go to the global node hashtable, even if it is a PRIVATE futex. When FUTEX2_NUMA|FUTEX2_MPOL is specified and the user specified node value is FUTEX_NO_NODE, the MPOL lookup (as described above) will be tried first before reverting to setting node to the local node. [bigeasy: add CONFIG_FUTEX_MPOL, add MPOL to FUTEX2_VALID_MASK, write the node only to user if FUTEX_NO_NODE was supplied] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-18-bigeasy@linutronix.de
2025-05-03futex: Implement FUTEX2_NUMAPeter Zijlstra
Extend the futex2 interface to be numa aware. When FUTEX2_NUMA is specified for a futex, the user value is extended to two words (of the same size). The first is the user value we all know, the second one will be the node to place this futex on. struct futex_numa_32 { u32 val; u32 node; }; When node is set to ~0, WAIT will set it to the current node_id such that WAKE knows where to find it. If userspace corrupts the node value between WAIT and WAKE, the futex will not be found and no wakeup will happen. When FUTEX2_NUMA is not set, the node is simply an extension of the hash, such that traditional futexes are still interleaved over the nodes. This is done to avoid having to have a separate !numa hash-table. [bigeasy: ensure to have at least hashsize of 4 in futex_init(), add pr_info() for size and allocation information. Cast the naddr math to void*] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-17-bigeasy@linutronix.de
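A small userspace-side illustration of the layout described above; the struct definition is taken from the changelog and nothing further is implied about the syscall invocation itself:

    #include <stdint.h>

    /* Two-word futex value used with FUTEX2_NUMA. */
    struct futex_numa_32 {
            uint32_t val;
            uint32_t node;
    };

    /* Setting node to ~0 lets the WAIT side record the current node id,
     * so the WAKE side knows which node's hash to look in. */
    static struct futex_numa_32 f = { .val = 0, .node = ~0u };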
2025-05-03futex: Allow to make the private hash immutableSebastian Andrzej Siewior
My initial testing showed that: perf bench futex hash reported fewer operations/sec with the private hash. After using the same number of buckets in the private hash as used by the global hash, the operations/sec were about the same. This changed once the private hash became resizable. This feature added an RCU section and reference counting via atomic inc+dec operations into the hot path. The reference counting can be avoided if the private hash is made immutable. Extend PR_FUTEX_HASH_SET_SLOTS by a fourth argument which denotes whether the private hash should be made immutable. Once set (to true), a further resize is not allowed (the same applies if set to the global hash). Add PR_FUTEX_HASH_GET_IMMUTABLE which returns true if the hash cannot be changed. Update the "perf bench" suite. For comparison, results of "perf bench futex hash -s": - Xeon CPU E5-2650, 2 NUMA nodes, total 32 CPUs: - Before introducing the task local hash: shared Averaged 1.487.148 operations/sec (+- 0,53%), total secs = 10 private Averaged 2.192.405 operations/sec (+- 0,07%), total secs = 10 - With the series: shared Averaged 1.326.342 operations/sec (+- 0,41%), total secs = 10 -b128 Averaged 141.394 operations/sec (+- 1,15%), total secs = 10 -Ib128 Averaged 851.490 operations/sec (+- 0,67%), total secs = 10 -b8192 Averaged 131.321 operations/sec (+- 2,13%), total secs = 10 -Ib8192 Averaged 1.923.077 operations/sec (+- 0,61%), total secs = 10 128 is the default allocation of hash buckets. 8192 was the previous amount of allocated hash buckets. - Xeon(R) CPU E7-8890 v3, 4 NUMA nodes, total 144 CPUs: - Before introducing the task local hash: shared Averaged 1.810.936 operations/sec (+- 0,26%), total secs = 20 private Averaged 2.505.801 operations/sec (+- 0,05%), total secs = 20 - With the series: shared Averaged 1.589.002 operations/sec (+- 0,25%), total secs = 20 -b1024 Averaged 42.410 operations/sec (+- 0,20%), total secs = 20 -Ib1024 Averaged 740.638 operations/sec (+- 1,51%), total secs = 20 -b65536 Averaged 48.811 operations/sec (+- 1,35%), total secs = 20 -Ib65536 Averaged 1.963.165 operations/sec (+- 0,18%), total secs = 20 1024 is the default allocation of hash buckets. 65536 was the previous amount of allocated hash buckets. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Shrikanth Hegde <sshegde@linux.ibm.com> Link: https://lore.kernel.org/r/20250416162921.513656-16-bigeasy@linutronix.de
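A minimal usage sketch of the extended interface described above; the PR_FUTEX_HASH* constants are assumed to be provided by a sufficiently new uapi <linux/prctl.h>:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>        /* PR_FUTEX_HASH* (assumed present) */

    int main(void)
    {
            /* Fourth argument = 1 requests an immutable private hash. */
            if (prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, 128L, 1L) != 0)
                    perror("PR_FUTEX_HASH_SET_SLOTS");

            /* Returns true once the hash can no longer be changed. */
            printf("immutable: %d\n",
                   (int)prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_IMMUTABLE));
            return 0;
    }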
2025-05-03futex: Allow to resize the private local hashSebastian Andrzej Siewior
The mm_struct::futex_hash_lock guards the futex_hash_bucket assignment/replacement. The futex_hash_allocate()/PR_FUTEX_HASH_SET_SLOTS operation can now be invoked at runtime and resize an already existing internal private futex_hash_bucket to another size. The reallocation is based on an idea by Thomas Gleixner: The initial allocation of struct futex_private_hash sets the reference count to one. Every user acquires a reference on the local hash before using it and drops it after it has enqueued itself on the hash bucket. There is no reference held while the task is scheduled out while waiting for the wake up. The resize process allocates a new struct futex_private_hash and drops the initial reference. Synchronized with mm_struct::futex_hash_lock, it is checked whether the reference counter for the currently used mm_struct::futex_phash is marked as DEAD. If so, then all users enqueued on the current private hash are requeued on the new private hash and the new private hash is set to mm_struct::futex_phash. Otherwise, the newly allocated private hash is saved as mm_struct::futex_phash_new and the rehashing and reassigning is delayed to the futex_hash() caller once the reference counter is marked DEAD. The replacement is not performed at rcuref_put() time because certain callers, such as futex_wait_queue(), drop their reference after changing the task state. That task state change would be destroyed once the futex_hash_lock is acquired. The user can change the number of slots with PR_FUTEX_HASH_SET_SLOTS multiple times. Both an increase and a decrease are allowed, and the request blocks until the assignment is done. The private hash allocated at thread creation is changed from 16 to 16 <= 4 * number_of_threads <= global_hash_size where number_of_threads cannot exceed the number of online CPUs. Should the user issue PR_FUTEX_HASH_SET_SLOTS, the auto scaling is disabled. [peterz: reorganize the code to avoid state tracking and simplify new object handling, block the user until changes are in effect, allow increase and decrease of the hash]. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-15-bigeasy@linutronix.de
2025-05-03futex: Allow automatic allocation of process wide futex hashSebastian Andrzej Siewior
Allocate a private futex hash with 16 slots if a task forks its first thread. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-14-bigeasy@linutronix.de
2025-05-03futex: Add basic infrastructure for local task local hashSebastian Andrzej Siewior
The futex hash is system wide and shared by all tasks. Each slot is hashed based on the futex address and the VMA of the thread. Due to randomized VMAs (and memory allocations), the same logical lock (pointer) can end up in a different hash bucket on each invocation of the application. This in turn means that different applications may share a hash bucket on the first invocation but not on the second, and it is not always clear which applications will be involved. This can result in high latencies when acquiring the futex_hash_bucket::lock, especially if the lock owner is limited to a CPU and cannot be effectively PI boosted. Introduce basic infrastructure for a process-local hash which is shared by all threads of the process. This hash will only be used for a PROCESS_PRIVATE FUTEX operation. The hashmap can be allocated via: prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, num); A `num' of 0 means that the global hash is used instead of a private hash. Other values for `num' specify the number of slots for the hash, and the number must be a power of two, starting with two. The prctl() returns zero on success. This function can only be used before a thread is created. The current status of the private hash can be queried via: num = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS); which returns the current number of slots. The value 0 means that the global hash is used. Values greater than 0 indicate the number of slots that are used. A negative number indicates an error. For optimisation, the private hash's jhash2() uses only two arguments, the address and the offset; this omits the VMA, which is always the same. [peterz: Use 0 for global hash. A bit shuffling and renaming. ] Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-13-bigeasy@linutronix.de
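For illustration, a minimal sketch of the allocation and query calls described above, again assuming the PR_FUTEX_HASH constants are exported by the uapi headers:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>        /* PR_FUTEX_HASH* (assumed present) */

    int main(void)
    {
            /* Must be called before any thread is created; num must be a
             * power of two, and 0 selects the global hash. */
            if (prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, 16L) != 0)
                    perror("PR_FUTEX_HASH_SET_SLOTS");

            /* 0 = global hash, >0 = number of private slots, <0 = error. */
            printf("slots: %ld\n",
                   (long)prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS));
            return 0;
    }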
2025-05-03futex: Create helper function to initialize a hash slotSebastian Andrzej Siewior
Factor out the futex_hash_bucket initialisation into a helper function. The helper function will be used in a follow-up patch implementing process private hash buckets. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-12-bigeasy@linutronix.de
2025-05-03futex: Introduce futex_q_lockptr_lock()Sebastian Andrzej Siewior
futex_lock_pi() and __fixup_pi_state_owner() acquire the futex_q::lock_ptr without holding a reference, assuming the previously obtained hash bucket and the assigned lock_ptr are still valid. This isn't the case once the private hash can be resized and becomes invalid after the reference drop. Introduce futex_q_lockptr_lock() to lock the hash bucket recorded in futex_q::lock_ptr. The lock pointer is read in an RCU section to ensure that it does not go away if the hash bucket has been replaced and the old pointer has been observed. After locking, the pointer needs to be compared to check if it changed. If so, then the hash bucket has been replaced, the user has been moved to the new one, and lock_ptr has been updated. The lock operation needs to be redone in this case. The locked hash bucket is not returned. A special case is an early return in futex_lock_pi() (due to signal or timeout) and a successful futex_wait_requeue_pi(). In both cases a valid futex_q::lock_ptr is expected (and its matching hash bucket) but since the waiter has been removed from the hash this can no longer be guaranteed. Therefore a reference is acquired before the waiter is removed, which is later dropped by the waiter to avoid a resize. Add futex_q_lockptr_lock() and use it. Acquire an additional reference in requeue_pi_wake_futex() and futex_unlock_pi() while the futex_q is removed, denote this extra reference in futex_q::drop_hb_ref and let the waiter drop the reference in this case. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-11-bigeasy@linutronix.de
2025-05-03futex: Decrease the waiter count before the unlock operationSebastian Andrzej Siewior
To support runtime resizing of the process private hash, it's required to not use the obtained hash bucket once the reference count has been dropped. The reference will be dropped after the unlock of the hash bucket. The amount of waiters is decremented after the unlock operation. There is no requirement that this needs to happen after the unlock. The increment happens before acquiring the lock to signal early that there will be a waiter. The waiter can avoid blocking on the lock if it is known that there will be no waiter. There is no difference in terms of ordering if the decrement happens before or after the unlock. Decrease the waiter count before the unlock operation. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-10-bigeasy@linutronix.de
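Conceptually, and using the waiter-count helper name seen elsewhere in kernel/futex/ as an assumption rather than quoting the actual diff, the reordering looks like:

    /* Before: the waiter count was dropped after releasing the bucket lock. */
    spin_unlock(&hb->lock);
    futex_hb_waiters_dec(hb);

    /* After: drop the waiter count first, so the bucket is no longer
     * touched once its lock (and, later, the hash reference) is released. */
    futex_hb_waiters_dec(hb);
    spin_unlock(&hb->lock);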
2025-05-03futex: Acquire a hash reference in futex_wait_multiple_setup()Sebastian Andrzej Siewior
futex_wait_multiple_setup() changes task_struct::__state to !TASK_RUNNING and then enqueues on multiple futexes. Every futex_q_lock() acquires a reference on the global hash which is dropped later. If a rehash is in progress, the loop will block on mm_struct::futex_hash_bucket for the rehash to complete, and this will lose the previously set task_struct::__state. Acquire a reference on the local hash to avoid blocking on mm_struct::futex_hash_bucket. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-9-bigeasy@linutronix.de
2025-05-03futex: Create private_hash() get/put classPeter Zijlstra
This gets us: fph = futex_private_hash(key) /* gets fph and inc users */ futex_private_hash_get(fph) /* inc users */ futex_private_hash_put(fph) /* dec users */ Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-8-bigeasy@linutronix.de
2025-05-03futex: Create futex_hash() get/put classPeter Zijlstra
This gets us: hb = futex_hash(key) /* gets hb and inc users */ futex_hash_get(hb) /* inc users */ futex_hash_put(hb) /* dec users */ Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20250416162921.513656-7-bigeasy@linutronix.de