path: root/arch
2025-02-26  powerpc/io: Rename _insw_ns() etc. (Michael Ellerman)
The "_ns" suffix was "historical" in 2006, finally remove it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-17-mpe@ellerman.id.au
2025-02-26  powerpc/io: Use generic raw accessors (Michael Ellerman)
The raw accessors are identical to the generic versions, use the generic versions. Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-16-mpe@ellerman.id.au
2025-02-26  powerpc/io: Spell-out PCI_IO_ADDR (Michael Ellerman)
PCI_IO_ADDR is a ppc-ism, which obscures the fact that some of the powerpc accessors are identical to the generic ones. So remove it and spell out the type fully. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-15-mpe@ellerman.id.au
2025-02-26  powerpc/io: Wrap port calculation in a macro (Michael Ellerman)
The calculation of the IO port is repeated several times, wrap it in a macro, in particular to avoid spelling out the cast multiple times. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-14-mpe@ellerman.id.au
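To illustrate the kind of cleanup described above, a minimal sketch (the macro and accessor names here are illustrative, not the actual arch/powerpc code; only _IO_BASE, readb() and writeb() are real kernel symbols):

    /* Hypothetical helper: do the cast and base-offset math in one place. */
    #define EXAMPLE_PCI_IO_ADDR(port) \
            ((volatile void __iomem *)(_IO_BASE + (port)))

    static inline u8 example_inb(unsigned long port)
    {
            return readb(EXAMPLE_PCI_IO_ADDR(port));   /* one cast, written once */
    }

    static inline void example_outb(u8 val, unsigned long port)
    {
            writeb(val, EXAMPLE_PCI_IO_ADDR(port));
    }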
2025-02-26  powerpc/io: Remove unnecessary indirection (Michael Ellerman)
Some of the __do_xxx() defines do nothing useful, they just existed to make the previous hooking macros work. So remove them. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-13-mpe@ellerman.id.au
2025-02-26  powerpc/io: Unhook MMIO accessors (Michael Ellerman)
Now that PPC_INDIRECT_MMIO is removed, it's not possible/necessary to hook any of the "memory" accessors, so turn them back into regular static inlines, and restrict the hooking mechanism to the "pio" accessors only. Move the #defines that signal each routine is implemented next to the implementation, and update the out-of-date comment. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-12-mpe@ellerman.id.au
2025-02-26  powerpc/io: Remove PCI_FIX_ADDR (Michael Ellerman)
Now that PPC_INDIRECT_MMIO is removed, PCI_FIX_ADDR does nothing, so remove it. Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-11-mpe@ellerman.id.au
2025-02-26  powerpc/io: Remove PPC_INDIRECT_MMIO (Michael Ellerman)
The Cell blade support was the last user of PPC_INDIRECT_MMIO, so it can now be removed. PPC_INDIRECT_PIO is still used by Power8 powernv, so it needs to remain. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-10-mpe@ellerman.id.au
2025-02-26  powerpc/io: Remove PPC_IO_WORKAROUNDS (Michael Ellerman)
The Cell blade support was the last user of PPC_IO_WORKAROUNDS, so they can now be removed. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-9-mpe@ellerman.id.au
2025-02-26  powerpc: Remove PPC_OF_PLATFORM_PCI (Michael Ellerman)
The Cell blade support was the last user of PPC_OF_PLATFORM_PCI, so remove it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-8-mpe@ellerman.id.au
2025-02-26  powerpc: Remove DCR_MMIO and the DCR generic layer (Michael Ellerman)
The Cell blade support was the last user of DCR_MMIO, so it can now be removed. That only leaves DCR_NATIVE, meaning the DCR generic layer which allows using either DCR_NATIVE or DCR_MMIO is also unnecessary, remove it too. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-7-mpe@ellerman.id.au
2025-02-26  powerpc/xmon: Remove SPU debug and disassembly (Michael Ellerman)
Now that the IBM Cell Blade support is removed, the xmon SPU support is effectively unusable. That is because PS3 doesn't implement udbg_getc which is required to send input to xmon. So remove the xmon SPU support. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-6-mpe@ellerman.id.au
2025-02-26  powerpc/cell: Remove CBE_CPUFREQ_SPU_GOVERNOR (Michael Ellerman)
Although this driver is still buildable, it can't actually do anything in practice now that the low-level cpufreq driver for Cell has been disabled due to the removal of CBE_RAS. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-5-mpe@ellerman.id.au
2025-02-26  powerpc: Remove IBM_CELL_BLADE & SPIDER_NET references (Michael Ellerman)
The symbols are no longer selectable so remove references to them. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-4-mpe@ellerman.id.au
2025-02-26  powerpc: Remove PPC_PMI and driver (Michael Ellerman)
CONFIG_PPC_PMI is no longer selectable now that PPC_IBM_CELL_BLADE has been removed, via the dependency on PPC_IBM_CELL_POWERBUTTON. So remove it and the driver, and the pmi.h header which it was the only user of. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-3-mpe@ellerman.id.au
2025-02-26  powerpc: Remove some Cell leftovers (Michael Ellerman)
Now that CONFIG_PPC_CELL_NATIVE is removed, iommu_fixed_is_weak will always be false, so remove it entirely. Also remove a hack/quirk in the HTAB code that was only used on Cell. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-2-mpe@ellerman.id.au
2025-02-26  powerpc/cell: Remove support for IBM Cell Blades (Michael Ellerman)
IBM Cell Blades used the Cell processor and the "blade" server form factor. They were sold as models QS20, QS21 & QS22 from roughly 2006 to 2012 [1]. They were used in a few supercomputers (eg. Roadrunner) that have since been dismantled, and were not that widely used otherwise. Until recently I still had a working QS22, which meant I was able to keep the platform support working, but unfortunately that machine has now died. I'm not aware of any users. If there is a user that wants to keep the upstream support working, we can look at bringing some of the code back as appropriate. See previous discussion at [2]. Remove the top-level config symbol PPC_IBM_CELL_BLADE, and then the dependent symbols PPC_CELL_NATIVE, PPC_CELL_COMMON, CBE_RAS, PPC_IBM_CELL_RESETBUTTON, PPC_IBM_CELL_POWERBUTTON, CBE_THERM, and AXON_MSI. Then remove the associated C files and headers, and trim unused header content (some is shared with PS3). Note that PPC_CELL_COMMON sounds like it would build code shared with PS3, but it does not. It's a relic from when code was shared between the Blade support and QPACE support. Most of the primary authors already have CREDITS entries, with the exception of Christian, so add one for him. [1]: https://www.theregister.com/2011/06/28/ibm_kills_qs22_blade [2]: https://lore.kernel.org/linuxppc-dev/60581044-df82-40ad-b94c-56468007a93e@app.fastmail.com Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Acked-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-1-mpe@ellerman.id.au
2025-02-26  powerpc/static_call: Implement inline static calls (Christophe Leroy)
Implement inline static calls:
- Put a 'bl' to the destination function ('b' if tail call)
- Put a 'nop' when the destination function is NULL ('blr' if tail call)
- Put a 'li r3,0' when the destination is the RET0 function and not a tail call.
If the destination is too far (over the 32MB limit), go via the trampoline. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/3dbd0b2ba577c942729235d0211d04a406653d81.1733245362.git.christophe.leroy@csgroup.eu
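A rough sketch of the instruction-selection logic described above (illustrative only: the enum and function below are made up, and the real patch emits actual PowerPC instructions via the code-patching helpers; is_offset_in_branch_range() is the real range check):

    /* What to patch at an inline static call site. */
    enum site_insn { INSN_BL, INSN_B, INSN_NOP, INSN_BLR, INSN_LI_R3_0, INSN_CALL_TRAMP };

    static enum site_insn pick_site_insn(unsigned long site, unsigned long func,
                                         bool tail, bool is_ret0)
    {
            if (!func)
                    return tail ? INSN_BLR : INSN_NOP;      /* NULL target */

            if (is_ret0 && !tail)
                    return INSN_LI_R3_0;                    /* "return 0" target */

            if (!is_offset_in_branch_range(func - site))
                    return INSN_CALL_TRAMP;                 /* too far: go via the trampoline */

            return tail ? INSN_B : INSN_BL;                 /* direct (tail) call */
    }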
2025-02-26  powerpc: Prepare arch_static_call_transform() for supporting inline static calls (Christophe Leroy)
Reorganise arch_static_call_transform() in order to ease the support of inline static calls in the following patch:
- Remove 'target' so as not to hide whether it is a 'return 0' or not.
- Don't bail out if 'tramp' is NULL, just do nothing until the next patch.
Note that when 'target' was 'tramp + PPC_SCT_RET0', is_short was perforce true, so in the 'if (func && !is_short)' leg target was perforce equal to 'func'. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/7a8b9245e773307c315c2548a4c6cad570ac2648.1733245362.git.christophe.leroy@csgroup.eu
2025-02-26  static_call_inline: Provide trampoline address when updating sites (Christophe Leroy)
In preparation of support of inline static calls on powerpc, provide trampoline address when updating sites, so that when the destination function is too far for a direct function call, the call site is patched with a call to the trampoline. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/5efe0cffc38d6f69b1ec13988a99f1acff551abf.1733245362.git.christophe.leroy@csgroup.eu
2025-02-24  arch/powerpc: Remove unused function icp_native_cause_ipi_rm() (Gautam Menghani)
Remove icp_native_cause_ipi_rm() as it has had no callers since commit 53af3ba2e819 ("KVM: PPC: Book3S HV: Allow guest exit path to have MMU on"). Signed-off-by: Gautam Menghani <gautam@linux.ibm.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250101134251.436679-1-gautam@linux.ibm.com
2025-02-24  powerpc/time: Define div128_by_32() static and __init (Christophe Leroy)
div128_by_32() used to be called from outside time.c in the old days, but since v2.6.15 it hasn't been used outside time.c:

$ git grep div128_by_32 v2.6.14
v2.6.14:arch/ppc64/kernel/iSeries_setup.c: div128_by_32(1024 * 1024, 0, tb_ticks_per_sec, &divres);
v2.6.14:arch/ppc64/kernel/pmac_time.c: div128_by_32( 1024*1024, 0, tb_ticks_per_sec, &divres );
v2.6.14:arch/ppc64/kernel/time.c: div128_by_32( XSEC_PER_SEC, 0, tb_ticks_per_sec, &divres );
v2.6.14:arch/ppc64/kernel/time.c: div128_by_32(1024*1024, 0, tb_ticks_per_sec, &divres);
v2.6.14:arch/ppc64/kernel/time.c: div128_by_32(1000000000, 0, tb_ticks_per_sec, &res);
v2.6.14:arch/ppc64/kernel/time.c: div128_by_32( 1024*1024, 0, new_tb_ticks_per_sec, &divres );
v2.6.14:arch/ppc64/kernel/time.c:void div128_by_32( unsigned long dividend_high, unsigned long dividend_low,
v2.6.14:include/asm-ppc64/time.h:void div128_by_32( unsigned long dividend_high, unsigned long dividend_low,

$ git grep div128_by_32 v2.6.15
v2.6.15:arch/powerpc/kernel/time.c: div128_by_32( XSEC_PER_SEC, 0, tb_ticks_per_sec, &divres );
v2.6.15:arch/powerpc/kernel/time.c: div128_by_32(1024*1024, 0, tb_ticks_per_sec, &res);
v2.6.15:arch/powerpc/kernel/time.c: div128_by_32(1000000000, 0, tb_ticks_per_sec, &res);
v2.6.15:arch/powerpc/kernel/time.c: div128_by_32(1024*1024, 0, new_tb_ticks_per_sec, &divres);
v2.6.15:arch/powerpc/kernel/time.c:void div128_by_32(u64 dividend_high, u64 dividend_low,
v2.6.15:include/asm-powerpc/time.h:extern void div128_by_32(u64 dividend_high, u64 dividend_low,

Move it above its only caller, time_init(), and define it static and __init. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/50810349bf1eee378fbeab72a36e4b6553a60c3d.1738749246.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/ipic: Stop printing address of registers (Christophe Leroy)
The following line appears at boot:

IPIC (128 IRQ sources) at (ptrval)

This is pointless, so remove the printing of the virtual address and replace it with the matching physical address. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/ecffb21d88405f99e7ffc906a733396c57c36d50.1736405302.git.christophe.leroy@csgroup.eu
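Schematically, the change amounts to the following (a sketch; the variable names and exact message format are illustrative, and the physical address comes from the device-tree resource):

    /* Before: %p hashes kernel pointers, so the line prints "(ptrval)". */
    printk(KERN_INFO "IPIC (%d IRQ sources) at %p\n", nr_srcs, ipic_regs);

    /* After: print the physical address of the register block instead. */
    printk(KERN_INFO "IPIC (%d IRQ sources) at MMIO %pa\n", nr_srcs, &res.start);

%pa expects a pointer to a phys_addr_t/resource_size_t, which is why the resource start is passed by address.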
2025-02-24  powerpc/32: Stop printing Kernel virtual memory layout (Christophe Leroy)
Printing of the kernel virtual memory layout was added for debug purposes by commit f637a49e507c ("powerpc: Minor cleanups of kernel virt address space definitions"). For security reasons, don't display the kernel's virtual memory layout. Other architectures have removed it through the following commits:

Commit 071929dbdd86 ("arm64: Stop printing the virtual memory layout")
Commit 1c31d4e96b8c ("ARM: 8820/1: mm: Stop printing the virtual memory layout")
Commit 31833332f798 ("m68k/mm: Stop printing the virtual memory layout")
Commit fd8d0ca25631 ("parisc: Hide virtual kernel memory layout")
Commit 681ff0181bbf ("x86/mm/init/32: Stop printing the virtual memory layout")

Commit 681ff0181bbf ("x86/mm/init/32: Stop printing the virtual memory layout") thought x86 was the last one, but in reality powerpc/32 still had it. So remove it now on powerpc/32 as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Kees Cook <kees@kernel.org> [Maddy: Added "Commit" in commit message to avoid checkpatch error] Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/430bc8c1f2ff2eb9224b04450e22db472b0b9fa9.1736361630.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/vmlinux: Remove etext, edata and end (Christophe Leroy)
etext is not used anymore since commit 843a1ffaf6f2 ("powerpc/mm: use core_kernel_text() helper"). edata and end have not been used since the merge of arch/ppc/ and arch/ppc64/. Remove the three, and remove the macro PROVIDE32. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/d1686d36cdd6b9d681e7ee4dd677c386d43babb1.1736332415.git.christophe.leroy@csgroup.eu
2025-02-24  powerpc/44x: Declare primary_uic static in uic.c (Christophe Leroy)
primary_uic is not used outside uic.c, declare it static. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202411101746.lD8YdVzY-lkp@intel.com/ Fixes: e58923ed1437 ("[POWERPC] Add arch/powerpc driver for UIC, PPC4xx interrupt controller") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/0e11233d30333610ab460b3a1fd0f43c3a51e34d.1736331884.git.christophe.leroy@csgroup.eu
2025-02-11  powerpc/pseries/iommu: memory notifier incorrectly adds TCEs for pmemory (Gaurav Batra)
iommu_mem_notifier() is invoked when RAM is dynamically added/removed. This notifier call is responsible for adding/removing TCEs from the Dynamic DMA Window (DDW) when TCEs are pre-mapped. TCEs are pre-mapped only for RAM and not for persistent memory (pmemory). For DMA buffers in pmemory, TCEs are dynamically mapped when the device driver requests it. The issue is that the 'daxctl' command is capable of adding pmemory as "System RAM" after LPAR boot. The command to do so is:

daxctl reconfigure-device --mode=system-ram dax0.0 --force

This will dynamically add the pmemory range to LPAR RAM, eventually invoking iommu_mem_notifier(). The address range of pmemory is way beyond the maximum RAM that the LPAR can have, which means this range is beyond the DDW created for the device at device initialization time. As a result, when TCEs are pre-mapped for the pmemory range by iommu_mem_notifier(), the PHYP HCALL returns H_PARAMETER. This caused the daxctl command to fail to add pmemory as RAM. The solution is to not pre-map TCEs for pmemory. Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com> Tested-by: Donet Tom <donettom@linux.ibm.com> Reviewed-by: Donet Tom <donettom@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250130183854.92258-1-gbatra@linux.ibm.com
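The essence of the fix, as a hedged sketch (the boundary variable and overall structure are placeholders; the real code derives the limit from the LPAR's maximum possible RAM, which is also what the DDW was sized from):

    static phys_addr_t max_ram_phys_limit;  /* placeholder: LPAR max-RAM boundary */

    static int iommu_mem_notifier(struct notifier_block *nb, unsigned long action,
                                  void *data)
    {
            struct memory_notify *arg = data;

            if (action == MEM_GOING_ONLINE) {
                    /*
                     * Ranges above the max possible RAM address are persistent
                     * memory; their TCEs are mapped on demand by the driver,
                     * so skip pre-mapping them here.
                     */
                    if (PFN_PHYS(arg->start_pfn) >= max_ram_phys_limit)
                            return NOTIFY_OK;

                    /* ... pre-map TCEs for the newly added RAM range ... */
            }
            return NOTIFY_OK;
    }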
2025-02-11  powerpc/pseries/iommu: create DDW for devices with DMA mask less than 64-bits (Gaurav Batra)
Starting with PAPR level 2.13, the platform supports placing the PHB in limited address mode. Devices that support DMA masks smaller than 64 bits but greater than 32 bits are placed in limited address mode. In this mode, the starting DMA address returned by the DDW is 4GB. When the device driver calls dma_supported with a mask smaller than 64 bits, the PowerPC IOMMU driver places the PHB in limited addressing mode before creating the DDW. Signed-off-by: Gaurav Batra <gbatra@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250108164814.73250-1-gbatra@linux.ibm.com
2025-02-11  powerpc/pseries: Export hardware trace macro dump via debugfs (Abhishek Dubey)
This patch adds a debugfs interface to export Hardware Trace Macro (HTM) function data in an LPAR. A new hypervisor call "H_HTM" has been defined to setup, configure, control and dump the HTM data. This patch supports only dumping of HTM data in an LPAR. A new debugfs folder called "htmdump" has been added under the /sys/kernel/debug/arch path, which contains the files needed to pass the required parameters to the H_HTM dump function. A new Kconfig option called "CONFIG_HTMDUMP" is added in platform/pseries for the same. With this module loaded, the list of files in the debugfs path /sys/kernel/debug/powerpc/htmdump is:

coreindexonchip htmtype nodalchipindex nodeindex trace

Signed-off-by: Abhishek Dubey <adubey@linux.ibm.com> Co-developed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250113164039.302017-2-adubey@linux.ibm.com
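For orientation, wiring up such a debugfs interface typically looks like the sketch below (the u32 parameter types and variable names are assumptions, not the actual driver; arch_debugfs_dir is powerpc's /sys/kernel/debug/powerpc directory):

    #include <linux/debugfs.h>

    static u32 nodeindex, nodalchipindex, coreindexonchip, htmtype;
    static struct dentry *htmdump_dir;

    static int __init htmdump_init(void)
    {
            htmdump_dir = debugfs_create_dir("htmdump", arch_debugfs_dir);
            debugfs_create_u32("nodeindex", 0600, htmdump_dir, &nodeindex);
            debugfs_create_u32("nodalchipindex", 0600, htmdump_dir, &nodalchipindex);
            debugfs_create_u32("coreindexonchip", 0600, htmdump_dir, &coreindexonchip);
            debugfs_create_u32("htmtype", 0600, htmdump_dir, &htmtype);
            /*
             * "trace" would be a read-only file whose read handler issues the
             * H_HTM dump hcall using the parameters above.
             */
            return 0;
    }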
2025-02-11  powerpc/pseries: Macros and wrapper functions for H_HTM call (Abhishek Dubey)
Define macros and wrapper functions to handle the H_HTM (Hardware Trace Macro) hypervisor call. H_HTM is a new HCALL added to export data from the Hardware Trace Macro (HTM) function. Signed-off-by: Abhishek Dubey <adubey@linux.ibm.com> Co-developed-by: Madhavan Srinivasan <maddy@linux.ibm.com> Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250113164039.302017-1-adubey@linux.ibm.com
2025-02-11  arch/powerpc/perf: Update get_mem_data_src function to use saved values of sier and mmcra regs (Athira Rajeev)
During performance monitor interrupt handling, the regs are set up using the perf_read_regs() function, where some of the pt_regs fields are overloaded. The Sampled Instruction Event Register (SIER) is loaded into pt_regs, overloading regs->dar, and regs->dsisr is used to store MMCRA (Monitor Mode Control Register A), so that these only need to be read once on each interrupt. Update the isa207_get_mem_data_src function to use regs->dar instead of reading from SPRN_SIER again. Also use regs->dsisr to read the MMCRA value. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250121131621.39054-2-atrajeev@linux.vnet.ibm.com
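A minimal sketch of the idea (not the exact diff; the decode body is elided):

    /* PMI entry path: read the SPRs once and cache them in pt_regs. */
    static void perf_read_regs(struct pt_regs *regs)
    {
            regs->dar   = mfspr(SPRN_SIER);    /* SIER cached in regs->dar */
            regs->dsisr = mfspr(SPRN_MMCRA);   /* MMCRA cached in regs->dsisr */
    }

    /* Later, when filling in the sample's memory data source: */
    void isa207_get_mem_data_src(union perf_mem_data_src *dsrc, u32 flags,
                                 struct pt_regs *regs)
    {
            u64 sier  = regs->dar;             /* instead of mfspr(SPRN_SIER) */
            u64 mmcra = regs->dsisr;           /* instead of mfspr(SPRN_MMCRA) */

            /* ... decode sier/mmcra into dsrc ... */
    }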
2025-02-11  arch/powerpc/perf: Check the instruction type before creating sample with perf_mem_data_src (Athira Rajeev)
perf mem report sometimes aborts as below (in some corner cases) on powerpc:

# ./perf mem report 1>out
*** stack smashing detected ***: terminated
Aborted (core dumped)

The backtrace is as below:

__pthread_kill_implementation ()
raise ()
abort ()
__libc_message
__fortify_fail
__stack_chk_fail
hist_entry.lvl_snprintf
__sort__hpp_entry
__hist_entry__snprintf
hists.fprintf
cmd_report
cmd_mem

Snippet of the code which triggers the issue, from tools/perf/util/sort.c:

static int hist_entry__lvl_snprintf(struct hist_entry *he, char *bf,
                                    size_t size, unsigned int width)
{
        char out[64];

        perf_mem__lvl_scnprintf(out, sizeof(out), he->mem_info);
        return repsep_snprintf(bf, size, "%-*s", width, out);
}

The value of "out" is filled from the perf_mem_data_src value. Debugging this further showed that for some corner cases, the value of "data_src" was pointing to a wrong value. This resulted in a bigger string size, causing the stack check to fail. The perf mem data source values are captured in the sample via the isa207_get_mem_data_src function. The initial check is to fetch the type of sampled instruction. If the type of instruction is not valid (not a load/store instruction), the function returns. Since commit e16fd7f2cb1a ("perf: Use sample_flags for data_src"), the data_src field is not initialized by the perf_sample_data_init() function. If the PMU driver doesn't set the data_src value to zero when the type is not valid, this results in an uninitialised value for data_src. The uninitialised value of data_src resulted in the stack check failure followed by the abort for "perf mem report". When requesting data source information in the sample, the instruction type is expected to be a load or store instruction. In ISA v3.0, due to a hardware limitation, there are corner cases where an instruction type other than load or store is observed. In ISA v3.0 and before, values "0" and "7" are considered reserved. In ISA v3.1, value "7" has been used to indicate "larx/stcx". Drop the sample if the instruction type has reserved values for this field, with an ISA version check. Initialize data_src to zero in isa207_get_mem_data_src if the instruction type is not load/store. Reported-by: Disha Goel <disgoel@linux.vnet.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250121131621.39054-1-atrajeev@linux.vnet.ibm.com
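The guard described above boils down to something like this sketch (the helper is made up and the numeric values follow the commit message, not the header files; the caller zeroes data_src and returns when the helper says no):

    /*
     * SIER instruction-type check: only loads/stores may generate a
     * perf_mem_data_src.  Types 0 and 7 are reserved before ISA v3.1;
     * on v3.1, type 7 means larx/stcx.
     */
    static bool sier_type_is_load_store(unsigned int type, bool isa_v3_1)
    {
            if (type == 0)
                    return false;               /* reserved */
            if (type == 7)
                    return isa_v3_1;            /* larx/stcx on v3.1, else reserved */
            return true;
    }

    /* in isa207_get_mem_data_src(): */
    if (!sier_type_is_load_store(type, cpu_has_feature(CPU_FTR_ARCH_31))) {
            dsrc->val = 0;                      /* keep data_src initialised */
            return;
    }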
2025-02-11  powerpc: increase MIN RMA size for CAS negotiation (Avnish Chouhan)
Change the RMA size from 512 MB to 768 MB, which will result in more RMA at boot time for PowerPC. When a PowerPC LPAR uses vTPM, Secure Boot or FADump, the 512 MB RMA memory is not sufficient for booting. With this 512 MB RMA, GRUB2 runs out of memory and is unable to load what it needs. Sometimes even usage of a CDROM, which requires more memory for installation, along with the options mentioned above, troubles the boot memory and results in boot failures. Increasing the RMA size resolves multiple out of memory issues observed on PowerPC.

Failure details:

1. GRUB2

kern/ieee1275/init.c:550: mm requested region of size 8513000, flags 1
kern/ieee1275/init.c:563: Cannot satisfy allocation and retain minimum runtime space
kern/ieee1275/init.c:550: mm requested region of size 8513000, flags 0
kern/ieee1275/init.c:563: Cannot satisfy allocation and retain minimum runtime space
kern/file.c:215: Closing `/ppc/ppc64/initrd.img' ...
kern/disk.c:297: Closing `ieee1275//vdevice/v-scsi @30000067/disk@8300000000000000'...
kern/disk.c:311: Closing `ieee1275//vdevice/v-scsi @30000067/disk@8300000000000000' succeeded.
kern/file.c:225: Closing `/ppc/ppc64/initrd.img' failed with 3.
kern/file.c:148: Opening `/ppc/ppc64/initrd.img' succeeded.
error: ../../grub-core/kern/mm.c:552:out of memory.

2. Kernel

[ 0.777633] List of all partitions:
[ 0.777639] No filesystem could mount root, tried:
[ 0.777640]
[ 0.777649] Kernel panic - not syncing: VFS: Unable to mount root fs on "" or unknown-block(0,0)
[ 0.777658] CPU: 17 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-0.rc4.20.el10.ppc64le #1
[ 0.777669] Hardware name: IBM,9009-22A POWER9 (architected) 0x4e0202 0xf000005 of:IBM,FW950.B0 (VL950_149) hv:phyp pSeries
[ 0.777678] Call Trace:
[ 0.777682] [c000000003db7b60] [c000000001119714] dump_stack_lvl+0x88/0xc4 (unreliable)
[ 0.777700] [c000000003db7b90] [c00000000016c274] panic+0x174/0x460
[ 0.777711] [c000000003db7c30] [c00000000200631c] mount_root_generic+0x320/0x354
[ 0.777724] [c000000003db7d00] [c0000000020066f8] prepare_namespace+0x27c/0x2f4
[ 0.777735] [c000000003db7d90] [c000000002005824] kernel_init_freeable+0x254/0x294
[ 0.777747] [c000000003db7df0] [c00000000001131c] kernel_init+0x30/0x1c4
[ 0.777757] [c000000003db7e50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
[ 0.777768] --- interrupt: 0 at 0x0
[ 0.784238] pstore: backend (nvram) writing error (-1)
[ 0.790447] Rebooting in 10 seconds..

Signed-off-by: Avnish Chouhan <avnish@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250123114254.200527-4-sourabhjain@linux.ibm.com
2025-02-11  powerpc/fadump: fix additional param memory reservation for HASH MMU (Sourabh Jain)
Commit 683eab94da75bc ("powerpc/fadump: setup additional parameters for dump capture kernel") introduced the additional parameter feature in fadump for HASH MMU with the understanding that GRUB does not use the memory area between 640MB and 768MB for its operation. However, the third patch in this series ("powerpc: increase MIN RMA size for CAS negotiation") changes the MIN RMA size to 768MB, allowing GRUB to use memory up to 768MB. This makes the fadump reservation for the additional parameter feature for HASH MMU unreliable. To address this, adjust the memory range for the additional parameter in fadump for HASH MMU. This will ensure that GRUB does not overwrite the memory reserved for fadump's additional parameter in HASH MMU. The new policy for the memory range for the additional parameter in HASH MMU is that the first memory block must be larger than the MIN_RMA size, as the bootloader can use memory up to the MIN_RMA size. The range should be between MIN_RMA and the RMA size (ppc64_rma_size), and it must not overlap with the fadump reserved area. Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Reviewed-by: Hari Bathini <hbathini@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250123114254.200527-3-sourabhjain@linux.ibm.com
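In pseudo-form, the placement policy described above reads roughly as follows (a sketch; the variable names are placeholders for values the fadump code already tracks):

    /* Can [base, base + size) hold the additional-parameter area (HASH MMU)? */
    static bool addl_param_area_ok(u64 base, u64 size, u64 first_memblock_size,
                                   u64 rma_size, u64 fadump_base, u64 fadump_size)
    {
            u64 min_rma = 768ULL << 20;                     /* MIN RMA, now 768 MB */

            if (first_memblock_size <= min_rma)             /* bootloader may use up to MIN RMA */
                    return false;
            if (base < min_rma || base + size > rma_size)   /* must lie in [MIN_RMA, RMA size) */
                    return false;
            if (base < fadump_base + fadump_size &&         /* no overlap with fadump area */
                fadump_base < base + size)
                    return false;
            return true;
    }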
2025-02-11  powerpc: export MIN RMA size (Sourabh Jain)
Commit 683eab94da75bc ("powerpc/fadump: setup additional parameters for dump capture kernel") introduced the additional parameter feature in fadump for HASH MMU with the understanding that GRUB does not use the memory area between 640MB and 768MB for its operation. However, the third patch ("powerpc: increase MIN RMA size for CAS negotiation") in this series is changing the MIN RMA size to 768MB, allowing GRUB to use memory up to 768MB. This makes the fadump reservation for the additional parameter feature for HASH MMU unreliable. To address this, export the MIN_RMA so that the next patch ("powerpc/fadump: fix additional param memory reservation for HASH MMU") can identify the correct memory range for the additional parameter feature in fadump for HASH MMU. Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20250123114254.200527-2-sourabhjain@linux.ibm.com
2025-02-09  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini:
"ARM:
 - Correctly clean the BSS to the PoC before allowing EL2 to access it on nVHE/hVHE/protected configurations
 - Propagate ownership of debug registers in protected mode after the rework that landed in 6.14-rc1
 - Stop pretending that we can run the protected mode without a GICv3 being present on the host
 - Fix a use-after-free situation that can occur if a vcpu fails to initialise the NV shadow S2 MMU contexts
 - Always evaluate the need to arm a background timer for fully emulated guest timers
 - Fix the emulation of EL1 timers in the absence of FEAT_ECV
 - Correctly handle the EL2 virtual timer, specially when HCR_EL2.E2H==0

s390:
 - move some of the guest page table (gmap) logic into KVM itself, inching towards the final goal of completely removing gmap from the non-kvm memory management code. As an initial set of cleanups, move some code from mm/gmap into kvm and start using __kvm_faultin_pfn() to fault-in pages as needed; but especially stop abusing page->index and page->lru to aid in the pgdesc conversion.

x86:
 - Add missing check in the fix to defer starting the huge page recovery vhost_task
 - SRSO_USER_KERNEL_NO does not need SYNTHESIZED_F"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (31 commits)
  KVM: x86/mmu: Ensure NX huge page recovery thread is alive before waking
  KVM: remove kvm_arch_post_init_vm
  KVM: selftests: Fix spelling mistake "initally" -> "initially"
  kvm: x86: SRSO_USER_KERNEL_NO is not synthesized
  KVM: arm64: timer: Don't adjust the EL2 virtual timer offset
  KVM: arm64: timer: Correctly handle EL1 timer emulation when !FEAT_ECV
  KVM: arm64: timer: Always evaluate the need for a soft timer
  KVM: arm64: Fix nested S2 MMU structures reallocation
  KVM: arm64: Fail protected mode init if no vgic hardware is present
  KVM: arm64: Flush/sync debug state in protected mode
  KVM: s390: selftests: Streamline uc_skey test to issue iske after sske
  KVM: s390: remove the last user of page->index
  KVM: s390: move PGSTE softbits
  KVM: s390: remove useless page->index usage
  KVM: s390: move gmap_shadow_pgt_lookup() into kvm
  KVM: s390: stop using lists to keep track of used dat tables
  KVM: s390: stop using page->index for non-shadow gmaps
  KVM: s390: move some gmap shadowing functions away from mm/gmap.c
  KVM: s390: get rid of gmap_translate()
  KVM: s390: get rid of gmap_fault()
  ...
2025-02-08  Merge tag 'execve-v6.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux (Linus Torvalds)
Pull execve fix from Kees Cook:
"This is an alpha-specific fix, but since it touched ELF I was asked to carry it.
 - alpha/elf: Fix misc/setarch test of util-linux by removing 32bit support (Eric W. Biederman)"

* tag 'execve-v6.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  alpha/elf: Fix misc/setarch test of util-linux by removing 32bit support
2025-02-08  Merge tag 'x86-urgent-2025-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds)
Pull x86 fix from Ingo Molnar:
"Fix a build regression on GCC 15 builds, caused by GCC changing the default C version that is overridden in the main Makefile but not in the x86 boot code Makefile"

* tag 'x86-urgent-2025-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot: Use '-std=gnu11' to fix build with GCC 15
2025-02-07  genirq: Remove leading space from irq_chip::irq_print_chip() callbacks (Geert Uytterhoeven)
The space separator was factored out from the multiple chip name prints, but several irq_chip::irq_print_chip() callbacks still print a leading space. Remove the superfluous double spaces. Fixes: 9d9f204bdf7243bf ("genirq/proc: Add missing space separator back") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/893f7e9646d8933cd6786d5a1ef3eb076d263768.1738764803.git.geert+renesas@glider.be
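The per-driver change is mechanical; schematically (the callback below is a made-up example, not one of the drivers touched by the patch):

    static void example_irq_print_chip(struct irq_data *d, struct seq_file *p)
    {
            /* before: " EXAMPLE-%lu" doubled up with the separator printed by the core */
            seq_printf(p, "EXAMPLE-%lu", irqd_to_hwirq(d));
    }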
2025-02-06  Merge tag 'for-linus-6.14-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip (Linus Torvalds)
Pull xen fixes from Juergen Gross:
"Three fixes for xen_hypercall_hvm() that was introduced in the 6.13 cycle"

* tag 'for-linus-6.14-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  x86/xen: remove unneeded dummy push from xen_hypercall_hvm()
  x86/xen: add FRAME_END to xen_hypercall_hvm()
  x86/xen: fix xen_hypercall_hvm() to not clobber %rbx
2025-02-06  alpha/elf: Fix misc/setarch test of util-linux by removing 32bit support (Eric W. Biederman)
Richard Henderson <richard.henderson@linaro.org> writes [1]:

> There was a Spec benchmark (I forget which) which was memory bound and ran
> twice as fast with 32-bit pointers.
>
> I copied the idea from DEC to the ELF abi, but never did all the other work
> to allow the toolchain to take advantage.
>
> Amusingly, a later Spec changed the benchmark data sets to not fit into a
> 32-bit address space, specifically because of this.
>
> I expect one could delete the ELF bit and personality and no one would
> notice. Not even the 10 remaining Alpha users.

In [2] it was pointed out that parts of setarch weren't working properly on alpha because it has its own SET_PERSONALITY implementation. In the discussion that followed, Richard Henderson pointed out that the 32bit pointer support for alpha was never completed. Fix this by removing alpha's 32bit pointer support. As a bit of paranoia, refuse to execute any alpha binaries that have the EF_ALPHA_32BIT flag set, just in case someone somewhere has binaries that try to use alpha's 32bit pointer support.

Link: https://lkml.kernel.org/r/CAFXwXrkgu=4Qn-v1PjnOR4SG0oUb9LSa0g6QXpBq4ttm52pJOQ@mail.gmail.com [1]
Link: https://lkml.kernel.org/r/20250103140148.370368-1-glaubitz@physik.fu-berlin.de [2]
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Link: https://lore.kernel.org/r/87y0zfs26i.fsf_-_@email.froward.int.ebiederm.org Signed-off-by: Kees Cook <kees@kernel.org>
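The refusal to run old 32-bit-pointer binaries amounts to an ELF header flag check, roughly as in this sketch of the idea (not the exact arch/alpha/include/asm/elf.h change):

    /* Reject Alpha binaries built for the never-completed 32-bit pointer ABI. */
    static inline int alpha_elf_check_arch(const struct elf64_hdr *x)
    {
            return x->e_machine == EM_ALPHA && !(x->e_flags & EF_ALPHA_32BIT);
    }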
2025-02-05  x86/xen: remove unneeded dummy push from xen_hypercall_hvm() (Juergen Gross)
Stack alignment of the kernel in 64-bit mode is 8, not 16, so the dummy push in xen_hypercall_hvm() for aligning the stack to 16 bytes can be removed. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2025-02-05  x86/xen: add FRAME_END to xen_hypercall_hvm() (Juergen Gross)
xen_hypercall_hvm() is missing a FRAME_END at the end, add it. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202502030848.HTNTTuo9-lkp@intel.com/ Fixes: b4845bb63838 ("x86/xen: add central hypercall functions") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2025-02-05  x86/xen: fix xen_hypercall_hvm() to not clobber %rbx (Juergen Gross)
xen_hypercall_hvm(), which is used at most once during early boot when running as a Xen PVH guest, clobbers %rbx. Depending on whether the caller relies on %rbx being preserved across the call or not, this clobbering might result in an early crash of the system. This can be avoided by using an already saved register instead of %rbx. Fixes: b4845bb63838 ("x86/xen: add central hypercall functions") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2025-02-04  KVM: x86/mmu: Ensure NX huge page recovery thread is alive before waking (Sean Christopherson)
When waking a VM's NX huge page recovery thread, ensure the thread is actually alive before trying to wake it. Now that the thread is spawned on-demand during KVM_RUN, a VM without a recovery thread is reachable via the related module params.

BUG: kernel NULL pointer dereference, address: 0000000000000040
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:vhost_task_wake+0x5/0x10
Call Trace:
<TASK>
set_nx_huge_pages+0xcc/0x1e0 [kvm]
param_attr_store+0x8a/0xd0
module_attr_store+0x1a/0x30
kernfs_fop_write_iter+0x12f/0x1e0
vfs_write+0x233/0x3e0
ksys_write+0x60/0xd0
do_syscall_64+0x5b/0x160
entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x7f3b52710104
</TASK>
Modules linked in: kvm_intel kvm
CR2: 0000000000000040

Fixes: 931656b9e2ff ("kvm: defer huge page recovery vhost task to later") Cc: stable@vger.kernel.org Cc: Keith Busch <kbusch@kernel.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20250124234623.3609069-1-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
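The fix is essentially a liveness check before the wake-up, along these lines (a sketch; the field name follows the call trace above, and the real patch also handles the race with on-demand thread creation):

    /* Only kick the recovery vhost task if it was actually spawned. */
    static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
    {
            struct vhost_task *nx_thread = READ_ONCE(kvm->arch.nx_huge_page_recovery_thread);

            if (nx_thread)
                    vhost_task_wake(nx_thread);
    }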
2025-02-04  KVM: remove kvm_arch_post_init_vm (Paolo Bonzini)
The only statement in a kvm_arch_post_init_vm implementation can be moved into the x86 kvm_arch_init_vm. Do so and remove all traces from architecture-independent code. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-04  Merge tag 'kvmarm-fixes-6.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD (Paolo Bonzini)
KVM/arm64 fixes for 6.14, take #1

 - Correctly clean the BSS to the PoC before allowing EL2 to access it on nVHE/hVHE/protected configurations
 - Propagate ownership of debug registers in protected mode after the rework that landed in 6.14-rc1
 - Stop pretending that we can run the protected mode without a GICv3 being present on the host
 - Fix a use-after-free situation that can occur if a vcpu fails to initialise the NV shadow S2 MMU contexts
 - Always evaluate the need to arm a background timer for fully emulated guest timers
 - Fix the emulation of EL1 timers in the absence of FEAT_ECV
 - Correctly handle the EL2 virtual timer, specially when HCR_EL2.E2H==0
2025-02-04  Merge tag 'kvm-s390-next-6.14-2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD (Paolo Bonzini)
 - some selftest fixes
 - move some kvm-related functions from mm into kvm
 - remove all usage of page->index and page->lru from kvm
 - fixes and cleanups for vsie
2025-02-04  kvm: x86: SRSO_USER_KERNEL_NO is not synthesized (Paolo Bonzini)
SYNTHESIZED_F() generally is used together with setup_force_cpu_cap(), i.e. when it makes sense to present the feature even if cpuid does not have it *and* the VM is not able to see the difference. For example, it can be used when mitigations on the host automatically protect the guest as well. The "SYNTHESIZED_F(SRSO_USER_KERNEL_NO)" line came in as a conflict resolution between the CPUID overhaul from the KVM tree and support for the feature in the x86 tree. Using it right now does not hurt, or make a difference for that matter, because there is no setup_force_cpu_cap(X86_FEATURE_SRSO_USER_KERNEL_NO). However, it is a little less future proof in case such a setup_force_cpu_cap() appears later, for a case where the kernel somehow is not vulnerable but the guest would have to apply the mitigation. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-04  KVM: arm64: timer: Don't adjust the EL2 virtual timer offset (Marc Zyngier)
The way we deal with the EL2 virtual timer is a bit odd. We try to cope with E2H being flipped, and adjust which offset applies to that timer depending on the current E2H value. But that's a complexity we shouldn't have to worry about. What we have to deal with is either E2H being RES1, in which case there is no offset, or E2H being RES0, and the virtual timer simply does not exist. Drop the adjusting of the timer offset, which makes things a bit simpler. At the same time, make sure that accessing the HV timer when E2H is RES0 results in an UNDEF in the guest. Suggested-by: Oliver Upton <oliver.upton@linux.dev> Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20250204110050.150560-4-maz@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>