2023-02-10platform/x86/amd: pmc: Add line break for readabilityShyam Sundar S K
Add a line break for code readability. Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20230206150855.1938810-5-Shyam-sundar.S-k@amd.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-02-10platform/x86/amd: pmc: differentiate STB/SMU messaging printsShyam Sundar S K
Modify the dynamic debug print to differentiate between the regular and spill to DRAM usage of the SMU message port. Suggested-by: Sanket Goswami <Sanket.Goswami@amd.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20230206150855.1938810-4-Shyam-sundar.S-k@amd.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-02-10platform/x86/amd: pmc: Write dummy postcode into the STB DRAMShyam Sundar S K
Based on a recommendation from the PMFW team: in order to get the recent telemetry data present in the STB DRAM, the driver is required to send one dummy write to the STB buffer, which internally triggers the PMFW to emit the latest telemetry data into the STB DRAM region. Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20230206150855.1938810-3-Shyam-sundar.S-k@amd.com Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-02-10platform/x86/amd: pmc: Add num_samples message id support to STBShyam Sundar S K
Recent PMFWs have support for the S2D_NUM_SAMPLES message ID, which reports the current number of samples present within the STB DRAM. The returned num_samples lets the driver know where to start reading, i.e. the last push location. This way, the driver would emit the topmost region of the STB DRAM. Co-developed-by: Sanket Goswami <Sanket.Goswami@amd.com> Signed-off-by: Sanket Goswami <Sanket.Goswami@amd.com> Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20230206150855.1938810-2-Shyam-sundar.S-k@amd.com Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-02-10platform/x86: Add include/linux/platform_data/x86 to MAINTAINERSAndy Shevchenko
Most of the files there are being used under PDx86 subsystem or tightly related drivers (like drivers/clk/x86/). I think it makes sense to assure that PDx86 keeps an eye on the changes there. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20230206150202.27892-1-andriy.shevchenko@linux.intel.com Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-02-10platform/x86: Fix header inclusion in linux/platform_data/x86/soc.hAndy Shevchenko
First of all, we don't use intel-family.h directly. On the other hand, we actively use the boolean type, which is defined in types.h (we take the top-level header for that), and x86_cpu_id, which is provided by mod_devicetable.h. Secondly, we don't need to spread the SOC_INTEL_IS_CPU() macro to the users. Hence, undefine it when it's appropriate. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20230206145238.19460-1-andriy.shevchenko@linux.intel.com Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
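For illustration, a hedged sketch of the reshuffled header described above; the comments and the exact placement of the #undef are assumptions, not copied from the patch:

    /* sketch of include/linux/platform_data/x86/soc.h after the change (illustrative) */
    #include <linux/types.h>            /* bool, used directly by this header */
    #include <linux/mod_devicetable.h>  /* struct x86_cpu_id */
    /* intel-family.h is no longer included here */

    /* SOC_INTEL_IS_CPU() is only used internally to define soc_intel_is_*() helpers */
    #undef SOC_INTEL_IS_CPU             /* keep the helper macro out of users' namespace */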
2023-02-10platform/x86: int3472/discrete: add LEDS_CLASS dependencyArnd Bergmann
int3472 now fails to link when the LED support is disabled:

x86_64-linux-ld: drivers/platform/x86/intel/int3472/led.o: in function `skl_int3472_register_pled':
led.c:(.text+0xf4): undefined reference to `led_classdev_register_ext'
x86_64-linux-ld: led.c:(.text+0x131): undefined reference to `led_add_lookup'
x86_64-linux-ld: drivers/platform/x86/intel/int3472/led.o: in function `skl_int3472_unregister_pled':
led.c:(.text+0x16b): undefined reference to `led_remove_lookup'
x86_64-linux-ld: led.c:(.text+0x177): undefined reference to `led_classdev_unregister'

Add an explicit Kconfig dependency. Fixes: 5ae20a8050d0 ("platform/x86: int3472/discrete: Create a LED class device for the privacy LED") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Sakari Ailus <sakari.ailus@linux.intel.com> Link: https://lore.kernel.org/r/20230208163658.2129009-1-arnd@kernel.org Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2023-02-10ASoC: rt712-sdca: fix coding style and unconditionally return issuesShuming Fan
This patch fixes: 1. coding style issues; 2. checking in the rt712_sdca_mux_put callback whether the setting was already applied. Signed-off-by: Shuming Fan <shumingf@realtek.com> Link: https://lore.kernel.org/r/20230210082141.24077-1-shumingf@realtek.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-02-10ASoC: rsnd: core.c: indicate warning if strange TDM width was setKuninori Morimoto
Currently, rsnd silently uses the default TDM width if a strange width was set, which makes the problem difficult to notice. This patch emits a warning in that case. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Link: https://lore.kernel.org/r/87lel6ksqn.wl-kuninori.morimoto.gx@renesas.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-02-10efi: Add mixed-mode thunk recipe for GetMemoryAttributesArd Biesheuvel
EFI mixed mode on x86 requires a recipe for each protocol method or firmware service that takes u64 arguments by value, or returns pointer or 'native int' (UINTN) values by reference (e.g., through a void ** or unsigned long * parameter), due to the fact that these types cannot be translated 1:1 between the i386 and MS x64 calling conventions. So add the missing recipe for GetMemoryAttributes, which is not actually being used yet on x86, but the code exists and can be built for x86, so let's make sure it works as it should. Cc: Evgeniy Baskov <baskov@ispras.ru> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2023-02-10ipmi: ipmb: Fix the MODULE_PARM_DESC associated to 'retry_time_ms'Christophe JAILLET
This should be 'retry_time_ms' instead of 'max_retries'. Fixes: 63c4eb347164 ("ipmi:ipmb: Add initial support for IPMI over IPMB") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Message-Id: <0d8670cff2c656e99a832a249e77dc90578f67de.1675591429.git.christophe.jaillet@wanadoo.fr> Cc: stable@vger.kernel.org Signed-off-by: Corey Minyard <cminyard@mvista.com>
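A minimal sketch of the corrected pairing, assuming a typical module-parameter setup (the default value and description wording are illustrative):

    #include <linux/module.h>

    static unsigned int retry_time_ms = 250;    /* default shown here is an assumption */
    module_param(retry_time_ms, uint, 0);
    /* Before the fix, this description was mistakenly attached to max_retries. */
    MODULE_PARM_DESC(retry_time_ms, "Time (in ms) between retries of a request");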
2023-02-10ipmi:ssif: Add a timer between request retriesCorey Minyard
The IPMI spec has a time (T6) specified between request retries. Add the handling for that. Reported-by: Tony Camuso <tcamuso@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Corey Minyard <cminyard@mvista.com>
2023-02-10ipmi:ssif: Remove rtc_us_timerCorey Minyard
The rtc_us_timer field was cruft left over from the older run-to-completion handling. Cc: stable@vger.kernel.org Signed-off-by: Corey Minyard <cminyard@mvista.com>
2023-02-10ipmi_ssif: Rename idle state and checkCorey Minyard
Rename the SSIF_IDLE() to IS_SSIF_IDLE(), since that is more clear, and rename SSIF_NORMAL to SSIF_IDLE, since that's more accurate. Cc: stable@vger.kernel.org Signed-off-by: Corey Minyard <cminyard@mvista.com>
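A hedged sketch of what the rename amounts to; the enum members other than the renamed one are illustrative:

    enum ssif_intf_state {
            SSIF_IDLE,              /* formerly SSIF_NORMAL */
            SSIF_GETTING_FLAGS,
            SSIF_GETTING_EVENTS,
            /* ... */
    };

    /* formerly SSIF_IDLE(); the new name makes the predicate nature explicit */
    #define IS_SSIF_IDLE(ssif)      ((ssif)->ssif_state == SSIF_IDLE)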
2023-02-10ipmi:ssif: resend_msg() cannot failCorey Minyard
The resend_msg() function cannot fail, but there was error handling around using it. Rework the handling of the error, and fix the out of retries debug reporting that was wrong around this, too. Cc: stable@vger.kernel.org Signed-off-by: Corey Minyard <cminyard@mvista.com>
2023-02-10drm/ast: Fix start address computationJocelyn Falempe
During the driver conversion to shmem, the start address for the scanout buffer was set to the base PCI address. In most cases this works because only the lower 24 bits are used, and due to alignment it was almost always 0. But on some unlucky hardware that's not the case, and some uninitialized memory is displayed on the BMC. With shmem, the primary plane is always at offset 0 in GPU memory.
* v2: rewrite the patch to set the offset to 0. (Thomas Zimmermann)
* v3: move the change to plane_init() and also fix the cursor plane. (Jammy Huang)
Tested on a sr645 affected by this bug. Fixes: f2fa5a99ca81 ("drm/ast: Convert ast to SHMEM") Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com> Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Jammy Huang <jammy_huang@aspeedtech.com> Link: https://patchwork.freedesktop.org/patch/msgid/20230209094417.21630-1-jfalempe@redhat.com
2023-02-10HID: logitech-hidpp: Add more debug statementsBastien Nocera
This should help us figure out some hairy problems with some devices. Signed-off-by: Bastien Nocera <hadess@hadess.net> Link: https://lore.kernel.org/r/20230206221256.129198-1-hadess@hadess.net Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2023-02-10HID: Add support for Logitech G923 Xbox Edition steering wheelWalt Holman
We get the same level of features as the regular G920. Signed-off-by: Walt Holman <waltholman09@gmail.com> Link: https://lore.kernel.org/r/20230207195051.16373-1-waltholman09@gmail.com Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2023-02-10HID: logitech-hidpp: Add Signature M650Bastien Nocera
Add support for HID++ over Bluetooth for the Logitech Signature M650 mouse. It comes with a dongle but can also be used over Bluetooth without one. Signed-off-by: Bastien Nocera <hadess@hadess.net> Link: https://lore.kernel.org/r/20220404100311.3304-1-hadess@hadess.net Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2023-02-10HID: logitech-hidpp: Remove HIDPP_QUIRK_NO_HIDINPUT quirkBastien Nocera
HIDPP_QUIRK_NO_HIDINPUT isn't used by any devices but still happens to work as HIDPP_QUIRK_DELAYED_INIT is defined to the same value. Remove HIDPP_QUIRK_NO_HIDINPUT and use HIDPP_QUIRK_DELAYED_INIT everywhere instead. Tested on a T650 which requires that quirk, and a number of unifying and Bluetooth devices that don't. Signed-off-by: Bastien Nocera <hadess@hadess.net> Link: https://lore.kernel.org/r/20230125121723.3122-2-hadess@hadess.net Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2023-02-10HID: logitech-hidpp: Don't restart communication if not necessaryBastien Nocera
Don't stop and restart communication with the device unless we need to modify the connect flags used because of a device quirk. Signed-off-by: Bastien Nocera <hadess@hadess.net> Link: https://lore.kernel.org/r/20230125121723.3122-1-hadess@hadess.net Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2023-02-10HID: logitech-hidpp: Add constants for HID++ 2.0 error codesBastien Nocera
Add constants for HID++ 2.0 error codes listed in "Protocol HID++2.0 essential features" chapter, page 3, in logitech_hidpp_2.0_specification_draft_2012-06-04.pdf Signed-off-by: Bastien Nocera <hadess@hadess.net> Link: https://lore.kernel.org/r/20221207100033.64095-1-hadess@hadess.net Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
2023-02-10Revert "HID: logitech-hidpp: add a module parameter to keep firmware gestures"Bastien Nocera
Now that we're in 2022, and the majority of desktop environments can and should support touchpad gestures through libinput, remove the legacy module parameter that made it possible to use gestures implemented in firmware. This will eventually allow simplifying the driver's initialisation code. This reverts commit 9188dbaed68a4b23dc96eba165265c08caa7dc2a. Signed-off-by: Bastien Nocera <hadess@hadess.net> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Link: https://lore.kernel.org/r/20221220154345.474596-1-hadess@hadess.net
2023-02-10HID: logitech-hidpp: Hard-code HID++ 1.0 fast scroll supportBastien Nocera
HID++ 1.0 devices only export whether Fast Scrolling is enabled, not whether they are capable of it. Reinstate the original quirks for the 3 supported mice so fast scrolling works again on those devices. Fixes: 908d325e1665 ("HID: logitech-hidpp: Detect hi-res scrolling support") Link: https://bugzilla.kernel.org/show_bug.cgi?id=216903 Signed-off-by: Bastien Nocera <hadess@hadess.net> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Link: https://lore.kernel.org/r/20230116130937.391441-1-hadess@hadess.net
2023-02-10arm64: reorder defconfigArnd Bergmann
Some Kconfig options have moved around after a 'make savedefconfig' run, so move them to their new location to make it easier to see what other options got removed. Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-02-10Documentation/hw-vuln: Add documentation for Cross-Thread Return PredictionsTom Lendacky
Add the admin guide for the Cross-Thread Return Predictions vulnerability. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <60f9c0b4396956ce70499ae180cb548720b25c7e.1675956146.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-02-10KVM: x86: Mitigate the cross-thread return address predictions bugTom Lendacky
By default, KVM/SVM will intercept attempts by the guest to transition out of C0. However, the KVM_CAP_X86_DISABLE_EXITS capability can be used by a VMM to change this behavior. To mitigate the cross-thread return address predictions bug (X86_BUG_SMT_RSB), a VMM must not be allowed to override the default behavior to intercept C0 transitions. Use a module parameter to control the mitigation on processors that are vulnerable to X86_BUG_SMT_RSB. If the processor is vulnerable to the X86_BUG_SMT_RSB bug and the module parameter is set to mitigate the bug, KVM will not allow the disabling of the HLT, MWAIT and CSTATE exits. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <4019348b5e07148eb4d593380a5f6713b93c9a16.1675956146.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
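A hedged sketch of the policy described above; the parameter and helper names are illustrative, only X86_BUG_SMT_RSB and the affected exits come from the commit message:

    static bool mitigate_smt_rsb;                 /* module parameter (illustrative name) */
    module_param(mitigate_smt_rsb, bool, 0444);

    static bool can_disable_power_exits(void)
    {
            /* On vulnerable parts with the mitigation enabled, keep intercepting
             * HLT/MWAIT/CSTATE so a sibling can't consume stale return predictions. */
            return !(boot_cpu_has_bug(X86_BUG_SMT_RSB) && mitigate_smt_rsb);
    }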
2023-02-10x86/speculation: Identify processors vulnerable to SMT RSB predictionsTom Lendacky
Certain AMD processors are vulnerable to a cross-thread return address predictions bug. When running in SMT mode and one of the sibling threads transitions out of C0 state, the other sibling thread could use return target predictions from the sibling thread that transitioned out of C0. The Spectre v2 mitigations cover the Linux kernel, as it fills the RSB when context switching to the idle thread. However, KVM allows a VMM to prevent exiting guest mode when transitioning out of C0. A guest could act maliciously in this situation, so create a new x86 BUG that can be used to detect if the processor is vulnerable. Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <91cec885656ca1fcd4f0185ce403a53dd9edecb7.1675956146.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
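A hedged sketch of how such a bug bit is typically forced during CPU identification; which families/models end up on the list is not reproduced here:

    /* in the x86 vulnerability identification path (illustrative placement) */
    if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
            setup_force_cpu_bug(X86_BUG_SMT_RSB);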
2023-02-10powerpc/kcsan: Prevent recursive instrumentation with IRQ save/restoresRohan McLure
Instrumented memory accesses provided by KCSAN will access core-local memories (which will save and restore IRQs) as well as restoring IRQs directly. Avoid recursive instrumentation by applying __no_kcsan annotation to IRQ restore routines. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> [mpe: Resolve merge conflict with IRQ replay recursion changes] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230206021801.105268-5-rmclure@linux.ibm.com
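A hedged sketch of the annotation pattern; the function body below is a placeholder, not the kernel's real restore logic:

    /* __no_kcsan keeps KCSAN from instrumenting the restore path itself */
    notrace __no_kcsan void arch_local_irq_restore(unsigned long mask)
    {
            irq_soft_mask_set(mask);        /* would otherwise be instrumented */
    }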
2023-02-10powerpc/kcsan: Memory barriers semanticsRohan McLure
Annotate memory barriers *mb() with calls to kcsan_mb(), signaling to compilers supporting KCSAN that the respective memory barrier has been issued. Rename memory barrier *mb() to __*mb() to opt in for asm-generic/barrier.h to generate the respective *mb() macro. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230206021801.105268-4-rmclure@linux.ibm.com
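A hedged illustration of the split described above, using powerpc's sync as the example barrier; the second line reflects how asm-generic/barrier.h composes kcsan_mb() with the arch-provided barrier:

    /* arch provides the raw barrier under the double-underscore name */
    #define __mb()   __asm__ __volatile__ ("sync" : : : "memory")

    /* asm-generic/barrier.h (simplified) then generates the public macro */
    #define mb()     do { kcsan_mb(); __mb(); } while (0)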
2023-02-10powerpc/kcsan: Exclude udelay to prevent recursive instrumentationRohan McLure
In order for KCSAN to increase its likelihood of observing a data race, it sets a watchpoint on memory accesses and stalls, allowing for detection of conflicting accesses by other kernel threads or interrupts. Stalls are implemented by injecting a call to udelay in instrumented code. To prevent recursive instrumentation, exclude udelay from being instrumented. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230206021801.105268-3-rmclure@linux.ibm.com
2023-02-10powerpc/kcsan: Add exclusions from instrumentationRohan McLure
Exclude various incompatible compilation units from KCSAN instrumentation. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230206021801.105268-2-rmclure@linux.ibm.com
2023-02-10powerpc: Skip stack validation checking alternate stacks if they are not allocatedNicholas Piggin
Stack validation in early boot can just bail out of checking alternate stacks if they are not validated yet. Checking against a NULL stack could cause NULLish pointer values to be considered valid. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20221216115930.2667772-5-npiggin@gmail.com
2023-02-10powerpc/64: Move paca allocation to early_setup()Nicholas Piggin
The early paca and boot cpuid dance is complicated and currently does not quite work as expected for boot cpuid != 0 cases. early_init_devtree() currently allocates the paca_ptrs and boot cpuid paca, but until that returns and early_setup() calls setup_paca(), this thread is still executing with smp_processor_id() == 0. One problem this causes is the paca_ptrs[smp_processor_id()] pointer is poisoned, so valid_emergency_stack() (any backtrace) and any similar users will crash. Another is that the hardware id which is set here will not be returned by get_hard_smp_processor_id(smp_processor_id()), but it would work correctly for boot_cpuid == 0, which could lead to difficult-to-reproduce or hard-to-find bugs. The hard id does not seem to be used by the rest of early_init_devtree(); it just looks like all this code might have been put here to allocate somewhere to store the boot CPU hardware id while scanning the devtree. Rearrange things so the hwid is put in a global variable like boot_cpuid, and do all the paca allocation and boot paca setup in the 64-bit early_setup() after we have everything ready to go. The paca_ptrs[0] re-poisoning code in early_setup does not seem to have ever worked, because paca_ptrs[0] was never not-poisoned when boot_cpuid is not 0. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Fix build error on 32-bit] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20221216115930.2667772-4-npiggin@gmail.com
2023-02-10powerpc/64: Fix task_cpu in early boot when booting non-zero cpuidNicholas Piggin
powerpc/64 can boot on a non-zero SMP processor id. Initially, the boot CPU is said to be "assumed to be 0" until early_init_devtree() discovers the id from the device tree. That is not a good description because the assumption can be wrong and that has to be handled, the better description is that 0 is used as a placeholder, and things are fixed after the real id is discovered. smp_processor_id() is set to the boot cpuid, but task_cpu(current) is not, which causes the smp_processor_id() == task_cpu(current) invariant to be broken until init_idle() in sched_init(). This is quite fragile and could lead to subtle bugs in future. One bug is that validate_sp_size uses task_cpu() to get the process stack, so any stack trace from the booting CPU between early_init_devtree() and sched_init() will have problems. Early on paca_ptrs[0] will be poisoned, so that can cause machine checks dereferencing that memory in real mode. Later, validating the current stack pointer against the idle task of a different secondary will probably cause no stack trace to be printed. Fix this by setting thread_info->cpu right after smp_processor_id() is set to the boot cpuid. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Fix SMP=n build as reported by sfr] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20221216115930.2667772-3-npiggin@gmail.com
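A hedged sketch of the fix; the CONFIG_SMP guard matches the SMP=n build note in the commit, while the exact placement in early_init_devtree() is an assumption:

    #ifdef CONFIG_SMP
            /* keep task_cpu(current) consistent with the just-discovered boot cpuid */
            task_thread_info(current)->cpu = boot_cpuid;
    #endif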
2023-02-10powerpc/64s: Fix stress_hpt memblock alloc alignmentNicholas Piggin
The stress_hpt memblock allocation did not pass in an alignment, which causes a stack dump in early boot (that I missed, oops). Fixes: 6b34a099faa1 ("powerpc/64s/hash: add stress_hpt kernel boot option to increase hash faults") Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20221216115930.2667772-2-npiggin@gmail.com
2023-02-10powerpc/64e: Simplify address calculation in secondary hold loopNicholas Piggin
As the earlier comment explains, __secondary_hold_spinloop does not have to be accessed at its virtual address, slightly simplifying code. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230203113858.1152093-4-npiggin@gmail.com
2023-02-10powerpc/64s: Refactor initialisation after promNicholas Piggin
Move some basic Book3S initialisation after prom to a function similar to what Book3E looks like. Book3E returns from this function at the virtual address mapping, and Book3S will do the same in a later change, so making them look similar helps with that. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230203113858.1152093-3-npiggin@gmail.com
2023-02-10crypto: powerpc - Use address generation helper for asmNicholas Piggin
Replace open-coded toc-relative address calculation with helper macros, commit dab3b8f4fd09 ("powerpc/64: asm use consistent global variable declaration and access") made similar conversions already but missed this one. This allows data addressing model to be changed more easily. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230203113858.1152093-2-npiggin@gmail.com
2023-02-10powerpc/powernv/ioda: Skip unallocated resources when mapping to PEFrederic Barrat
pnv_ioda_setup_pe_res() calls opal to map a resource with a PE. However, the code assumes the resource is allocated and it uses the resource address to find out the segment(s) which need to be mapped to the PE. In the unlikely case where the resource hasn't been allocated, the computation for the segment number is garbage, which can lead to invalid memory access and potentially a kernel crash, such as:

[ ] pci_bus 0002:02: Configuring PE for bus
[ ] pci 0002:02 : [PE# fc] Secondary bus 0x0000000000000002..0x0000000000000002 associated with PE#fc
[ ] BUG: Kernel NULL pointer dereference on write at 0x00000000
[ ] Faulting instruction address: 0xc00000000005eac4
[ ] Oops: Kernel access of bad area, sig: 7 [#1]
[ ] LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
[ ] Modules linked in:
[ ] CPU: 12 PID: 1 Comm: swapper/20 Not tainted 5.10.50-openpower1 #2
[ ] NIP: c00000000005eac4 LR: c00000000005ea44 CTR: 0000000030061b9c
[ ] REGS: c000200007383650 TRAP: 0300 Not tainted (5.10.50-openpower1)
[ ] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 44000224 XER: 20040000
[ ] CFAR: c00000000005eaa0 DAR: 0000000000000000 DSISR: 02080000 IRQMASK: 0
[ ] GPR00: c00000000005dd98 c0002000073838e0 c00000000185de00 c000200fff018960
[ ] GPR04: 00000000000000fc 0000000000000003 0000000000000000 0000000000000000
[ ] GPR08: 0000000000000000 0000000000000000 0000000000000000 9000000000001033
[ ] GPR12: 0000000031cb0000 c000000ffffe6a80 c000000000010a58 0000000000000000
[ ] GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ ] GPR20: 0000000000000000 0000000000000000 0000000000000000 c00000000711e200
[ ] GPR24: 0000000000000100 c000200009501120 c00020000cee2800 00000000000003ff
[ ] GPR28: c000200fff018960 0000000000000000 c000200ffcb7fd00 0000000000000000
[ ] NIP [c00000000005eac4] pnv_ioda_setup_pe_res+0x94/0x1a0
[ ] LR [c00000000005ea44] pnv_ioda_setup_pe_res+0x14/0x1a0
[ ] Call Trace:
[ ] [c0002000073838e0] [c00000000005eb98] pnv_ioda_setup_pe_res+0x168/0x1a0 (unreliable)
[ ] [c000200007383970] [c00000000005dd98] pnv_pci_ioda_dma_dev_setup+0x43c/0x970
[ ] [c000200007383a60] [c000000000032cdc] pcibios_bus_add_device+0x78/0x18c
[ ] [c000200007383aa0] [c00000000028f2bc] pci_bus_add_device+0x28/0xbc
[ ] [c000200007383b10] [c00000000028f3a0] pci_bus_add_devices+0x50/0x7c
[ ] [c000200007383b50] [c00000000028f3c4] pci_bus_add_devices+0x74/0x7c
[ ] [c000200007383b90] [c00000000028f3c4] pci_bus_add_devices+0x74/0x7c
[ ] [c000200007383bd0] [c00000000069ad0c] pcibios_init+0xf0/0x104
[ ] [c000200007383c50] [c0000000000106d8] do_one_initcall+0x84/0x1c4
[ ] [c000200007383d20] [c0000000006910b8] kernel_init_freeable+0x264/0x268
[ ] [c000200007383dc0] [c000000000010a68] kernel_init+0x18/0x138
[ ] [c000200007383e20] [c00000000000cbfc] ret_from_kernel_thread+0x5c/0x80
[ ] Instruction dump:
[ ] 7f89e840 409d000c 7fbbf840 409c000c 38210090 4848f448 809c002c e95e0120
[ ] 7ba91764 38a00003 57a7043e 38c00000 <7c8a492e> 5484043e e87e0018 4bff23bd

Hitting the problem is not that easy. It was seen with a (semi-bogus) PCI device with a class code of 0. The generic PCI framework doesn't allocate resources in such a case. The patch is simply skipping resources which are still flagged with IORESOURCE_UNSET. We don't have the problem with 64-bit mem resources, as the address of the resource is checked to be within the range of the 64-bit mmio window. See pnv_ioda_reserve_dev_m64_pe() and pnv_pci_is_m64().
Reported-by: Andrew Jeffery <andrew@aj.id.au> Fixes: 23e79425fe7c ("powerpc/powernv: Simplify pnv_ioda_setup_pe_seg()") Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230120093215.19496-1-fbarrat@linux.ibm.com
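The guard itself is small; a hedged sketch following the commit text (the surrounding function is not reproduced):

    /* skip resources the PCI core never allocated */
    if (res->flags & IORESOURCE_UNSET)
            return;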
2023-02-10powerpc/32: select HAVE_VIRT_CPU_ACCOUNTING_GENNicholas Piggin
cputime_t is no longer a type, so VIRT_CPU_ACCOUNTING_GEN does not have any effect on the type for 32-bit architectures, so there is no reason it can't be supported. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230121095805.2823731-4-npiggin@gmail.com
2023-02-10powerpc/32: implement HAVE_CONTEXT_TRACKING_USER supportNicholas Piggin
Context tracking involves tracking user, kernel, guest switches. 32-bit shares interrupt and syscall entry and exit code (and context tracking calls) with 64-bit, and KVM can not be selected if CONTEXT_TRACKING_USER is enabled, so context tracking can be enabled for 32-bit. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230121095805.2823731-3-npiggin@gmail.com
2023-02-10powerpc: Consolidate 32-bit and 64-bit interrupt_enter_prepareNicholas Piggin
There are two separate implementations for 32-bit and 64-bit which mostly do the same thing. Consolidating on one implementation ends up being smaller and simpler; only the irq soft-mask reconciliation is specific to 64-bit. There should be no real functional change with this patch, but it does bring in the context tracking calls needed for 32-bit to support context tracking. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230121095805.2823731-2-npiggin@gmail.com
2023-02-10powerpc/hv-24x7: Fix pvr check when setting interface versionKajol Jain
Commit ec3eb9d941a9 ("powerpc/perf: Use PVR rather than oprofile field to determine CPU version") added usage of the pvr value instead of the oprofile field to determine the platform. In the hv-24x7 pmu driver code, the pvr check uses PVR_POWER8 when assigning the interface version for the power8 platform. But power8 can also have other pvr values like PVR_POWER8E and PVR_POWER8NVL. Hence the interface version won't be set properly in the case of PVR_POWER8E and PVR_POWER8NVL. Fix this issue by adding checks for PVR_POWER8E and PVR_POWER8NVL as well. Fixes: ec3eb9d941a9 ("powerpc/perf: Use PVR rather than oprofile field to determine CPU version") Reported-by: Sachin Sant <sachinp@linux.ibm.com> Signed-off-by: Kajol Jain <kjain@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230131184804.220756-1-kjain@linux.ibm.com
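A hedged sketch of the widened check; the PVR_* macros are the standard powerpc defines, while the interface-version values and surrounding code are illustrative:

    switch (PVR_VER(mfspr(SPRN_PVR))) {
    case PVR_POWER8:
    case PVR_POWER8E:
    case PVR_POWER8NVL:
            interface_version = 1;
            break;
    default:
            interface_version = 2;
            break;
    }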
2023-02-10powerpc/bpf/32: perform three operands ALU operationsChristophe Leroy
When an ALU instruction is preceded by a MOV instruction that just moves a source register into the destination register of the ALU, replace that MOV+ALU pair with a single ALU operation taking the source of the MOV as second source instead of using its destination. Before the change, code could look like the following, with superfluous separate register move (mr) instructions:

70: 7f c6 f3 78  mr    r6,r30
74: 7f a5 eb 78  mr    r5,r29
78: 30 c6 ff f4  addic r6,r6,-12
7c: 7c a5 01 d4  addme r5,r5

With this commit, the addition instructions take r30 and r29 directly:

70: 30 de ff f4  addic r6,r30,-12
74: 7c bd 01 d4  addme r5,r29

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b6719beaf01f9dcbcdbb787ef67c4a2f8e3a4cb6.1675245773.git.christophe.leroy@csgroup.eu
2023-02-10powerpc/bpf/32: introduce a second source register for ALU operationsChristophe Leroy
At the moment, all ALU operations are performed with the same L-source and destination, requiring the L-source to be moved into the destination via a separate register move, like:

70: 7f c6 f3 78  mr    r6,r30
74: 7f a5 eb 78  mr    r5,r29
78: 30 c6 ff f4  addic r6,r6,-12
7c: 7c a5 01 d4  addme r5,r5

Introduce a second source register to all ALU operations. For the time being that second source register is made equal to the destination register. That change will allow, via the following patch, the generated code to be optimised as:

70: 30 de ff f4  addic r6,r30,-12
74: 7c bd 01 d4  addme r5,r29

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d5aaaba50d9d6b4a0e9f0cd4a5e34101aca1e247.1675245773.git.christophe.leroy@csgroup.eu
2023-02-10powerpc/bpf/32: Optimise some particular const operationsChristophe Leroy
Simplify multiplications and divisions with constants when the constant is 1 or -1. When the constant is a power of 2, replace them with bit shifts. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e53b1f4a4150ec6cabcaeeef82bf9c361b5f9204.1675245773.git.christophe.leroy@csgroup.eu
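A hedged sketch of the strength reduction for a 32-bit multiply by a constant; EMIT() and the PPC_RAW_* helpers are the JIT's usual emitters, and the exact set of cases handled is not reproduced here:

    if (imm == 1) {
            /* x * 1: just move the source into the destination */
            EMIT(PPC_RAW_MR(dst_reg, src2_reg));
    } else if (imm == -1) {
            /* x * -1: negate */
            EMIT(PPC_RAW_NEG(dst_reg, src2_reg));
    } else if (imm > 0 && is_power_of_2(imm)) {
            /* x * 2^n: left shift by n */
            EMIT(PPC_RAW_SLWI(dst_reg, src2_reg, ilog2(imm)));
    }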
2023-02-10powerpc/bpf: Only pad length-variable code at initial passChristophe Leroy
Now that two real additional passes are performed when an extra pass is requested by the BPF core, padding is not needed anymore except during the initial pass, done before memory allocation, to count the maximum possible program size. So, only do the padding when 'image' is NULL. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/921851d6577badc1e6b08b270a0ced80a6a26d03.1675245773.git.christophe.leroy@csgroup.eu
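A hedged sketch of the condition; the variable holding the pad amount is hypothetical:

    /* pad with NOPs only while sizing the program, i.e. before the buffer exists */
    if (image == NULL) {
            while (padded_len--)    /* padded_len: hypothetical count of filler slots */
                    EMIT(PPC_RAW_NOP());
    }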
2023-02-10powerpc/bpf: Perform complete extra passes to update addressesChristophe Leroy
BPF core calls the jit compiler again for an extra pass in order to properly set subprog addresses. Unlike other architectures, powerpc only updates the addresses during that extra pass. It means that holes must have been left in the code in order to enable the maximum possible instruction size. In order to avoid waste of space, and waste of CPU time on powerpc processors on which the NOP instruction is not 0-cycle, perform two real additional passes. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d484a4ac95949ff55fc4344b674e7c0d3ddbfcd5.1675245773.git.christophe.leroy@csgroup.eu
2023-02-10powerpc/bpf/32: BPF prog is never called with more than one argChristophe Leroy
BPF progs are never called with more than one argument, plus the tail call count as a second argument when needed. So, no need to retrieve 9th and 10th argument (5th 64 bits argument) from the stack in prologue. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/89a200fb45048601475c092c5775294dee3886de.1675245773.git.christophe.leroy@csgroup.eu