path: root/include/linux
Age  Commit message  Author
2025-04-10  svsm: Add header with SVSM_VTPM_CMD helpers  (Stefano Garzarella)
Add helpers for the SVSM_VTPM_CMD calls used by the vTPM protocol defined by the AMD SVSM spec [1]. The vTPM protocol follows the Official TPM 2.0 Reference Implementation (originally by Microsoft, now part of the TCG) simulator protocol. [1] "Secure VM Service Module for SEV-SNP Guests" Publication # 58019 Revision: 1.00 Co-developed-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Co-developed-by: Claudio Carvalho <cclaudio@linux.ibm.com> Signed-off-by: Claudio Carvalho <cclaudio@linux.ibm.com> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org> Link: https://lore.kernel.org/r/20250403100943.120738-3-sgarzare@redhat.com
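The wire format behind these helpers follows the reference-simulator protocol named above; a rough sketch of a send-command request (the struct and field names here are illustrative assumptions, not the new header's actual definitions):

    struct vtpm_send_command_req {          /* hypothetical layout */
        u32 cmd;                /* TPM_SEND_COMMAND (8) in the reference simulator */
        u8  locality;
        u32 inbuf_size;         /* length of the marshalled TPM 2.0 command below */
        u8  inbuf[];
    } __packed;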
2025-04-10  Merge tag 'gpio-fixes-for-v6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux  (Linus Torvalds)
Pull gpio fixes from Bartosz Golaszewski:
 - fix resource handling in gpio-tegra186
 - fix wakeup source leaks in gpio-mpc8xxx and gpio-zynq
 - fix minor issues with some GPIO OF quirks
 - deprecate GPIOD_FLAGS_BIT_NONEXCLUSIVE and devm_gpiod_unhinge() symbols and add a TODO task to track replacing them with a better solution

* tag 'gpio-fixes-for-v6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
  gpiolib: of: Move Atmel HSMCI quirk up out of the regulator comment
  gpiolib: of: Fix the choice for Ingenic NAND quirk
  gpio: zynq: Fix wakeup source leaks on device unbind
  gpio: mpc8xxx: Fix wakeup source leaks on device unbind
  gpio: TODO: track the removal of regulator-related workarounds
  MAINTAINERS: add more keywords for the GPIO subsystem entry
  gpio: deprecate devm_gpiod_unhinge()
  gpio: deprecate the GPIOD_FLAGS_BIT_NONEXCLUSIVE flag
  gpio: tegra186: fix resource handling in ACPI probe path
2025-04-10  Merge tag 'mtd/fixes-for-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux  (Linus Torvalds)
Pull mtd fixes from Miquel Raynal:
 "Two important fixes: the build of the SPI NAND layer with old GCC versions as well as the fix of the Qpic Makefile which was wrong in the first place. There are also two smaller fixes about a missing error and status check"

* tag 'mtd/fixes-for-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux:
  mtd: spinand: Fix build with gcc < 7.5
  mtd: rawnand: Add status chack in r852_ready()
  mtd: inftlcore: Add error check for inftl_read_oob()
  mtd: nand: Drop explicit test for built-in CONFIG_SPI_QPIC_SNAND
2025-04-10  firmware: cs_dsp: test_bin_error: Fix uninitialized data used as fw version  (Richard Fitzgerald)
Call cs_dsp_mock_xm_header_get_fw_version() to get the firmware version from the dummy XM header data in cs_dsp_bin_err_test_common_init(). Make the same change to cs_dsp_bin_test_common_init() and remove the cs_dsp_mock_xm_header_get_fw_version_from_regmap() function. The code in cs_dsp_test_bin.c was correctly calling cs_dsp_mock_xm_header_get_fw_version_from_regmap() to fetch the fw version from a dummy header it wrote to XM registers. However in cs_dsp_test_bin_error.c the test doesn't stuff a dummy header into XM, it populates it the normal way using a wmfw file. It should have called cs_dsp_mock_xm_header_get_fw_version() to get the data from its blob buffer, but was calling cs_dsp_mock_xm_header_get_fw_version_from_regmap(). As nothing had been written to the registers this returned the value of uninitialized data. The only other use of cs_dsp_mock_xm_header_get_fw_version_from_regmap() was cs_dsp_test_bin.c, but it doesn't need to use it. It already has a blob buffer containing the dummy XM header so it can use cs_dsp_mock_xm_header_get_fw_version() to read from that. Fixes: cd8c058499b6 ("firmware: cs_dsp: Add KUnit testing of bin error cases") Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com> Link: https://patch.msgid.link/20250410132129.1312541-1-rf@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2025-04-10  phy: exynos5-usbdrd: add exynos7870 USBDRD support  (Kaustabh Chakraborty)
Implement support for Exynos7870 USB DRD on top of the existing exynos5-usbdrd driver. Exynos7870 has a single USB 2.0 DRD PHY controller and no 3.0 PHYs. Thus, it only supports the UTMI interface. Moreover, the PMU register offset for enabling the PHY controller is different for SoCs such as Exynos7870, where BIT(0) is for the 3.0 PHY and BIT(1) is for the 2.0 PHY. The phy_isol function for Exynos7870 uses the appropriate register offsets. Signed-off-by: Kaustabh Chakraborty <kauschluss@disroot.org> Link: https://lore.kernel.org/r/20250410-exynos7870-usbphy-v2-3-2eb005987455@disroot.org Signed-off-by: Vinod Koul <vkoul@kernel.org>
2025-04-10  cpufreq: Drop unused cpufreq_get_policy()  (Rafael J. Wysocki)
A recent change has introduced a bug into cpufreq_get_policy(), but this function is not used, so it's better to drop it altogether. Reported-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://patch.msgid.link/2802770.mvXUDI8C0e@rjwysocki.net
2025-04-10  fbdev: Remove constants of unused events  (Thomas Zimmermann)
The constants FB_EVENT_MODE_CHANGE and FB_EVENT_BLANK are unused. Remove them from the header file. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Acked-by: Simona Vetter <simona.vetter@ffwll.ch> Link: https://lore.kernel.org/r/20250321095517.313713-12-tzimmermann@suse.de Signed-off-by: Lee Jones <lee@kernel.org>
2025-04-10  leds: backlight trigger: Replace fb events with a dedicated function call  (Thomas Zimmermann)
Remove support for fb events from the led backlight trigger. Provide the helper ledtrig_backlight_blank() instead. Call it from fbdev to inform the trigger of changes to a display's blank state. Fbdev maintains a list of all installed notifiers. Instead of the fbdev notifiers, maintain an internal list of led backlight triggers. v3: - export ledtrig_backlight_blank() v2: - maintain global list of led backlight triggers (Lee) - avoid IS_REACHABLE() in source file (Lee) - notify on changes to blank state instead of display state - use lock guards - initialize led list and list mutex Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Acked-by: Simona Vetter <simona.vetter@ffwll.ch> Link: https://lore.kernel.org/r/20250321095517.313713-11-tzimmermann@suse.de Signed-off-by: Lee Jones <lee@kernel.org>
2025-04-10  backlight: lcd: Replace fb events with a dedicated function call  (Thomas Zimmermann)
Remove support for fb events from the lcd subsystem. Provide the helper lcd_notify_blank_all() instead. In fbdev, call lcd_notify_blank_all() to inform the lcd subsystem of changes to a display's blank state. Fbdev maintains a list of all installed notifiers. Instead of fbdev notifiers, maintain an internal list of lcd devices. v3: - export lcd_notify_mode_change_all() (kernel test robot) v2: - maintain global list of lcd devices - avoid IS_REACHABLE() in source file - use lock guards - initialize lcd list and list mutex Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Acked-by: Simona Vetter <simona.vetter@ffwll.ch> Reviewed-by: "Daniel Thompson (RISCstar)" <danielt@kernel.org> Link: https://lore.kernel.org/r/20250321095517.313713-9-tzimmermann@suse.de Signed-off-by: Lee Jones <lee@kernel.org>
2025-04-10  backlight: Replace fb events with a dedicated function call  (Thomas Zimmermann)
Remove support for fb events from the backlight subsystem. Provide the helper backlight_notify_blank_all() instead. Also export the existing helper backlight_notify_blank() to update a single backlight device. In fbdev, call either helper to inform the backlight subsystem of changes to a display's blank state. If the framebuffer device has a specific backlight, only update this one; otherwise update all. v4: - protect backlight declarations with IS_REACHABLE() (kernel test robot) v3: - declare empty fb_bl_notify_blank() as static inline (kernel test robot) Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Acked-by: Simona Vetter <simona.vetter@ffwll.ch> Reviewed-by: "Daniel Thompson (RISCstar)" <danielt@kernel.org> Link: https://lore.kernel.org/r/20250321095517.313713-7-tzimmermann@suse.de Signed-off-by: Lee Jones <lee@kernel.org>
2025-04-10  backlight: Implement fbdev tracking with blank state from event  (Thomas Zimmermann)
Look at the blank state provided by FB_EVENT_BLANK to determine whether to enable or disable a backlight. Remove the tracking fields from struct backlight_device. Tracking requires three variables: fb_on, prev_fb_on and the backlight's use_count. If fb_on is true, the display has been unblanked. The backlight needs to be enabled if the display was blanked before (i.e., prev_fb_on is false) or if use_count is still at 0. If fb_on is false, the display has been blanked. In this case, the backlight has to be disabled if the display was unblanked before and the backlight's use_count is greater than 0. This change removes fbdev state tracking from backlight. All the backlight requires is its own use counter and information about changes to the display. Removing fbdev internals makes backlight drivers easier to integrate into other display drivers, such as DRM. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: "Daniel Thompson (RISCstar)" <danielt@kernel.org> Acked-by: Simona Vetter <simona.vetter@ffwll.ch> Link: https://lore.kernel.org/r/20250321095517.313713-5-tzimmermann@suse.de Signed-off-by: Lee Jones <lee@kernel.org>
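The rule above amounts to a few lines of logic; a sketch, with fb_on/prev_fb_on taken from this changelog and backlight_enable()/backlight_disable() standing in for whatever the event handler actually calls:

    if (fb_on && (!prev_fb_on || bd->use_count == 0))
        backlight_enable(bd);    /* display was just unblanked */
    else if (!fb_on && prev_fb_on && bd->use_count > 0)
        backlight_disable(bd);   /* display was just blanked */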
2025-04-10  fbdev: Track display blanking state  (Thomas Zimmermann)
Store the display's blank status in struct fb_info.blank and track it in fb_blank(). As an extra, the status is now available from the sysfs blank attribute. Support for blanking is optional. Therefore framebuffer_alloc() initializes the state to FB_BLANK_UNBLANK (i.e., the display is on). If the fb_blank callback has been set, register_framebuffer() sets the state to FB_BLANK_POWERDOWN. On the first modeset, the call to fb_blank() will update it to _UNBLANK. This is important, as listeners to FB_EVENT_BLANK will now see the display being switched on. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Acked-by: Simona Vetter <simona.vetter@ffwll.ch> Link: https://lore.kernel.org/r/20250321095517.313713-3-tzimmermann@suse.de Signed-off-by: Lee Jones <lee@kernel.org>
2025-04-10  device property: Split fwnode_get_child_node_count()  (Andy Shevchenko)
The new helper is introduced to allow counting the child firmware nodes of their parent without requiring a device to be passed. This also makes the fwnode and device property API more symmetrical with the rest. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: "Rafael J. Wysocki" <rafael@kernel.org> Reviewed-by: Sakari Ailus <sakari.ailus@linux.intel.com> Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20250310150835.3139322-2-andriy.shevchenko@linux.intel.com Signed-off-by: Lee Jones <lee@kernel.org>
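Usage becomes symmetrical with the existing device-based helper; a minimal sketch (assuming the device variant now simply wraps the fwnode one):

    struct fwnode_handle *fwnode = dev_fwnode(dev);

    /* new: count children of a firmware node directly */
    unsigned int n = fwnode_get_child_node_count(fwnode);

    /* existing device-based helper */
    unsigned int m = device_get_child_node_count(dev);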
2025-04-10  mm/kmap: Add kmap_local_page_try_from_panic()  (Jocelyn Falempe)
kmap_local_page() can be unsafe to call from a panic handler, if CONFIG_HIGHMEM is set, and the page is in the highmem zone. So add kmap_local_page_try_from_panic() to handle this case. Suggested-by: Simona Vetter <simona.vetter@ffwll.ch> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com> Link: https://lore.kernel.org/r/20250407140138.162383-2-jfalempe@redhat.com
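A sketch of the intended call pattern in a panic handler, assuming the new helper returns NULL when the page cannot be mapped safely:

    void *vaddr = kmap_local_page_try_from_panic(page);

    if (!vaddr)
        return;                 /* highmem page that must not be touched here */
    /* ... scribble onto the framebuffer page ... */
    kunmap_local(vaddr);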
2025-04-09  net: remove cpu stall in txq_trans_update()  (Eric Dumazet)
txq_trans_update() currently uses txq->xmit_lock_owner to conditionally update txq->trans_start. For regular devices, txq->xmit_lock_owner is updated from HARD_TX_LOCK() and HARD_TX_UNLOCK(), and this apparently causes cpu stalls. Using dev->lltx, which sits in a read-mostly cache-line, and already used in HARD_TX_LOCK() and HARD_TX_UNLOCK() helps cpu prediction. On an AMD EPYC 7B12 dual socket server, tcp_rr with 128 threads and 30,000 flows gets a 5 % increase in throughput. As explained in commit 95ecba62e2fd ("net: fix races in netdev_tx_sent_queue()/dev_watchdog()") I am planning to no longer update txq->trans_start in the fast path in a followup patch. Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20250408202742.2145516-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
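The gist of the change, sketched (the exact signature in the tree may differ; this assumes dev is now passed in so the read-mostly dev->lltx bit gates the store):

    /* before: depends on txq->xmit_lock_owner, written by HARD_TX_{LOCK,UNLOCK}() */
    if (txq->xmit_lock_owner != -1)
        WRITE_ONCE(txq->trans_start, jiffies);

    /* after: dev->lltx sits in a read-mostly cacheline */
    if (!dev->lltx)
        WRITE_ONCE(txq->trans_start, jiffies);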
2025-04-09  xdp: double protect netdev->xdp_flags with netdev->lock  (Jakub Kicinski)
Protect xdp_features with netdev->lock. This way pure readers no longer have to take rtnl_lock to access the field. This includes calling NETDEV_XDP_FEAT_CHANGE under the lock. Looks like that's fine for bonding, the only "real" listener; it's the same as an ethtool feature change. In terms of normal drivers, only GVE needs special consideration (other drivers don't use the instance lock or don't support XDP). It calls the xdp_set_features_flag() helper from gve_init_priv(), which in turn is called from gve_reset_recovery() (locked) or prior to netdev registration. So switch to _locked. Reviewed-by: Joe Damato <jdamato@fastly.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Acked-by: Harshitha Ramamurthy <hramamurthy@google.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://patch.msgid.link/20250408195956.412733-6-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-09  net: designate XSK pool pointers in queues as "ops protected"  (Jakub Kicinski)
Read accesses go via xsk_get_pool_from_qid(); the calls coming from the core and gve look safe (other "ops locked" drivers don't support XSK). Write accesses go via xsk_reg_pool_at_qid() and xsk_clear_pool_at_qid(). The former is already under the ops lock, the latter is not (both coming from the workqueue via xp_clear_dev() and NETDEV_UNREGISTER via xsk_notifier()). Acked-by: Stanislav Fomichev <sdf@fomichev.me> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250408195956.412733-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-09  net: avoid potential race between netdev_get_by_index_lock() and netns switch  (Jakub Kicinski)
netdev_get_by_index_lock() performs the following steps:

    rcu_lock();
    dev = lookup(netns, ifindex);
    dev_get(dev);
    rcu_unlock();
    [... lock & validate the dev ...]
    return dev

Validation right now only checks if the device is registered, but since the lookup is netns-aware we must also protect against the device switching netns right after we dropped the RCU lock. Otherwise the caller in netns1 may get a pointer to a device which has just switched to netns2. We can't hold the lock for the entire netns change process (because of the NETDEV_UNREGISTER notifier), and there's no existing marking to indicate that the netns is unlisted because of a netns move, so add one. AFAIU none of the existing netdev_get_by_index_lock() callers can suffer from this problem (NAPI code double-checks the netns membership and other callers are either under rtnl_lock or not ns-sensitive), so this patch does not have to be treated as a fix. Reviewed-by: Joe Damato <jdamato@fastly.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250408195956.412733-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-09  PCI: Remove pcim_iounmap_regions()  (Philipp Stanner)
All users of the deprecated function pcim_iounmap_regions() have been ported by now. Remove it. Signed-off-by: Philipp Stanner <pstanner@redhat.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Zijun Hu <quic_zijuhu@quicinc.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://patch.msgid.link/20250327110707.20025-4-phasta@kernel.org
2025-04-09  cpufreq: Pass policy pointer to ->update_limits()  (Rafael J. Wysocki)
Since cpufreq_update_limits() obtains a cpufreq policy pointer for the given CPU and reference counts the corresponding policy object, it may as well pass the policy pointer to the cpufreq driver's ->update_limits() callback which allows that callback to avoid invoking cpufreq_cpu_get() for the same CPU. Accordingly, redefine ->update_limits() to take a policy pointer instead of a CPU number and update both drivers implementing it, intel_pstate and amd-pstate, as needed. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://patch.msgid.link/8560367.NyiUUSuA9g@rjwysocki.net
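The prototype change described above, roughly:

    /* before */
    void (*update_limits)(unsigned int cpu);

    /* after: the caller already holds a reference on the policy */
    void (*update_limits)(struct cpufreq_policy *policy);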
2025-04-09  cpufreq: Drop cpufreq_cpu_acquire() and cpufreq_cpu_release()  (Rafael J. Wysocki)
Since cpufreq_cpu_acquire() and cpufreq_cpu_release() have no more users in the tree, remove them. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://patch.msgid.link/3880470.kQq0lBPeGt@rjwysocki.net
2025-04-09  cpufreq: Add and use cpufreq policy locking guards  (Rafael J. Wysocki)
Introduce "read" and "write" locking guards for cpufreq policies and use them where applicable in the cpufreq core. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Tested-by: Sudeep Holla <sudeep.holla@arm.com> Link: https://patch.msgid.link/8518682.T7Z3S40VBb@rjwysocki.net
2025-04-09  hrtimer: Add missing ACCESS_PRIVATE() for hrtimer::function  (Nam Cao)
The "function" field of struct hrtimer has been changed to private, but two instances have not been converted to use ACCESS_PRIVATE(). Convert them to use ACCESS_PRIVATE(). Fixes: 04257da0c99c ("hrtimers: Make callback function pointer private") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Nam Cao <namcao@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250408103854.1851093-1-namcao@linutronix.de Closes: https://lore.kernel.org/oe-kbuild-all/202504071931.vOVl13tt-lkp@intel.com/ Closes: https://lore.kernel.org/oe-kbuild-all/202504072155.5UAZjYGU-lkp@intel.com/
2025-04-09  genirq/msi: Rename msi_[un]lock_descs()  (Thomas Gleixner)
Now that all abuse is gone and the legit users are converted to guard(msi_descs_lock), rename the lock functions and document them as internal. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huwei.com> Link: https://lore.kernel.org/all/20250319105506.864699741@linutronix.de
2025-04-09  genirq/msi: Use lock guards for MSI descriptor locking  (Thomas Gleixner)
Provide a lock guard for MSI descriptor locking and update the core code accordingly. No functional change intended. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/all/20250319105506.144672678@linutronix.de
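With the guard in place, a locked section reduces to a sketch like the following (the guard class name is the one used in this series; the function itself is illustrative):

    static void msi_do_locked_work(struct device *dev)    /* illustrative */
    {
        guard(msi_descs_lock)(dev);
        /* the MSI descriptor list stays locked until end of scope;
         * no explicit unlock on return or error paths */
    }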
2025-04-09  cleanup: Provide retain_and_null_ptr()  (Thomas Gleixner)
In cases where an allocation is consumed by another function, the allocation needs to be retained on success or freed on failure. The code pattern is usually:

    struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
    struct bar *b;

    ...
    // Initialize f
    ...
    if (ret)
        goto free;
    ...
    b = bar_create(f);
    if (!b) {
        ret = -ENOMEM;
        goto free;
    }
    ...
    return 0;

free:
    kfree(f);
    return ret;

This prevents using __free(kfree) on @f because there is no canonical way to tell the cleanup code that the allocation should not be freed. Abusing no_free_ptr() by force ignoring the return value is not really a sensible option either. Provide an explicit macro retain_and_null_ptr(), which NULLs the cleanup pointer. That makes it easy to analyze and reason about. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: James Bottomley <James.Bottomley@HansenPartnership.com> Link: https://lore.kernel.org/all/20250319105506.083538907@linutronix.de
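With the macro, the pattern above can use __free(kfree) and lose the manual error path; a sketch reusing the foo/bar names from the example:

    struct foo *f __free(kfree) = kzalloc(sizeof(*f), GFP_KERNEL);
    struct bar *b;

    if (!f)
        return -ENOMEM;
    /* ... initialize f ... */

    b = bar_create(f);
    if (!b)
        return -ENOMEM;         /* f is freed automatically on this path */

    retain_and_null_ptr(f);     /* b now owns the allocation; cleanup sees NULL */
    return 0;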
2025-04-09  ACPI: Add missing prototype for non CONFIG_SUSPEND/CONFIG_X86 case  (Mario Limonciello)
acpi_register_lps0_dev() and acpi_unregister_lps0_dev() may be used in drivers that don't require CONFIG_SUSPEND or compile on !X86. Add prototypes for those cases. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202502191627.fRgoBwcZ-lkp@intel.com/ Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://patch.msgid.link/20250407183656.1503446-1-superm1@kernel.org Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-04-09  stack_tracer: move sysctl registration to kernel/trace/trace_stack.c  (Joel Granados)
Move stack_tracer_enabled into trace_stack_sysctl_table. This is part of a greater effort to move ctl tables into their respective subsystems which will reduce the merge conflicts in kernel/sysctl.c. Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Joel Granados <joel.granados@kernel.org>
2025-04-09  tracing: Move trace sysctls into trace.c  (Joel Granados)
Move trace ctl tables into their own const array in kernel/trace/trace.c. The sysctl table registration is called with subsys_initcall, placing it after its original place in proc_root_init. This is part of a greater effort to move ctl tables into their respective subsystems which will reduce the merge conflicts in kernel/sysctl.c. Signed-off-by: Joel Granados <joel.granados@kernel.org> Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-04-09  Merge tag 'gpiod-is-equal-for-v6.16-rc1' of gitolite.kernel.org:pub/scm/linux/kernel/git/brgl/linux into gpio/for-next  (Bartosz Golaszewski)
Immutable tag for the regulator tree to pull from:
  gpio: provide gpiod_is_equal()
2025-04-09  gpio: provide gpiod_is_equal()  (Bartosz Golaszewski)
There are users in the kernel that directly compare raw GPIO descriptor pointers in order to determine whether they refer to the same physical GPIO pin. This happens to work but is not guaranteed by any API contract. Let's provide a comparator function that hides the actual logic. Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20250407-gpiod-is-equal-v1-1-7d85f568ae6e@linaro.org Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
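Intended usage, sketched (assuming the comparator simply reports whether both descriptors name the same physical line):

    /* before: relies on descriptor pointers being unique, which is not guaranteed */
    same = (desc_a == desc_b);

    /* after */
    same = gpiod_is_equal(desc_a, desc_b);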
2025-04-08  net: Drop unused @sk of __skb_try_recv_from_queue()  (Michal Luczaj)
__skb_try_recv_from_queue() deals with a queue, @sk is not used since commit e427cad6eee4 ("net: datagram: drop 'destructor' argument from several helpers"). Remove sk from function parameters, adapt callers. No functional change intended. Signed-off-by: Michal Luczaj <mhal@rbox.co> Reviewed-by: Joe Damato <jdamato@fastly.com> Link: https://patch.msgid.link/20250407-cleanup-drop-param-sk-v1-1-cd076979afac@rbox.co Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-08  udp_tunnel: create a fastpath GRO lookup.  (Paolo Abeni)
Most UDP tunnels bind a socket to a local port, with ANY address, no peer and no interface index specified. Additionally it's quite common to have a single tunnel device per namespace. Track in each namespace the UDP tunnel socket respecting the above. When only a single one is present, store a reference in the netns. When such reference is not NULL, UDP tunnel GRO lookup just needs to match the incoming packet destination port vs the socket local port. The tunnel socket never sets the reuse[port] flag[s]. When bound to no address and interface, no other socket can exist in the same netns matching the specified local port. Matching packets with non-local destination addresses will be aggregated, and eventually segmented as needed - no behavior changes intended. Restrict the optimization to kernel sockets only: it covers all the relevant use-cases, and user-space owned sockets could be disconnected and rebound after setup_udp_tunnel_sock(), breaking the uniqueness assumption. Note that the UDP tunnel socket reference is stored into struct netns_ipv4 for both IPv4 and IPv6 tunnels. That is intentional to keep all the fastpath-related netns fields in the same struct and allow cacheline-based optimization. Currently both the IPv4 and IPv6 socket pointers share the same cacheline as the `udp_table` field. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/41d16bc8d1257d567f9344c445b4ae0b4a91ede4.1744040675.git.pabeni@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-08  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "ARM: - Rework heuristics for resolving the fault IPA (HPFAR_EL2 v. re-walk stage-1 page tables) to align with the architecture. This avoids possibly taking an SEA at EL2 on the page table walk or using an architecturally UNKNOWN fault IPA - Use acquire/release semantics in the KVM FF-A proxy to avoid reading a stale value for the FF-A version - Fix KVM guest driver to match PV CPUID hypercall ABI - Use Inner Shareable Normal Write-Back mappings at stage-1 in KVM selftests, which is the only memory type for which atomic instructions are architecturally guaranteed to work s390: - Don't use %pK for debug printing and tracepoints x86: - Use a separate subclass when acquiring KVM's per-CPU posted interrupts wakeup lock in the scheduled out path, i.e. when adding a vCPU on the list of vCPUs to wake, to workaround a false positive deadlock. The schedule out code runs with a scheduler lock that the wakeup handler takes in the opposite order; but it does so with IRQs disabled and cannot run concurrently with a wakeup - Explicitly zero-initialize on-stack CPUID unions - Allow building irqbypass.ko as as module when kvm.ko is a module - Wrap relatively expensive sanity check with KVM_PROVE_MMU - Acquire SRCU in KVM_GET_MP_STATE to protect guest memory accesses selftests: - Add more scenarios to the MONITOR/MWAIT test - Add option to rseq test to override /dev/cpu_dma_latency - Bring list of exit reasons up to date - Cleanup Makefile to list once tests that are valid on all architectures Other: - Documentation fixes" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (26 commits) KVM: arm64: Use acquire/release to communicate FF-A version negotiation KVM: arm64: selftests: Explicitly set the page attrs to Inner-Shareable KVM: arm64: selftests: Introduce and use hardware-definition macros KVM: VMX: Use separate subclasses for PI wakeup lock to squash false positive KVM: VMX: Assert that IRQs are disabled when putting vCPU on PI wakeup list KVM: x86: Explicitly zero-initialize on-stack CPUID unions KVM: Allow building irqbypass.ko as as module when kvm.ko is a module KVM: x86/mmu: Wrap sanity check on number of TDP MMU pages with KVM_PROVE_MMU KVM: selftests: Add option to rseq test to override /dev/cpu_dma_latency KVM: x86: Acquire SRCU in KVM_GET_MP_STATE to protect guest memory accesses Documentation: kvm: remove KVM_CAP_MIPS_TE Documentation: kvm: organize capabilities in the right section Documentation: kvm: fix some definition lists Documentation: kvm: drop "Capability" heading from capabilities Documentation: kvm: give correct name for KVM_CAP_SPAPR_MULTITCE Documentation: KVM: KVM_GET_SUPPORTED_CPUID now exposes TSC_DEADLINE selftests: kvm: list once tests that are valid on all architectures selftests: kvm: bring list of exit reasons up to date selftests: kvm: revamp MONITOR/MWAIT tests KVM: arm64: Don't translate FAR if invalid/unsafe ...
2025-04-08  Merge tag 'cgroup-for-6.15-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup  (Linus Torvalds)
Pull cgroup fixes from Tejun Heo:
 - A number of cpuset remote partition related fixes and cleanups along with selftest updates.
 - A change from this merge window made cgroup_rstat_updated_list() be called outside cgroup_rstat_lock, leading to list corruptions. Fix it by relocating the call inside the lock.

* tag 'cgroup-for-6.15-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
  cgroup/cpuset: Fix race between newly created partition and dying one
  cgroup: rstat: call cgroup_rstat_updated_list with cgroup_rstat_lock
  selftest/cgroup: Add a remote partition transition test to test_cpuset_prs.sh
  selftest/cgroup: Clean up and restructure test_cpuset_prs.sh
  selftest/cgroup: Update test_cpuset_prs.sh to use | as effective CPUs and state separator
  cgroup/cpuset: Remove unneeded goto in sched_partition_write() and rename it
  cgroup/cpuset: Code cleanup and comment update
  cgroup/cpuset: Don't allow creation of local partition over a remote one
  cgroup/cpuset: Remove remote_partition_check() & make update_cpumasks_hier() handle remote partition
  cgroup/cpuset: Fix error handling in remote_partition_disable()
  cgroup/cpuset: Fix incorrect isolated_cpus update in update_parent_effective_cpumask()
2025-04-08  perf/x86/intel: Support auto counter reload  (Kan Liang)
The relative rates among two or more events are useful for performance analysis, e.g., a high branch miss rate may indicate a performance issue. Usually, the samples with a relative rate that exceeds some threshold are more useful. However, the traditional sampling takes samples of events separately. To get the relative rates among two or more events, a high sample rate is required, which can bring high overhead. Many samples taken in the non-hotspot area are also dropped (useless) in the post-process. The auto counter reload (ACR) feature takes samples when the relative rate of two or more events exceeds some threshold, which provides the fine-grained information at a low cost. To support the feature, two sets of MSRs are introduced. For a given counter IA32_PMC_GPn_CTR/IA32_PMC_FXm_CTR, bit fields in the IA32_PMC_GPn_CFG_B/IA32_PMC_FXm_CFG_B MSR indicate which counter(s) can cause a reload of that counter. The reload value is stored in the IA32_PMC_GPn_CFG_C/IA32_PMC_FXm_CFG_C. The details can be found at Intel SDM (085), Volume 3, 21.9.11 Auto Counter Reload. In the hw_config(), an ACR event is specially configured, because the cause/reloadable counter mask has to be applied to the dyn_constraint. Besides the HW limit, e.g., not support perf metrics, PDist and etc, a SW limit is applied as well. ACR events in a group must be contiguous. It facilitates the later conversion from the event idx to the counter idx. Otherwise, the intel_pmu_acr_late_setup() has to traverse the whole event list again to find the "cause" event. Also, add a new flag PERF_X86_EVENT_ACR to indicate an ACR group, which is set to the group leader. The late setup() is also required for an ACR group. It's to convert the event idx to the counter idx, and saved it in hw.config1. The ACR configuration MSRs are only updated in the enable_event(). The disable_event() doesn't clear the ACR CFG register. Add acr_cfg_b/acr_cfg_c in the struct cpu_hw_events to cache the MSR values. It can avoid a MSR write if the value is not changed. Expose an acr_mask to the sysfs. The perf tool can utilize the new format to configure the relation of events in the group. The bit sequence of the acr_mask follows the events enabled order of the group. Example: Here is the snippet of the mispredict.c. Since the array has a random numbers, jumps are random and often mispredicted. The mispredicted rate depends on the compared value. For the Loop1, ~11% of all branches are mispredicted. For the Loop2, ~21% of all branches are mispredicted. main() { ... for (i = 0; i < N; i++) data[i] = rand() % 256; ... /* Loop 1 */ for (k = 0; k < 50; k++) for (i = 0; i < N; i++) if (data[i] >= 64) sum += data[i]; ... ... /* Loop 2 */ for (k = 0; k < 50; k++) for (i = 0; i < N; i++) if (data[i] >= 128) sum += data[i]; ... } Usually, a code with a high branch miss rate means a bad performance. To understand the branch miss rate of the codes, the traditional method usually samples both branches and branch-misses events. E.g., perf record -e "{cpu_atom/branch-misses/ppu, cpu_atom/branch-instructions/u}" -c 1000000 -- ./mispredict [ perf record: Woken up 4 times to write data ] [ perf record: Captured and wrote 0.925 MB perf.data (5106 samples) ] The 5106 samples are from both events and spread in both Loops. In the post-process stage, a user can know that the Loop 2 has a 21% branch miss rate. Then they can focus on the samples of branch-misses events for the Loop 2. With this patch, the user can generate the samples only when the branch miss rate > 20%. 
For example, perf record -e "{cpu_atom/branch-misses,period=200000,acr_mask=0x2/ppu, cpu_atom/branch-instructions,period=1000000,acr_mask=0x3/u}" -- ./mispredict (Two different periods are applied to branch-misses and branch-instructions. The ratio is set to 20%. If the branch-instructions is overflowed first, the branch-miss rate < 20%. No samples should be generated. All counters should be automatically reloaded. If the branch-misses is overflowed first, the branch-miss rate > 20%. A sample triggered by the branch-misses event should be generated. Just the counter of the branch-instructions should be automatically reloaded. The branch-misses event should only be automatically reloaded when the branch-instructions is overflowed. So the "cause" event is the branch-instructions event. The acr_mask is set to 0x2, since the event index in the group of branch-instructions is 1. The branch-instructions event is automatically reloaded no matter which events are overflowed. So the "cause" events are the branch-misses and the branch-instructions event. The acr_mask should be set to 0x3.) [ perf record: Woken up 1 times to write data ] [ perf record: Captured and wrote 0.098 MB perf.data (2498 samples) ] $perf report Percent │154: movl $0x0,-0x14(%rbp) │ ↓ jmp 1af │ for (i = j; i < N; i++) │15d: mov -0x10(%rbp),%eax │ mov %eax,-0x18(%rbp) │ ↓ jmp 1a2 │ if (data[i] >= 128) │165: mov -0x18(%rbp),%eax │ cltq │ lea 0x0(,%rax,4),%rdx │ mov -0x8(%rbp),%rax │ add %rdx,%rax │ mov (%rax),%eax │ ┌──cmp $0x7f,%eax 100.00 0.00 │ ├──jle 19e │ │sum += data[i]; The 2498 samples are all from the branch-misses events for the Loop 2. The number of samples and overhead is significantly reduced without losing any information. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Thomas Falcon <thomas.falcon@intel.com> Link: https://lkml.kernel.org/r/20250327195217.2683619-6-kan.liang@linux.intel.com
2025-04-08  perf: Extend the bit width of the arch-specific flag  (Kan Liang)
The auto counter reload feature requires an event flag to indicate an auto counter reload group, which can only be scheduled on specific counters that enumerated in CPUID. However, the hw_perf_event.flags has run out on X86. Two solutions were considered to address the issue. - Currently, 20 bits are reserved for the architecture-specific flags. Only the bit 31 is used for the generic flag. There is still plenty of space left. Reserve 8 more bits for the arch-specific flags. - Add a new X86 specific hw_perf_event.flags1 to support more flags. The former is implemented. Enough room is still left in the global generic flag. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Thomas Falcon <thomas.falcon@intel.com> Link: https://lkml.kernel.org/r/20250327195217.2683619-4-kan.liang@linux.intel.com
2025-04-08  perf/x86: Add dynamic constraint  (Kan Liang)
More and more features require a dynamic event constraint, e.g., branch counter logging, auto counter reload, Arch PEBS, etc. Add a generic flag, PMU_FL_DYN_CONSTRAINT, to indicate the case. It avoids keeping adding the individual flag in intel_cpuc_prepare(). Add a variable dyn_constraint in the struct hw_perf_event to track the dynamic constraint of the event. Apply it if it's updated. Apply the generic dynamic constraint for branch counter logging. Many features on and after V6 require dynamic constraint. So unconditionally set the flag for V6+. Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Thomas Falcon <thomas.falcon@intel.com> Link: https://lkml.kernel.org/r/20250327195217.2683619-2-kan.liang@linux.intel.com
2025-04-08  perf: Make perf_pmu_unregister() useable  (Peter Zijlstra)
Previously it was only safe to call perf_pmu_unregister() if there were no active events of that pmu around -- which was impossible to guarantee since it races all sorts against perf_init_event(). Rework the whole thing by: - keeping track of all events for a given pmu - 'hiding' the pmu from perf_init_event() - waiting for the appropriate (s)rcu grace periods such that all prior references to the PMU will be completed - detaching all still existing events of that pmu (see first point) and moving them to a new REVOKED state. - actually freeing the pmu data. Where notably the new REVOKED state must inhibit all event actions from reaching code that wants to use event->pmu. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com> Link: https://lkml.kernel.org/r/20250307193723.525402029@infradead.org
2025-04-08  perf: Fix hang while freeing sigtrap event  (Frederic Weisbecker)
Perf can hang while freeing a sigtrap event if a related deferred signal hadn't managed to be sent before the file got closed:

    perf_event_overflow()
       task_work_add(perf_pending_task)

    fput()
       task_work_add(____fput())

    task_work_run()
       ____fput()
          perf_release()
             perf_event_release_kernel()
                _free_event()
                   perf_pending_task_sync()
                      task_work_cancel() -> FAILED
                      rcuwait_wait_event()

Once task_work_run() is running, the list of pending callbacks is removed from the task_struct and from this point on task_work_cancel() can't remove any pending and not yet started work items, hence the task_work_cancel() failure and the hang on rcuwait_wait_event(). Task work could be changed to remove one work at a time, so a work running on the current task can always cancel a pending one, however the wait / wake design is still subject to inverted dependencies when remote targets are involved, as pictured by Oleg:

    T1                                       T2
    fd = perf_event_open(pid => T2->pid);    fd = perf_event_open(pid => T1->pid);
    close(fd)                                close(fd)
    <IRQ>                                    <IRQ>
    perf_event_overflow()                    perf_event_overflow()
    task_work_add(perf_pending_task)         task_work_add(perf_pending_task)
    </IRQ>                                   </IRQ>
    fput()                                   fput()
    task_work_add(____fput())                task_work_add(____fput())
    task_work_run()                          task_work_run()
    ____fput()                               ____fput()
    perf_release()                           perf_release()
    perf_event_release_kernel()              perf_event_release_kernel()
    _free_event()                            _free_event()
    perf_pending_task_sync()                 perf_pending_task_sync()
    rcuwait_wait_event()                     rcuwait_wait_event()

Therefore the only option left is to acquire the event reference count upon queueing the perf task work and release it from the task work, just like it was done before 3a5465418f5f ("perf: Fix event leak upon exec and file release") but without the leaks it fixed. Some adjustments are necessary to make it work:
 * A child event might dereference its parent upon freeing. Care must be taken to release the parent last.
 * Some places assuming the event doesn't have any reference held and therefore can be freed right away must instead put the reference and let the reference counting do its job.
Reported-by: "Yi Lai" <yi1.lai@linux.intel.com> Closes: https://lore.kernel.org/all/Zx9Losv4YcJowaP%2F@ly-workstation/ Reported-by: syzbot+3c4321e10eea460eb606@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/673adf75.050a0220.87769.0024.GAE@google.com/ Fixes: 3a5465418f5f ("perf: Fix event leak upon exec and file release") Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250304135446.18905-1-frederic@kernel.org
2025-04-08  ASoC: Intel: avs: 16 channels support  (Mark Brown)
Merge series from Cezary Rojewski <cezary.rojewski@intel.com>: Relatively small, delta-wise, patchset which raises the max channels supported from 8 to 16. The existing limitation is software-based, not hardware-based. The hardware, as per the HDAudio specification, section 1.2.2 (relevant register at SDnFMT, section 3.3.41), has supported the configurations for years. The avs-driver becomes the first consumer of that configuration on the Linux kernel side. The set starts off with an update to string_helpers so that the functionality added with parse_int_array_user() can be utilized in kernel-kernel interactions. The follow-up is raising the cap on the HDAudio-library side. The format selection procedure found in the library is good to go as is. Everything that follows these two patches is avs-driver specific:
 - raise channels_max for every DAI-driver template
 - provide an i2s_test module parameter for testing purposes. When combined with an I2S loopback card, it allows testing 16ch on most Intel hardware post Broadwell era
 - adjust TDM masks to reflect the 8 -> 16 channels change
2025-04-08  Use try_lookup_noperm() instead of d_hash_and_lookup() outside of VFS  (NeilBrown)
try_lookup_noperm() and d_hash_and_lookup() are nearly identical. The former does some validation of the name where the latter doesn't. Outside of the VFS that validation is likely valuable, and having only one exported function for this task is certainly a good idea. So make d_hash_and_lookup() local to VFS files and change all other callers to try_lookup_noperm(). Note that the arguments are swapped. Signed-off-by: NeilBrown <neilb@suse.de> Link: https://lore.kernel.org/r/20250319031545.2999807-6-neil@brown.name Signed-off-by: Christian Brauner <brauner@kernel.org>
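A typical conversion outside the VFS then looks like this (note the swapped argument order):

    struct qstr name = QSTR(entry_name);

    /* before */
    dentry = d_hash_and_lookup(parent, &name);

    /* after */
    dentry = try_lookup_noperm(&name, parent);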
2025-04-08  VFS: rename lookup_one_len family to lookup_noperm and remove permission check  (NeilBrown)
The lookup_one_len family of functions is (now) only used internally by a filesystem on itself, either:
 - in a context where permission checking is irrelevant, such as a virtual filesystem populating itself, xfs accessing its ORPHANAGE, or dquota accessing the quota file; or
 - in a context where a permission check (MAY_EXEC on the parent) has just been performed, such as a network filesystem finding a "silly-rename" file in the same directory. This is also the context after the _parentat() functions, where currently lookup_one_qstr_excl() is used, so the permission check is pointless.
The name "one_len" is unhelpful in understanding the purpose of these functions and should be changed. Most of the callers pass the len as "strlen()" so using a qstr and QSTR() can simplify the code. This patch renames these functions (including lookup_positive_unlocked(), which is part of the family despite the name) to have a name based on "lookup_noperm". They are changed to receive a 'struct qstr' instead of separate name and len. In a few cases the use of QSTR() results in a new call to strlen(). try_lookup_noperm() takes a pointer to a qstr instead of the whole qstr. This is consistent with d_hash_and_lookup() (which is nearly identical) and useful for lookup_noperm_unlocked(). The new lookup_noperm_common() doesn't take a qstr yet. That will be tidied up in a subsequent patch. Signed-off-by: NeilBrown <neil@brown.name> Link: https://lore.kernel.org/r/20250319031545.2999807-5-neil@brown.name Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-04-08  gpio: deprecate the GPIOD_FLAGS_BIT_NONEXCLUSIVE flag  (Bartosz Golaszewski)
The non-exclusive GPIO request flag looks like a functional feature but is in fact a workaround for a corner-case that got out of hand. It should be removed so deprecate it officially so that nobody uses it anymore. Acked-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20250401-gpio-todo-remove-nonexclusive-v2-1-7c1380797b0d@linaro.org Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
2025-04-08  fs: predict not having to do anything in fdput()  (Mateusz Guzik)
This matches the annotation in fdget(). Signed-off-by: Mateusz Guzik <mjguzik@gmail.com> Link: https://lore.kernel.org/20250406235806.1637000-2-mjguzik@gmail.com Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-04-07  net: hold instance lock during NETDEV_CHANGE  (Stanislav Fomichev)
Cosmin reports an issue with ipv6_add_dev being called from NETDEV_CHANGE notifier: [ 3455.008776] ? ipv6_add_dev+0x370/0x620 [ 3455.010097] ipv6_find_idev+0x96/0xe0 [ 3455.010725] addrconf_add_dev+0x1e/0xa0 [ 3455.011382] addrconf_init_auto_addrs+0xb0/0x720 [ 3455.013537] addrconf_notify+0x35f/0x8d0 [ 3455.014214] notifier_call_chain+0x38/0xf0 [ 3455.014903] netdev_state_change+0x65/0x90 [ 3455.015586] linkwatch_do_dev+0x5a/0x70 [ 3455.016238] rtnl_getlink+0x241/0x3e0 [ 3455.019046] rtnetlink_rcv_msg+0x177/0x5e0 Similarly, linkwatch might get to ipv6_add_dev without ops lock: [ 3456.656261] ? ipv6_add_dev+0x370/0x620 [ 3456.660039] ipv6_find_idev+0x96/0xe0 [ 3456.660445] addrconf_add_dev+0x1e/0xa0 [ 3456.660861] addrconf_init_auto_addrs+0xb0/0x720 [ 3456.661803] addrconf_notify+0x35f/0x8d0 [ 3456.662236] notifier_call_chain+0x38/0xf0 [ 3456.662676] netdev_state_change+0x65/0x90 [ 3456.663112] linkwatch_do_dev+0x5a/0x70 [ 3456.663529] __linkwatch_run_queue+0xeb/0x200 [ 3456.663990] linkwatch_event+0x21/0x30 [ 3456.664399] process_one_work+0x211/0x610 [ 3456.664828] worker_thread+0x1cc/0x380 [ 3456.665691] kthread+0xf4/0x210 Reclassify NETDEV_CHANGE as a notifier that consistently runs under the instance lock. Link: https://lore.kernel.org/netdev/aac073de8beec3e531c86c101b274d434741c28e.camel@nvidia.com/ Reported-by: Cosmin Ratiu <cratiu@nvidia.com> Tested-by: Cosmin Ratiu <cratiu@nvidia.com> Fixes: ad7c7b2172c3 ("net: hold netdev instance lock during sysfs operations") Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250404161122.3907628-1-sdf@fomichev.me Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-04-07  lib/string_helpers: Introduce parse_int_array()  (Cezary Rojewski)
The existing parse_int_array_user() works with __user buffers only. Separate the array parsing from the __user bits so the functionality can be utilized with kernel buffers too. Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com> Signed-off-by: Cezary Rojewski <cezary.rojewski@intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://patch.msgid.link/20250404090337.3564117-2-cezary.rojewski@intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
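A kernel-buffer caller might then look like the sketch below; the exact name, signature and return convention of the new non-__user helper are assumptions mirroring parse_int_array_user():

    int *array __free(kfree) = NULL;
    int ret;

    ret = parse_int_array(kbuf, len, &array);   /* assumed signature */
    if (ret < 0)
        return ret;
    /* as with the __user variant, array[0] is assumed to hold the element count */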
2025-04-07  ASoC: Intel: avs: PTL-based platforms support  (Cezary Rojewski)
Define handlers specific to the ACE platforms that Frisco Lake (FCL), a PantherLake (PTL)-based platform, is founded upon. Most operations are still inherited from their predecessors, with the major difference being AudioDSP core management, which is replaced by DSP-domain power management. Software has to ensure the DSP domain is both powered on and has its power-gating disabled before it can be utilized for streaming. Reviewed-by: Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com> Signed-off-by: Cezary Rojewski <cezary.rojewski@intel.com> Acked-by: Liam Girdwood <liam.r.girdwood@linux.intel.com> Link: https://patch.msgid.link/20250407112352.3720779-6-cezary.rojewski@intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2025-04-07  Merge drm/drm-next into drm-misc-next  (Thomas Zimmermann)
Backmerging to get v6.15-rc1 into drm-misc-next. Also fixes a build issue when enabling CONFIG_DRM_SCHED_KUNIT_TEST. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
2025-04-07  platform/x86: intel_pmc_ipc: add option to build without ACPI  (David E. Box)
Introduce a configuration option that allows users to build the intel_pmc_ipc driver without ACPI support. This is useful for systems where ACPI is not available or desired. Based on the discussion from the patch [1], it was necessary to provide this option to accommodate specific use cases. Link: https://patchwork.kernel.org/project/netdevbpf/patch/20250227121522.1802832-6-yong.liang.choong@linux.intel.com/#26280764 [1] Signed-off-by: David E. Box <david.e.box@linux.intel.com> Co-developed-by: Choong Yong Liang <yong.liang.choong@linux.intel.com> Signed-off-by: Choong Yong Liang <yong.liang.choong@linux.intel.com> Reviewed-by: Andy Shevchenko <andy@kernel.org> Link: https://lore.kernel.org/r/20250313085526.1439092-1-yong.liang.choong@linux.intel.com Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>