2025-08-13  netfilter: nf_tables: reject duplicate device on updates  (Pablo Neira Ayuso)

A chain/flowtable update with duplicated devices in the same batch is possible. Unfortunately, netdev event path only removes the first device that is found, leaving unregistered the hook of the duplicated device.

Check if a duplicated device exists in the transaction batch, bail out with EEXIST in such case.

WARNING is hit when unregistering the hook:

  [49042.221275] WARNING: CPU: 4 PID: 8425 at net/netfilter/core.c:340 nf_hook_entry_head+0xaa/0x150
  [49042.221375] CPU: 4 UID: 0 PID: 8425 Comm: nft Tainted: G S 6.16.0+ #170 PREEMPT(full)
  [...]
  [49042.221382] RIP: 0010:nf_hook_entry_head+0xaa/0x150

Fixes: 78d9f48f7f44 ("netfilter: nf_tables: add devices to existing flowtable")
Fixes: b9703ed44ffb ("netfilter: nf_tables: support for adding new devices to an existing netdev chain")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Florian Westphal <fw@strlen.de>

2025-08-13  ipvs: Fix estimator kthreads preferred affinity  (Frederic Weisbecker)

The estimator kthreads' affinity is defined by sysctl-overwritten preferences and applied through a plain call to the scheduler's affinity API. However, since the introduction of managed kthreads preferred affinity, such a practice shortcuts the kthreads core code, which eventually overwrites the target to the default unbound affinity.

Fix this by using the appropriate kthread API.

Fixes: d1a89197589c ("kthread: Default affine kthread to its preferred NUMA node")
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Florian Westphal <fw@strlen.de>

2025-08-13  netfilter: nft_set_pipapo: fix null deref for empty set  (Florian Westphal)

Blamed commit broke the check for a null scratch map:

  -       if (unlikely(!m || !*raw_cpu_ptr(m->scratch)))
  +       if (unlikely(!raw_cpu_ptr(m->scratch)))

This should have been "if (!*raw_ ...)". Use the pattern of the avx2 version which is more readable.

This can only be reproduced if avx2 support isn't available.

Fixes: d8d871a35ca9 ("netfilter: nft_set_pipapo: merge pipapo_get/lookup")
Signed-off-by: Florian Westphal <fw@strlen.de>

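A minimal sketch of the intended check, following the avx2-style pattern the commit refers to; the surrounding pipapo lookup context, labels and local names are assumed for illustration, not taken from the patch:

  /* Sketch only: dereference the per-CPU scratch slot first, then test the
   * result for NULL, instead of testing the per-CPU pointer itself. */
  struct nft_pipapo_scratch *scratch;

  if (unlikely(!m))
          goto out;               /* assumed: no match data for this set */

  scratch = *raw_cpu_ptr(m->scratch);     /* per-CPU scratch map */
  if (unlikely(!scratch))                 /* empty set: nothing allocated */
          goto out;
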
2025-08-12  selftests: tls: test TCP stealing data from under the TLS socket  (Jakub Kicinski)

Check a race where data disappears from the TCP socket after TLS signaled that it's ready to receive.

  ok 6 global.data_steal
  # RUN tls_basic.base_base ...
  # OK  tls_basic.base_base

Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20250807232907.600366-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2025-08-12  tls: handle data disappearing from under the TLS ULP  (Jakub Kicinski)

TLS expects that it owns the receive queue of the TCP socket. This cannot be guaranteed in case the reader of the TCP socket entered before the TLS ULP was installed, or uses some non-standard read API (e.g. zerocopy ones). Replace the WARN_ON() and a buggy early exit (which leaves anchor pointing to a freed skb) with real error handling. Wipe the parsing state and tell the reader to retry.

We already reload the anchor every time we (re)acquire the socket lock, so the only condition we need to avoid is an out of bounds read (not having enough bytes in the socket for the previously parsed record len).

If some data was read from under TLS but there's enough in the queue we'll reload and decrypt what is most likely not a valid TLS record. Leading to some undefined behavior from TLS perspective (corrupting a stream? missing an alert? missing an attack?) but no kernel crash should take place.

Reported-by: William Liu <will@willsroot.io>
Reported-by: Savino Dicanosa <savy@syst3mfailure.io>
Link: https://lore.kernel.org/tFjq_kf7sWIG3A7CrCg_egb8CVsT_gsmHAK0_wxDPJXfIzxFAMxqmLwp3MlU5EHiet0AwwJldaaFdgyHpeIUCS-3m3llsmRzp9xIOBR4lAI=@syst3mfailure.io
Fixes: 84c61fe1a75b ("tls: rx: do not use the standard strparser")
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20250807232907.600366-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2025-08-12  Merge branch '6.17/scsi-queue' into 6.17/scsi-fixes  (Martin K. Petersen)

Pull in outstanding commits for 6.17.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

2025-08-12  ptp: prevent possible ABBA deadlock in ptp_clock_freerun()  (Jeongjun Park)

syzbot reported the following ABBA deadlock:

         CPU0                                  CPU1
         ----                                  ----
  n_vclocks_store()
    lock(&ptp->n_vclocks_mux) [1]
    (physical clock)
                                        pc_clock_adjtime()
                                          lock(&clk->rwsem) [2]
                                          (physical clock)
                                          ...
                                          ptp_clock_freerun()
                                            ptp_vclock_in_use()
                                              lock(&ptp->n_vclocks_mux) [3]
                                              (physical clock)
    ptp_clock_unregister()
      posix_clock_unregister()
        lock(&clk->rwsem) [4]
        (virtual clock)

Since ptp virtual clock is registered only under ptp physical clock, both ptp_clock and posix_clock must be physical clocks for ptp_vclock_in_use() to lock &ptp->n_vclocks_mux and check ptp->n_vclocks.

However, when unregistering vclocks in n_vclocks_store(), the locking ptp->n_vclocks_mux is a physical clock lock, but clk->rwsem of ptp_clock_unregister() called through device_for_each_child_reverse() is a virtual clock lock.

Therefore, clk->rwsem used in CPU0 and clk->rwsem used in CPU1 are different locks, but in lockdep, a false positive occurs because the possibility of deadlock is determined through lock-class.

To solve this, lock subclass annotation must be added to the posix_clock rwsem of the vclock.

Reported-by: syzbot+7cfb66a237c4a5fb22ad@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=7cfb66a237c4a5fb22ad
Fixes: 73f37068d540 ("ptp: support ptp physical/virtual clocks conversion")
Signed-off-by: Jeongjun Park <aha310510@gmail.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250728062649.469882-1-aha310510@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

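A generic illustration of the lock-subclass technique the commit describes, not the actual ptp patch; the function, the subclass value and where it is called from are assumptions:

  #include <linux/lockdep.h>
  #include <linux/rwsem.h>

  #define EXAMPLE_SUBCLASS_VCLOCK  1      /* assumed: physical clock stays at subclass 0 */

  /* Sketch: give the virtual clock's rwsem its own lockdep subclass so the
   * physical-clock rwsem and the virtual-clock rwsem are no longer treated
   * as the same lock-class instance and the ABBA report goes away. */
  static void example_annotate_vclock_rwsem(struct rw_semaphore *vclock_rwsem)
  {
          lockdep_set_subclass(vclock_rwsem, EXAMPLE_SUBCLASS_VCLOCK);
  }
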
2025-08-12  ixgbe: prevent from unwanted interface name changes  (Jedrzej Jagielski)

Users of the ixgbe driver report that after adding devlink support by the commit a0285236ab93 ("ixgbe: add initial devlink support") their configs got broken due to unwanted changes of interface names. It's caused by automatic phys_port_name generation during devlink port initialization flow.

To prevent from that set no_phys_port_name flag for ixgbe devlink ports.

Reported-by: David Howells <dhowells@redhat.com>
Closes: https://lore.kernel.org/netdev/3452224.1745518016@warthog.procyon.org.uk/
Reported-by: David Kaplan <David.Kaplan@amd.com>
Closes: https://lore.kernel.org/netdev/LV3PR12MB92658474624CCF60220157199470A@LV3PR12MB9265.namprd12.prod.outlook.com/
Fixes: a0285236ab93 ("ixgbe: add initial devlink support")
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2025-08-12  devlink: let driver opt out of automatic phys_port_name generation  (Jedrzej Jagielski)

Currently, when adding a devlink port, phys_port_name is automatically generated within the devlink port initialization flow. As a result, adding devlink port support to a driver may result in forced changes of interface names, which breaks already existing network configs.

This is expected behavior, but in some scenarios such a limitation is undesirable for a legacy driver that would otherwise be unable to keep its 'pre-devlink' interface name.

Add a no_phys_port_name flag to the devlink_port_attrs struct which indicates that devlink should not alter the name of the interface.

Suggested-by: Jiri Pirko <jiri@resnulli.us>
Link: https://lore.kernel.org/all/nbwrfnjhvrcduqzjl4a2jafnvvud6qsbxlvxaxilnryglf4j7r@btuqrimnfuly/
Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2025-08-12  drm/amdgpu: fix task hang from failed job submission during process kill  (Liu01 Tong)

During process kill, drm_sched_entity_flush() will kill the vm entities. The following job submissions of this process will fail, and the resources of these jobs have not been released, nor have the fences been signalled, causing tasks to hang and time out.

Fix by checking the entity status in amdgpu_vm_ready() and avoid submitting jobs to a stopped entity.

v2: add amdgpu_vm_ready() check before amdgpu_vm_clear_freed() in function amdgpu_cs_vm_handling().

Fixes: 1f02f2044bda ("drm/amdgpu: Avoid extra evict-restore process.")
Signed-off-by: Liu01 Tong <Tong.Liu01@amd.com>
Signed-off-by: Lin.Cao <lincao12@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit f101c13a8720c73e67f8f9d511fbbeda95bcedb1)

2025-08-12  ASoC: stm: stm32_i2s: Fix calc_clk_div() error handling in determine_rate()  (Sergio Perez Gonzalez)

calc_clk_div() will only return a non-zero value (-EINVAL) in case of error. On the other hand, req->rate is an unsigned long. It seems quite odd that req->rate would be assigned a negative value, which is clearly not a rate, and success would be returned.

Reinstate previous logic, which would just return error.

Fixes: afd529d74002 ("ASoC: stm: stm32_i2s: convert from round_rate() to determine_rate()")
Link: https://scan7.scan.coverity.com/#/project-view/53936/11354?selectedIssue=1647702
Signed-off-by: Sergio Perez Gonzalez <sperezglz@gmail.com>
Link: https://patch.msgid.link/20250729020052.404617-1-sperezglz@gmail.com
Signed-off-by: Mark Brown <broonie@kernel.org>

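A sketch of the restored control-flow shape; the helper names prefixed with example_ and the calc_clk_div() argument list are assumptions made for illustration, only struct clk_rate_request comes from the clk framework:

  #include <linux/clk-provider.h>

  /* Sketch: propagate the divider-calculation error instead of storing the
   * negative value into the unsigned req->rate and reporting success. */
  static int example_determine_rate(struct clk_hw *hw, struct clk_rate_request *req)
  {
          int ret = example_calc_clk_div(hw, req->rate);  /* assumed: 0 or -EINVAL */

          if (ret)
                  return ret;     /* no usable divider: fail the request */

          req->rate = example_divided_rate(hw);   /* assumed helper */
          return 0;
  }
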
2025-08-12  drm/amdgpu: fix incorrect vm flags to map bo  (Jack Xiao)

It should use vm flags instead of pte flags to specify bo vm attributes.

Fixes: 7946340fa389 ("drm/amdgpu: Move csa related code to separate file")
Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Likun Gao <Likun.Gao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit b08425fa77ad2f305fe57a33dceb456be03b653f)

2025-08-12  drm/amdgpu: fix vram reservation issue  (YiPeng Chai)

The vram block allocation flag must be cleared before making vram reservation, otherwise reserving addresses within the currently freed memory range will always fail.

Fixes: c9cad937c0c5 ("drm/amdgpu: add drm buddy support to amdgpu")
Signed-off-by: YiPeng Chai <YiPeng.Chai@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit d38eaf27de1b8584f42d6fb3f717b7ec44b3a7a1)

2025-08-12  drm/amdgpu: Add PSP fw version check for fw reserve GFX command  (Frank Min)

The fw reserve GFX command is only supported starting from PSP fw version 0x3a0e14 and 0x3b0e0d. Older versions do not support this command.

Add a version guard to ensure the command is only used when the running PSP fw meets the minimum version requirement. This ensures backward compatibility and safe operation across fw revisions.

Fixes: a3b7f9c306e1 ("drm/amdgpu: reclaim psp fw reservation memory region")
Signed-off-by: Frank Min <Frank.Min@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 065e23170a1e09bc9104b761183e59562a029619)

2025-08-12  io_uring/net: commit partial buffers on retry  (Jens Axboe)

Ring provided buffers are potentially only valid within the single execution context in which they were acquired. io_uring deals with this and invalidates them on retry. But on the networking side, if MSG_WAITALL is set, or if the socket is of the streaming type and too little was processed, then it will hang on to the buffer rather than recycle or commit it. This is problematic for two reasons:

1) If someone unregisters the provided buffer ring before a later retry, then the req->buf_list will no longer be valid.

2) If multiple sockets are using the same buffer group, then multiple receives can consume the same memory. This can cause data corruption in the application, as either receive could land in the same userspace buffer.

Fix this by disallowing partial retries from pinning a provided buffer across multiple executions, if ring provided buffers are used.

Cc: stable@vger.kernel.org
Reported-by: pt x <superman.xpt@gmail.com>
Fixes: c56e022c0a27 ("io_uring: add support for user mapped provided buffer ring")
Signed-off-by: Jens Axboe <axboe@kernel.dk>

2025-08-12  Merge tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)

Pull habanalabs fix from Al Viro:
 "Yet another use-after-free fix due to dma_buf_fd() misuse"

* tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  habanalabs: fix UAF in export_dmabuf()

2025-08-12  ACPI: EC: Relax sanity check of the ECDT ID string  (Armin Wolf)

It turns out that the ECDT table inside the ThinkBook 14 G7 IML contains a valid EC description but an invalid ID string ("_SB.PC00.LPCB.EC0").

Ignoring this ECDT based on the invalid ID string prevents the kernel from detecting the built-in touchpad, so relax the sanity check of the ID string and only reject ECDTs with empty ID strings.

Reported-by: Ilya K <me@0upti.me>
Fixes: 7a0d59f6a913 ("ACPI: EC: Ignore ECDT tables with an invalid ID string")
Signed-off-by: Armin Wolf <W_Armin@gmx.de>
Tested-by: Ilya K <me@0upti.me>
Link: https://patch.msgid.link/20250729062038.303734-1-W_Armin@gmx.de
Cc: 6.16+ <stable@vger.kernel.org> # 6.16+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

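A sketch of the relaxed check, assuming the standard struct acpi_table_ecdt layout and an ecdt_ptr/out naming that is illustrative only:

  /* Sketch: only reject an ECDT whose ID string is empty; a merely
   * malformed namepath such as "_SB.PC00.LPCB.EC0" is still accepted. */
  if (!ecdt_ptr->id[0]) {
          pr_debug("Ignoring ECDT with empty ID string\n");
          goto out;       /* assumed label in the surrounding probe code */
  }
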
2025-08-12  drm/xe/hwmon: Add SW clamp for power limits writes  (Karthik Poosa)

Clamp writes to power limits powerX_crit/currX_crit, powerX_cap, powerX_max to the maximum supported by the pcode mailbox when sysfs-provided values exceed this limit.

Although the pcode already performs clamping, values beyond the pcode mailbox's supported range get truncated, leading to incorrect critical power settings. This patch ensures proper clamping to prevent such truncation.

v2:
 - Address below review comments. (Riana)
 - Split comments into multiple sentences.
 - Use local variables for readability.
 - Add a debug log.
 - Use u64 instead of unsigned long.
v3:
 - Change drm_dbg logs to drm_info. (Badal)
v4:
 - Rephrase the drm_info log. (Rodrigo, Riana)
 - Rename variable max_mbx_power_limit to max_supp_power_limit, as limit is same for platforms with and without mailbox power limit support.

Signed-off-by: Karthik Poosa <karthik.poosa@intel.com>
Fixes: 92d44a422d0d ("drm/xe/hwmon: Expose card reactive critical power")
Fixes: fb1b70607f73 ("drm/xe/hwmon: Expose power attributes")
Reviewed-by: Riana Tauro <riana.tauro@intel.com>
Link: https://lore.kernel.org/r/20250808185310.3466529-1-karthik.poosa@intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
(cherry picked from commit d301eb950da59f962bafe874cf5eb6d61a85b2c2)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

2025-08-12  drm/xe: Defer buffer object shrinker write-backs and GPU waits  (Thomas Hellström)

When the xe buffer-object shrinker allows GPU waits and write-back (typically from kswapd), perform multiple passes, skipping subsequent passes if the shrinker number of scanned objects target is reached:

1) Without GPU waits and write-back
2) Without write-back
3) With both GPU-waits and write-back

This is to avoid stalls and costly write- and readbacks unless they are really necessary.

v2:
 - Don't test for scan completion twice. (Stuart Summers)
 - Update tags.

Reported-by: melvyn <melvyn2@dnsense.pub>
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/5557
Cc: Summers Stuart <stuart.summers@intel.com>
Fixes: 00c8efc3180f ("drm/xe: Add a shrinker for xe bos")
Cc: <stable@vger.kernel.org> # v6.15+
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://lore.kernel.org/r/20250805074842.11359-1-thomas.hellstrom@linux.intel.com
(cherry picked from commit 80944d334182ce5eb27d00e2bf20a88bfc32dea1)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

2025-08-12  drm/xe/migrate: prevent potential UAF  (Matthew Auld)

If we hit the error path, the previous fence (if there is one) has already been put() prior to this, so doing a fence_wait could lead to UAF. Tweak the flow so that the put() is only done after we do the wait.

Fixes: 270172f64b11 ("drm/xe: Update xe_ttm_access_memory to use GPU for non-visible access")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Maciej Patelczyk <maciej.patelczyk@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://lore.kernel.org/r/20250731093807.207572-8-matthew.auld@intel.com
(cherry picked from commit 9b7ca35ed28fe5fad86e9d9c24ebd1271e4c9c3e)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

2025-08-12  drm/xe/migrate: don't overflow max copy size  (Matthew Auld)

With non-page aligned copy, we need to use 4 byte aligned pitch, however the size itself might still be close to our maximum of ~8M, and so the dimensions of the copy can easily exceed the S16_MAX limit of the copy command leading to the following assert:

  xe 0000:03:00.0: [drm] Assertion `size / pitch <= ((s16)(((u16)~0U) >> 1))` failed!
  platform: BATTLEMAGE subplatform: 1
  graphics: Xe2_HPG 20.01 step A0
  media: Xe2_HPM 13.01 step A1
  tile: 0 VRAM 10.0 GiB
  GT: 0 type 1
  WARNING: CPU: 23 PID: 10605 at drivers/gpu/drm/xe/xe_migrate.c:673 emit_copy+0x4b5/0x4e0 [xe]

To fix this account for the pitch when calculating the number of current bytes to copy.

Fixes: 270172f64b11 ("drm/xe: Update xe_ttm_access_memory to use GPU for non-visible access")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Maciej Patelczyk <maciej.patelczyk@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://lore.kernel.org/r/20250731093807.207572-7-matthew.auld@intel.com
(cherry picked from commit 8c2d61e0e916e077fda7e7b8e67f25ffe0f361fc)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

2025-08-12  drm/xe/migrate: prevent infinite recursion  (Matthew Auld)

If the buf + offset is not aligned to XE_CACHELINE_BYTES we fall back to using a bounce buffer. However the bounce buffer here is allocated on the stack, and the only alignment requirement here is that it's naturally aligned to u8, and not XE_CACHELINE_BYTES. If the bounce buffer is also misaligned we then recurse back into the function again, however the new bounce buffer might also not be aligned, and might never be until we eventually blow through the stack, as we keep recursing.

Instead of using the stack use kmalloc, which should respect the power-of-two alignment request here. Fixes a kernel panic when triggering this path through eudebug.

v2 (Stuart):
 - Add build bug check for power-of-two restriction
 - s/EINVAL/ENOMEM/

Fixes: 270172f64b11 ("drm/xe: Update xe_ttm_access_memory to use GPU for non-visible access")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Maciej Patelczyk <maciej.patelczyk@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://lore.kernel.org/r/20250731093807.207572-6-matthew.auld@intel.com
(cherry picked from commit 38b34e928a08ba594c4bbf7118aa3aadacd62fff)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

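A sketch of the heap-allocated bounce buffer described above; the allocation size and the surrounding flow are simplified assumptions, only the kmalloc power-of-two natural-alignment guarantee is relied on:

  /* Sketch (inside the access helper): kmalloc() of a power-of-two size is
   * naturally aligned, so the bounce buffer meets the XE_CACHELINE_BYTES
   * requirement and the function never recurses on a misaligned stack buffer. */
  void *bounce;

  BUILD_BUG_ON(!is_power_of_2(XE_CACHELINE_BYTES));

  bounce = kmalloc(XE_CACHELINE_BYTES, GFP_KERNEL);
  if (!bounce)
          return -ENOMEM;

  /* ... perform the copy via the aligned bounce buffer, then: */
  kfree(bounce);
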
2025-08-12  Merge tag 'for-6.17-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds)

Pull btrfs fixes from David Sterba:

 - fix bug in qgroups reporting incorrect usage for higher level qgroups

 - in zoned mode, do not select metadata group as finish target

 - convert xarray lock to RCU when trying to release extent buffer to avoid a deadlock

 - do not allow relocation on partially dropped subvolumes, which is normally not possible but has been reported on old filesystems

 - in tree-log, report errors on missing block group when unaccounting log tree extent buffers

 - with large folios, fix range length when processing ordered extents

* tag 'for-6.17-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: fix iteration bug in __qgroup_excl_accounting()
  btrfs: zoned: do not select metadata BG as finish target
  btrfs: do not allow relocation of partially dropped subvolumes
  btrfs: error on missing block group when unaccounting log tree extent buffers
  btrfs: fix wrong length parameter for btrfs_cleanup_ordered_extents()
  btrfs: make btrfs_cleanup_ordered_extents() support large folios
  btrfs: fix subpage deadlock in try_release_subpage_extent_buffer()

2025-08-12  Merge tag 'snp_cache_coherency' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

 - Add a mitigation for a cache coherency vulnerability when running an SNP guest which makes sure all cache lines belonging to a 4K page are evicted after latter has been converted to a guest-private page

[ SNP: Secure Nested Paging - not to be confused with Single Nucleotide Polymorphism, which is the more common use of that TLA.

  I am on a mission to write out the more obscure TLAs in order to keep track of them. Because while math tells us that there are only about 17k different combinations of three-letter acronyms using English letters (26^3), I am convinced that somehow Intel, AMD and ARM have together figured out new mathematics, and have at least a million different TLAs that they use.   - Linus ]

* tag 'snp_cache_coherency' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sev: Evict cache lines during SNP memory validation

2025-08-12  gpio: mlxbf3: use platform_get_irq_optional()  (David Thompson)

The gpio-mlxbf3 driver interfaces with two GPIO controllers, device instance 0 and 1. There is a single IRQ resource shared between the two controllers, and it is found in the ACPI table for device instance 0. The driver should not use platform_get_irq(), otherwise this error is logged when probing instance 1:

  mlxbf3_gpio MLNXBF33:01: error -ENXIO: IRQ index 0 not found

Cc: stable@vger.kernel.org
Fixes: cd33f216d241 ("gpio: mlxbf3: Add gpio driver support")
Signed-off-by: David Thompson <davthompson@nvidia.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/ce70b98a201ce82b9df9aa80ac7a5eeaa2268e52.1754928650.git.davthompson@nvidia.com
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

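A sketch of the probe-time pattern the change moves to; the surrounding probe code and what is done in each branch are assumptions, only platform_get_irq_optional() itself is the real API:

  /* Sketch: ask for the IRQ without treating its absence as an error, so
   * the controller instance that does not own the shared IRQ probes
   * silently instead of logging -ENXIO. */
  int irq = platform_get_irq_optional(pdev, 0);

  if (irq > 0) {
          /* only the instance that has the IRQ wires up interrupt handling */
  } else {
          /* no IRQ for this instance: GPIO still works, just without events */
  }
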
2025-08-12  Revert "gpio: mlxbf3: only get IRQ for device instance 0"  (David Thompson)

This reverts commit 10af0273a35ab4513ca1546644b8c853044da134.

While this change was merged, it is not the preferred solution. During review of a similar change to the gpio-mlxbf2 driver, the use of "platform_get_irq_optional" was identified as the preferred solution, so let's use it for gpio-mlxbf3 driver as well.

Cc: stable@vger.kernel.org
Fixes: 10af0273a35a ("gpio: mlxbf3: only get IRQ for device instance 0")
Signed-off-by: David Thompson <davthompson@nvidia.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/8d2b630c71b3742f2c74242cf7d602706a6108e6.1754928650.git.davthompson@nvidia.com
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>

2025-08-12  ACPI: processor: perflib: Move problematic pr->performance check  (Rafael J. Wysocki)

Commit d33bd88ac0eb ("ACPI: processor: perflib: Fix initial _PPC limit application") added a pr->performance check that prevents the frequency QoS request from being added when the given processor has no performance object.

Unfortunately, this causes a WARN() in freq_qos_remove_request() to trigger on an attempt to take the given CPU offline later because the frequency QoS object has not been added for it due to the missing performance object.

Address this by moving the pr->performance check before calling acpi_processor_get_platform_limit() so it only prevents a limit from being set for the CPU if the performance object is not present. This way, the frequency QoS request is added as it was before the above commit and it is present all the time along with the CPU's cpufreq policy regardless of whether or not the CPU is online.

Fixes: d33bd88ac0eb ("ACPI: processor: perflib: Fix initial _PPC limit application")
Tested-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: 5.4+ <stable@vger.kernel.org> # 5.4+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/2801421.mvXUDI8C0e@rafael.j.wysocki

2025-08-12  Merge tag 'ipsec-2025-08-11' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec  (Paolo Abeni)

Steffen Klassert says:

====================
pull request (net): ipsec 2025-08-11

1) Fix flushing of all states in xfrm_state_fini. From Sabrina Dubroca.

2) Fix some IPsec software offload features. These got lost with some recent HW offload changes. From Sabrina Dubroca.

Please pull or let me know if there are problems.

* tag 'ipsec-2025-08-11' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec:
  udp: also consider secpath when evaluating ipsec use for checksumming
  xfrm: bring back device check in validate_xmit_xfrm
  xfrm: restore GSO for SW crypto
  xfrm: flush all states in xfrm_state_fini
====================

Link: https://patch.msgid.link/20250811092008.731573-1-steffen.klassert@secunet.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  Merge branch 'net-prevent-deadlocks-and-mis-configuration-with-per-napi-threaded-config'  (Paolo Abeni)

Jakub Kicinski says:

====================
net: prevent deadlocks and mis-configuration with per-NAPI threaded config

Running the test added with a recent fix on a driver with persistent NAPI config leads to a deadlock. The deadlock is fixed by patch 3, patch 2 is I think a more fundamental problem with the way we implemented the config. I hope the fix makes sense, my own thinking is definitely colored by my preference (IOW how the per-queue config RFC was implemented).

v1: https://lore.kernel.org/20250808014952.724762-1-kuba@kernel.org
====================

Link: https://patch.msgid.link/20250809001205.1147153-1-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  net: prevent deadlocks when enabling NAPIs with mixed kthread config  (Jakub Kicinski)

The following order of calls currently deadlocks if:
 - device has threaded=1; and
 - NAPI has persistent config with threaded=0.

  netif_napi_add_weight_config()
    dev->threaded == 1
      napi_kthread_create()
  napi_enable()
    napi_restore_config()
      napi_set_threaded(0)
        napi_stop_kthread()
          while (NAPIF_STATE_SCHED)
            msleep(20)

We deadlock because disabled NAPI has STATE_SCHED set.

Creating a thread in netif_napi_add() just to destroy it in napi_disable() is fairly ugly in the first place. Let's read both the device config and the NAPI config in netif_napi_add().

Fixes: e6d76268813d ("net: Update threaded state in napi config in netif_set_threaded")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Joe Damato <joe@dama.to>
Link: https://patch.msgid.link/20250809001205.1147153-4-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  net: update NAPI threaded config even for disabled NAPIs  (Jakub Kicinski)

We have to make sure that all future NAPIs will have the right threaded state when the state is configured on the device level. We chose not to have an "unset" state for threaded, and not to wipe the NAPI config clean when channels are explicitly disabled. This means the persistent config structs "exist" even when their NAPIs are not instantiated.

Differently put - the NAPI persistent state lives in the net_device (ncfg == struct napi_config):

        ,--- [napi 0] - [napi 1]
  [dev] |
        `--- [ncfg 0] - [ncfg 1]

so say we have a device with 2 queues but only 1 enabled:

        ,--- [napi 0]
  [dev] |
        `--- [ncfg 0] - [ncfg 1]

now we set the device to threaded=1:

               ,---------- [napi 0 (thr:1)]
  [dev(thr:1)] |
               `---------- [ncfg 0 (thr:1)] - [ncfg 1 (thr:?)]

Since [ncfg 1] was not attached to a NAPI during configuration we skipped it. If we create a NAPI for it later it will have the old setting (presumably disabled). One could argue if this is right or not "in principle", but it's definitely not how things worked before per-NAPI config..

Fixes: 2677010e7793 ("Add support to set NAPI threaded for individual NAPI")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Joe Damato <joe@dama.to>
Link: https://patch.msgid.link/20250809001205.1147153-3-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  selftests: drv-net: don't assume device has only 2 queues  (Jakub Kicinski)

The test is implicitly assuming the device only has 2 queues. A real device will likely have more.

The exact problem is that because NAPIs get added to the list from the head, the netlink dump reports them in reverse order. So the naive napis[0] will actually likely give us the _last_ NAPI, not the first one.

Re-enable all the NAPIs instead of hard-coding 2 in the test. This way the NAPIs we operated on will always reappear, doesn't matter where they were in the registration order.

Fixes: e6d76268813d ("net: Update threaded state in napi config in netif_set_threaded")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Joe Damato <joe@dama.to>
Link: https://patch.msgid.link/20250809001205.1147153-2-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  ASoC: codecs: Call strscpy() with correct size argument  (Thorsten Blum)

In aw8xxxx_profile_info(), strscpy() is called with the length of the source string "null" rather than the size of the destination buffer. This is fine as long as the destination buffer is larger than the source string, but we should still use the destination buffer size instead to call strscpy() as intended.

And since 'name' points to the fixed-size buffer 'uinfo->value.enumerated.name', we can safely omit the size argument and let strscpy() infer it using sizeof() and remove 'name'.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Link: https://patch.msgid.link/20250810214144.1985-2-thorsten.blum@linux.dev
Signed-off-by: Mark Brown <broonie@kernel.org>

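A sketch of the resulting call shape: because the destination is the fixed-size ALSA enum name array, the two-argument form of strscpy() lets the macro supply the bound from sizeof() (the exact surrounding function body is not shown here):

  /* Sketch: copy into the fixed-size buffer directly; strscpy() derives the
   * limit from sizeof(uinfo->value.enumerated.name). */
  strscpy(uinfo->value.enumerated.name, "null");
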
2025-08-12  docs: Fix name for net.ipv4.udp_child_hash_entries  (Jordan Rife)

udp_child_ehash_entries -> udp_child_hash_entries

Fixes: 9804985bf27f ("udp: Introduce optional per-netns hash table.")
Signed-off-by: Jordan Rife <jordan@jrife.io>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250808185800.1189042-1-jordan@jrife.io
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  Merge branch 'fix-broken-link-with-th1520-gmac-when-linkspeed-changes'  (Paolo Abeni)

Yao Zi says:

====================
Fix broken link with TH1520 GMAC when linkspeed changes

It's noted that on TH1520 SoC, the GMAC's link becomes broken after the link speed is changed (for example, running ethtool -s eth0 speed 100 on the peer when negotiated to 1Gbps), but the GMAC could function normally if the speed is brought back to the initial.

Just like many other SoCs utilizing STMMAC IP, we need to adjust the TX clock supplying TH1520's GMAC through some SoC-specific glue registers when linkspeed changes. But it's found that after the full kernel startup, reading from them results in garbage and writing to them makes no effect, which is the cause of broken link. Further testing shows perisys-apb4-hclk must be ungated for normal access to Th1520 GMAC APB glue registers, which is neither described in dt-binding nor acquired by the driver.

This series expands the dt-binding of TH1520's GMAC to allow an extra "APB glue registers interface clock", instructs the driver to acquire and enable the clock, and finally supplies CLK_PERISYS_APB4_HCLK for TH1520's GMACs in SoC devicetree.

v2: https://lore.kernel.org/netdev/20250801091240.46114-1-ziyao@disroot.org/
v1: https://lore.kernel.org/all/20250729093734.40132-1-ziyao@disroot.org/
====================

Link: https://patch.msgid.link/20250808093655.48074-2-ziyao@disroot.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  riscv: dts: thead: Add APB clocks for TH1520 GMACs  (Yao Zi)

Describe perisys-apb4-hclk as the APB clock for TH1520 SoC, which is essential for accessing GMAC glue registers.

Fixes: 7e756671a664 ("riscv: dts: thead: Add TH1520 ethernet nodes")
Signed-off-by: Yao Zi <ziyao@disroot.org>
Reviewed-by: Drew Fustini <fustini@kernel.org>
Tested-by: Drew Fustini <fustini@kernel.org>
Link: https://patch.msgid.link/20250808093655.48074-5-ziyao@disroot.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  net: stmmac: thead: Get and enable APB clock on initialization  (Yao Zi)

It's necessary to adjust the MAC TX clock when the linkspeed changes, but it's noted such adjustment always fails on TH1520 SoC, and reading back from APB glue registers that control clock generation results in garbage, causing broken link.

With some testing, it's found a clock must be ungated for access to APB glue registers. Without any consumer, the clock is automatically disabled during late kernel startup. Let's get and enable it if it's described in devicetree.

For backward compatibility with older devicetrees, probing won't fail if the APB clock isn't found. In this case, we emit a warning since the link will break if the speed changes.

Fixes: 33a1a01e3afa ("net: stmmac: Add glue layer for T-HEAD TH1520 SoC")
Signed-off-by: Yao Zi <ziyao@disroot.org>
Tested-by: Drew Fustini <fustini@kernel.org>
Reviewed-by: Drew Fustini <fustini@kernel.org>
Link: https://patch.msgid.link/20250808093655.48074-4-ziyao@disroot.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  dt-bindings: net: thead,th1520-gmac: Describe APB interface clock  (Yao Zi)

Besides ones for GMAC core and peripheral registers, the TH1520 GMAC requires one more clock for configuring APB glue registers. Describe it in the binding.

Fixes: f920ce04c399 ("dt-bindings: net: Add T-HEAD dwmac support")
Signed-off-by: Yao Zi <ziyao@disroot.org>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Reviewed-by: Drew Fustini <fustini@kernel.org>
Link: https://patch.msgid.link/20250808093655.48074-3-ziyao@disroot.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  net: mdiobus: release reset_gpio in mdiobus_unregister_device()  (Buday Csaba)

reset_gpio is claimed in mdiobus_register_device(), but it is not released in mdiobus_unregister_device(). It is instead only released when the whole MDIO bus is unregistered. When a device uses the reset_gpio property, it becomes impossible to unregister it and register it again, because the GPIO remains claimed. This patch resolves that issue.

Fixes: bafbdd527d56 ("phylib: Add device reset GPIO support") # see notes
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Cc: Csókás Bence <csokas.bence@prolan.hu>
[ csokas.bence: Resolve rebase conflict and clarify msg ]
Signed-off-by: Buday Csaba <buday.csaba@prolan.hu>
Link: https://patch.msgid.link/20250807135449.254254-2-csokas.bence@prolan.hu
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

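A sketch of the symmetry being restored; the example_ function is illustrative and elides everything the real mdiobus_unregister_device() does besides the GPIO release:

  /* Sketch: drop the reset GPIO reference when the MDIO device itself is
   * unregistered, so a later re-register can claim the GPIO again instead
   * of waiting for the whole bus to go away. */
  void example_mdiobus_unregister_device(struct mdio_device *mdiodev)
  {
          if (mdiodev->reset_gpio)
                  gpiod_put(mdiodev->reset_gpio);

          /* ... existing unregistration steps ... */
  }
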
2025-08-12  net: phy: nxp-c45-tja11xx: fix the PHY ID mismatch issue when using C45  (Clark Wang)

TJA1103/04/20/21 support both C22 and C45 accessing methods. The TJA11xx driver has implemented the match_phy_device() API. However, it does not handle the C45 ID. If C45 was used to access TJA11xx, match_phy_device() would always return false due to phydev->phy_id only used by C22 being empty, resulting in the generic phy driver being used for TJA11xx PHYs.

Therefore, check phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] when using C45.

Fixes: 1b76b2497aba ("net: phy: nxp-c45-tja11xx: simplify .match_phy_device OP")
Signed-off-by: Clark Wang <xiaoning.wang@nxp.com>
Link: https://patch.msgid.link/20250807040832.2455306-1-xiaoning.wang@nxp.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-08-12  drm/i915/psr: Do not trigger Frame Change events from frontbuffer flush  (Jouni Högander)

We want to get rid of triggering "Frame Change" events from frontbuffer flush calls. We are about to move to using the TRANS_PUSH register for this on LunarLake and onwards. Touching the TRANS_PUSH register from frontbuffer flush would be problematic as it's written by DSB as well.

Fix this by using intel_psr_exit when flush or invalidate is done on LunarLake and onwards. This is not possible on AlderLake and MeteorLake due to a HW bug in PSR2 disable.

This patch is also fixing problems with the cursor plane where the cursor is disappearing or a duplicate cursor is seen on the screen.

v2: Commit message updated

Bspec: 68927, 68934, 66624
Reported-by: Janna Martl <janna.martl109@gmail.com>
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/5522
Fixes: 411ad63877bb ("drm/i915/psr: Use SFF_CTL on invalidate/flush for LunarLake onwards")
Tested-by: Janna Martl <janna.martl109@gmail.com>
Signed-off-by: Jouni Högander <jouni.hogander@intel.com>
Reviewed-by: Suraj Kandpal <suraj.kandpal@intel.com>
Link: https://lore.kernel.org/r/20250801062905.564453-1-jouni.hogander@intel.com
(cherry picked from commit 46fb38cb20c0d185a6391ab524b23e0e0219c41f)
Signed-off-by: Tvrtko Ursulin <tursulin@ursulin.net>

2025-08-12  drm/i915/fbc: fix the implementation of wa_18038517565  (Vinod Govindapillai)

As per wa_18038517565, we need to disable FBC compressor clock gating before enabling FBC and enable it after disabling FBC. Placing the enabling of clock gating in the FBC deactivate function can make the above workaround logic go wrong in the case of the frontbuffer rendering FBC mechanism. FBC deactivate can get called during fb invalidate and then the corresponding FBC activate can get called without properly disabling the clock gating, which can result in compression being stalled.

So move the enabling of clock gating to the end of an FBC session, after FBC is completely disabled for a pipe.

Bspec: 74212, 72197, 69741, 65555
Fixes: 010363c46189 ("drm/i915/display: implement wa_18038517565")
Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
Link: https://lore.kernel.org/r/20250729124648.288497-1-vinod.govindapillai@intel.com
(cherry picked from commit 82dde0407ab126f8413fd6c51429e5057ced5ba2)
Signed-off-by: Tvrtko Ursulin <tursulin@ursulin.net>

2025-08-12  drm/panfrost: Print RSS for tiler heap BO's in debugfs GEMS file  (Adrián Larumbe)

Otherwise it would display the virtual allocation size, which is often much bigger than the RSS.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Fixes: e48ade5e23ba ("drm/panfrost: show device-wide list of DRM GEM objects over DebugFS")
Tested-by: Christopher Healy <healych@amazon.com>
Reviewed-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/r/20250808010235.2831853-1-adrian.larumbe@collabora.com

2025-08-11  proc: proc_maps_open allow proc_mem_open to return NULL  (Jialin Wang)

The commit 65c66047259f ("proc: fix the issue of proc_mem_open returning NULL") caused proc_maps_open() to return -ESRCH when proc_mem_open() returns NULL. This breaks legitimate /proc/<pid>/maps access for kernel threads since kernel threads have NULL mm_struct.

The regression causes perf to fail and exit when profiling a kernel thread:

  # perf record -v -g -p $(pgrep kswapd0)
  ...
  couldn't open /proc/65/task/65/maps

This patch partially reverts the commit to fix it.

Link: https://lkml.kernel.org/r/20250807165455.73656-1-wjl.linux@gmail.com
Fixes: 65c66047259f ("proc: fix the issue of proc_mem_open returning NULL")
Signed-off-by: Jialin Wang <wjl.linux@gmail.com>
Cc: Penglei Jiang <superman.xpt@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-08-11  mm/mremap: avoid expensive folio lookup on mremap folio pte batch  (Lorenzo Stoakes)

It was discovered in the attached report that commit f822a9a81a31 ("mm: optimize mremap() by PTE batching") introduced a significant performance regression on a number of metrics on x86-64, most notably stress-ng.bigheap.realloc_calls_per_sec - indicating a 37.3% regression in number of mremap() calls per second.

I was able to reproduce this locally on an intel x86-64 raptor lake system, noting an average of 143,857 realloc calls/sec (with a stddev of 4,531 or 3.1%) prior to this patch being applied, and 81,503 afterwards (stddev of 2,131 or 2.6%) - a 43.3% regression.

During testing I was able to determine that there was no meaningful difference in efforts to optimise the folio_pte_batch() operation, nor checking folio_test_large(). This is within expectation, as a regression this large is likely to indicate we are accessing memory that is not yet in a cache line (and perhaps may even cause a main memory fetch).

The expectation by those discussing this from the start was that vm_normal_folio() (invoked by mremap_folio_pte_batch()) would likely be the culprit due to having to retrieve memory from the vmemmap (which mremap() page table moves does not otherwise do, meaning this is inevitably cold memory). I was able to definitively determine that this theory is indeed correct and the cause of the issue.

The solution is to restore part of an approach previously discarded on review, that is to invoke pte_batch_hint() which explicitly determines, through reference to the PTE alone (thus no vmemmap lookup), what the PTE batch size may be. On platforms other than arm64 this is currently hardcoded to return 1, so this naturally resolves the issue for x86-64, and for arm64 introduces little to no overhead as the pte cache line will be hot.

With this patch applied, we move from 81,503 realloc calls/sec to 138,701 (stddev of 496.1 or 0.4%), which is a -3.6% regression, however accounting for the variance in the original result, this is broadly restoring performance to its prior state.

Link: https://lkml.kernel.org/r/20250807185819.199865-1-lorenzo.stoakes@oracle.com
Fixes: f822a9a81a31 ("mm: optimize mremap() by PTE batching")
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202508071609.4e743d7c-lkp@intel.com
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Pedro Falcato <pfalcato@suse.de>
Reviewed-by: Barry Song <baohua@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

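A sketch of the shape of the fix, not the exact mremap code; the example_ function and the downstream example_folio_pte_batch() helper are assumptions, while pte_batch_hint() is the real per-architecture hint:

  /* Sketch: consult pte_batch_hint(), which inspects only the PTE value,
   * before paying for the cold vmemmap access behind vm_normal_folio().
   * On everything except arm64 contpte the hint is 1, so the folio lookup
   * is skipped entirely. */
  static int example_mremap_pte_batch(struct vm_area_struct *vma, unsigned long addr,
                                      pte_t *ptep, pte_t pte, int max_nr)
  {
          if (max_nr == 1)
                  return 1;

          if (pte_batch_hint(ptep, pte) == 1)
                  return 1;       /* no batching possible: avoid folio lookup */

          /* only now touch the folio (cold vmemmap memory) to size the batch */
          return example_folio_pte_batch(vma, addr, ptep, pte, max_nr);
  }
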
2025-08-11  userfaultfd: fix a crash in UFFDIO_MOVE when PMD is a migration entry  (Suren Baghdasaryan)

When UFFDIO_MOVE encounters a migration PMD entry, it proceeds with obtaining a folio and accessing it even though the entry is swp_entry_t. Add the missing check and let split_huge_pmd() handle migration entries. While at it also remove unnecessary folio check.

[surenb@google.com: remove extra folio check, per David]
Link: https://lkml.kernel.org/r/20250807200418.1963585-1-surenb@google.com
Link: https://lkml.kernel.org/r/20250806220022.926763-1-surenb@google.com
Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: syzbot+b446dbe27035ef6bd6c2@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68794b5c.a70a0220.693ce.0050.GAE@google.com/
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Lokesh Gidra <lokeshgidra@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-08-11  mm: pass page directly instead of using folio_page  (Dev Jain)

In commit_anon_folio_batch(), we iterate over all pages pointed to by the PTE batch. Therefore we need to know the first page of the batch; currently we derive that via folio_page(folio, 0), but, that takes us to the first (head) page of the folio instead - our PTE batch may lie in the middle of the folio, leading to incorrectness.

Bite the bullet and throw away the micro-optimization of reusing the folio in favour of code simplicity. Derive the page and the folio in change_pte_range, and pass the page too to commit_anon_folio_batch to fix the aforementioned issue.

Link: https://lkml.kernel.org/r/20250806145611.3962-1-dev.jain@arm.com
Fixes: cac1db8c3aad ("mm: optimize mprotect() by PTE batching")
Reported-by: syzbot+57bcc752f0df8bb1365c@syzkaller.appspotmail.com
Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Debugged-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Cc: Yang Shi <yang@os.amperecomputing.com>
Cc: Yicong Yang <yangyicong@hisilicon.com>
Cc: Zhenhua Huang <quic_zhenhuah@quicinc.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-08-11  selftests/proc: fix string literal warning in proc-maps-race.c  (Sukrut Heroorkar)

This change resolves a non-literal string format warning invoked for proc-maps-race.c while compiling:

  proc-maps-race.c:205:17: warning: format not a string literal and no format arguments [-Wformat-security]
    205 |                 printf(text);
        |                 ^~~~~~
  proc-maps-race.c:209:17: warning: format not a string literal and no format arguments [-Wformat-security]
    209 |                 printf(text);
        |                 ^~~~~~
  proc-maps-race.c: In function `print_last_lines':
  proc-maps-race.c:224:9: warning: format not a string literal and no format arguments [-Wformat-security]
    224 |         printf(start);
        |         ^~~~~~

Add the string format specifier %s for the printf calls in both print_first_lines() and print_last_lines(), thus resolving the warnings.

The test executes fine after this change, so there is no effect on the functional behavior of the test.

Link: https://lkml.kernel.org/r/20250804225633.841777-1-hsukrut3@gmail.com
Fixes: aadc099c480f ("selftests/proc: add verbose mode for /proc/pid/maps tearing tests")
Signed-off-by: Sukrut Heroorkar <hsukrut3@gmail.com>
Acked-by: Suren Baghdasaryan <surenb@google.com>
Cc: David Hunter <david.hunter.linux@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

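The fix itself is the standard format-string idiom; a minimal sketch (variable names follow the warning output above):

  /* Sketch: pass the variable data as an argument to a literal format
   * string so -Wformat-security has nothing to warn about. */
  printf("%s", text);     /* instead of printf(text); */
  printf("%s", start);    /* instead of printf(start); */
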
2025-08-11  cifs: avoid extra calls to strlen() in cifs_get_spnego_key()  (Dmitry Antipov)

Since 'snprintf()' returns the number of characters emitted, an output position may be advanced with this return value rather than using explicit calls to 'strlen()'. Compile tested only.

Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
Signed-off-by: Steve French <stfrench@microsoft.com>

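A minimal, self-contained illustration of the pattern; the buffer contents and field names are made up and are not taken from cifs_get_spnego_key():

  #include <stdio.h>

  /* Sketch: snprintf() returns the number of characters written, so the
   * output cursor can be advanced directly instead of re-running strlen()
   * over the buffer after every append. */
  int main(void)
  {
          char buf[128];
          size_t off = 0;

          off += snprintf(buf + off, sizeof(buf) - off, "ver=0x%x;", 2);
          off += snprintf(buf + off, sizeof(buf) - off, "host=%s;", "example");
          printf("%s (len=%zu)\n", buf, off);
          return 0;
  }
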
2025-08-11  cifs: Fix collect_sample() to handle any iterator type  (David Howells)

collect_sample() is used to gather samples of the data in a Write op for analysis to try and determine if the compression algorithm is likely to achieve anything more quickly than actually running the compression algorithm.

However, collect_sample() assumes that the data it is going to be sampling is stored in an ITER_XARRAY-type iterator (which it now should never be) and doesn't actually check that it is before accessing the underlying xarray directly.

Fix this by replacing the code with a loop that just uses the standard iterator functions to sample every other 2KiB block, skipping the intervening ones. It's not quite the same as the previous algorithm as it doesn't necessarily align to the pages within an ordinary write from the pagecache.

Note that the btrfs code from which this was derived samples the inode's pagecache directly rather than the iterator - but that doesn't necessarily work for network filesystems if O_DIRECT is in operation.

Fixes: 94ae8c3fee94 ("smb: client: compress: LZ77 code improvements cleanup")
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
cc: Enzo Matsumiya <ematsumiya@suse.de>
cc: Shyam Prasad N <sprasad@microsoft.com>
cc: Tom Talpey <tom@talpey.com>
cc: linux-cifs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Steve French <stfrench@microsoft.com>

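A sketch of the loop shape described above, using only generic iterator helpers; the iter, sample and max_sample variables and the exact sample/skip sizes are assumptions, not the patch itself:

  /* Sketch: copy one 2KiB block into the sample buffer, then skip the next
   * 2KiB, until the iterator is exhausted or the sample budget is used up.
   * Because only copy_from_iter()/iov_iter_advance() are used, this works
   * for any iterator type, not just ITER_XARRAY. */
  size_t sampled = 0;

  while (iov_iter_count(iter) && sampled < max_sample) {
          size_t len = min3((size_t)SZ_2K, iov_iter_count(iter),
                            max_sample - sampled);

          if (copy_from_iter(sample + sampled, len, iter) != len)
                  break;
          sampled += len;

          /* skip the following 2KiB block */
          iov_iter_advance(iter, min_t(size_t, SZ_2K, iov_iter_count(iter)));
  }
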