2025-03-19x86/mm: Enable broadcast TLB invalidation for multi-threaded processesRik van Riel
There is not enough room in the 12-bit ASID address space to hand out broadcast ASIDs to every process. Only hand out broadcast ASIDs to processes when they are observed to be simultaneously running on 4 or more CPUs. This also allows single-threaded processes to continue using the cheaper, local TLB invalidation instructions like INVLPG.

Due to the structure of flush_tlb_mm_range(), the INVLPGB flushing is done in a generically named broadcast_tlb_flush() function, which can later also be used for Intel RAR.

Combined with the removal of unnecessary lru_add_drain() calls (see https://lore.kernel.org/r/20241219153253.3da9e8aa@fangorn) this results in a nice performance boost for the will-it-scale tlb_flush2_threads test on an AMD Milan system with 36 cores:

- vanilla kernel: 527k loops/second
- lru_add_drain removal: 731k loops/second
- only INVLPGB: 527k loops/second
- lru_add_drain + INVLPGB: 1157k loops/second

Profiling with only the INVLPGB changes showed that while TLB invalidation went down from 40% of the total CPU time to only around 4% of CPU time, the contention simply moved to the LRU lock. Fixing both at the same time roughly doubles the number of iterations per second in this case.

Comparing will-it-scale tlb_flush2_threads with several different numbers of threads on a 72 CPU AMD Milan shows similar results. The number represents the total number of loops per second across all the threads:

  threads    tip      INVLPGB
        1    315k     304k
        2    423k     424k
        4    644k     1032k
        8    652k     1267k
       16    737k     1368k
       32    759k     1199k
       64    636k     1094k
       72    609k     993k

1 and 2 thread performance is similar with and without INVLPGB, because INVLPGB is only used on processes using 4 or more CPUs simultaneously. The number is the median across 5 runs.

Some numbers closer to real world performance can be found at Phoronix, thanks to Michael: https://www.phoronix.com/news/AMD-INVLPGB-Linux-Benefits

[ bp: - Massage - :%s/\<static_cpu_has\>/cpu_feature_enabled/cgi - :%s/\<clear_asid_transition\>/mm_clear_asid_transition/cgi - Fold in a 0day bot fix: https://lore.kernel.org/oe-kbuild-all/202503040000.GtiWUsBm-lkp@intel.com ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Nadav Amit <nadav.amit@gmail.com> Link: https://lore.kernel.org/r/20250226030129.530345-11-riel@surriel.com
2025-03-19x86/mm: Add global ASID process exit helpersRik van Riel
A global ASID is allocated for the lifetime of a process. Free the global ASID at process exit time. [ bp: Massage, create helpers, hide details inside them. ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-10-riel@surriel.com
2025-03-19x86/mm: Handle global ASID context switch and TLB flushRik van Riel
Add context switch and TLB flush support for processes that use a global ASID and PCID across all CPUs. At both context switch time and TLB flush time, check whether a task is switching to a global ASID and, if so, reload the TLB with the new ASID as appropriate. In both code paths, the TLB flush is avoided if a global ASID is used, because the global ASIDs are always kept up to date across CPUs, even when the process is not running on a CPU. [ bp: - Massage - :%s/\<static_cpu_has\>/cpu_feature_enabled/cgi ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-9-riel@surriel.com
2025-03-19x86/mm: Add global ASID allocation helper functionsRik van Riel
Add functions to manage global ASID space. Multithreaded processes that are simultaneously active on 4 or more CPUs can get a global ASID, resulting in the same PCID being used for that process on every CPU. This in turn will allow the kernel to use hardware-assisted TLB flushing through AMD INVLPGB or Intel RAR for these processes. [ bp: - Extend use_global_asid() comment - s/X86_BROADCAST_TLB_FLUSH/BROADCAST_TLB_FLUSH/g - other touchups ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-8-riel@surriel.com
2025-03-19x86/mm: Use broadcast TLB flushing in page reclaimRik van Riel
Page reclaim tracks only the CPU(s) where the TLB needs to be flushed, rather than all the individual mappings that may be getting invalidated. Use broadcast TLB flushing when that is available. [ bp: Massage commit message. ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-7-riel@surriel.com
2025-03-19x86/mm: Use INVLPGB for kernel TLB flushesRik van Riel
Use broadcast TLB invalidation for kernel addresses when available. Remove the need to send IPIs for kernel TLB flushes. [ bp: Integrate dhansen's comments additions, merge the flush_tlb_all() change into this one too. ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-5-riel@surriel.com
2025-03-19x86/mm: Add INVLPGB support codeRik van Riel
Add helper functions and definitions needed to use broadcast TLB invalidation on AMD CPUs. [ bp: - Cleanup commit message - Improve and expand comments - push the preemption guards inside the invlpgb* helpers - merge improvements from dhansen - add !CONFIG_BROADCAST_TLB_FLUSH function stubs because Clang can't do DCE properly yet and looks at the inline asm and complains about it getting a u64 argument on 32-bit code ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-4-riel@surriel.com
2025-03-19x86/mm: Add INVLPGB feature and Kconfig entryRik van Riel
Add the INVLPGB feature bit and a Kconfig entry for it. In addition, the CPU advertises in CPUID the maximum number of pages that can be shot down with one INVLPGB instruction. Save that information for later use. [ bp: use cpu_has(), typos, massage. ] Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20250226030129.530345-3-riel@surriel.com
2025-03-19x86/mm: Consolidate full flush threshold decisionRik van Riel
Reduce code duplication by consolidating the decision point for whether to do individual invalidations or a full flush inside get_flush_tlb_info(). Suggested-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Dave Hansen <dave.hansen@intel.com> Link: https://lore.kernel.org/r/20250226030129.530345-2-riel@surriel.com
2025-03-19x86/mm: Check return value from memblock_phys_alloc_range()Philip Redkin
At least with CONFIG_PHYSICAL_START=0x100000, if there is < 4 MiB of contiguous free memory available at this point, the kernel will crash and burn because memblock_phys_alloc_range() returns 0 on failure, which leads memblock_phys_free() to throw the first 4 MiB of physical memory to the wolves. At a minimum it should fail gracefully with a meaningful diagnostic, but in fact everything seems to work fine without the weird reserve allocation. Signed-off-by: Philip Redkin <me@rarity.fan> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Rik van Riel <riel@surriel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/94b3e98f-96a7-3560-1f76-349eb95ccf7f@rarity.fan
2025-03-19Merge tag 'v6.14-rc7' into x86/core, to pick up fixesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-03-19Merge branch 'xa_alloc_cyclic-checks'David S. Miller
Michal Swiatkowski says: ==================== fix xa_alloc_cyclic() return checks Pierre Riteau <pierre@stackhpc.com> found suspicious handling of an error from xa_alloc_cyclic() in scheduler code [1]. The same is done in a few other places; a minimal sketch of the common fix follows this entry. v1 --> v2: [2] * add fixes tags * fix also the same usage in dpll and phy [1] https://lore.kernel.org/netdev/20250213223610.320278-1-pierre@stackhpc.com/ [2] https://lore.kernel.org/netdev/20250214132453.4108-1-michal.swiatkowski@linux.intel.com/ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
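The common fix across the series looks roughly like this (a minimal sketch; the xa, id, entry and next variables are hypothetical, not taken from any of the patched files):

  #include <linux/xarray.h>

  /*
   * xa_alloc_cyclic() returns 0 on success, 1 if the allocation
   * wrapped around, and a negative errno on failure -- so only
   * negative return values are errors.
   */
  err = xa_alloc_cyclic(&xa, &id, entry, xa_limit_32b, &next, GFP_KERNEL);
  if (err < 0)            /* not "if (err)": 1 means wrap, not failure */
          return err;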
2025-03-19phy: fix xa_alloc_cyclic() error handlingMichal Swiatkowski
xa_alloc_cyclic() can return 1, which isn't an error. To prevent the caller of this function from treating 1 as an error, check only for negative values here. Fixes: 384968786909 ("net: phy: Introduce ethernet link topology representation") Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2025-03-19dpll: fix xa_alloc_cyclic() error handlingMichal Swiatkowski
If xa_alloc_cyclic() returns 1 (meaning the allocation wrapped), ERR_PTR(1) will be returned, for which IS_ERR() is false. This can lead to dereferencing a pointer (pin) that was never allocated. Fix it by checking whether err is lower than zero. This wasn't hit in a real use case, only noticed by inspection. Credit to Pierre. Fixes: 97f265ef7f5b ("dpll: allocate pin ids in cycle") Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Reviewed-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2025-03-19devlink: fix xa_alloc_cyclic() error handlingMichal Swiatkowski
If xa_alloc_cyclic() returns 1 (meaning the allocation wrapped), ERR_PTR(1) will be returned, for which IS_ERR() is false. This can lead to dereferencing a pointer (rel) that was never allocated. Fix it by checking whether err is lower than zero. This wasn't hit in a real use case, only noticed by inspection. Credit to Pierre. Fixes: c137743bce02 ("devlink: introduce object and nested devlink relationship infra") Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2025-03-19MIPS: cm: Fix warning if MIPS_CM is disabledThomas Bogendoerfer
Commit e27fbe16af5c ("MIPS: cm: Detect CM quirks from device tree") introduced

  arch/mips/include/asm/mips-cm.h:119:13: error: ‘mips_cm_update_property’ defined but not used [-Werror=unused-function]

Fix this by marking the empty function implementation inline. Fixes: e27fbe16af5c ("MIPS: cm: Detect CM quirks from device tree") Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
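The idiom at play, sketched with a simplified signature: a plain static stub defined in a header warns in every translation unit that includes the header without calling it, while an inline definition does not:

  /* before: -Wunused-function fires in TUs that never call it */
  static void mips_cm_update_property(void) {}

  /* after: inline definitions are exempt from -Wunused-function */
  static inline void mips_cm_update_property(void) {}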
2025-03-19MIPS: Fix Macro nameAbhishek Tamboli
Add missing underscore in PCI_CFGA_BUS macro definition for consistency. Link: https://bugzilla.kernel.org/show_bug.cgi?id=219744 Signed-off-by: Abhishek Tamboli <abhishektamboli9@gmail.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2025-03-19Merge patch series "netfs: Miscellaneous fixes"Christian Brauner
David Howells <dhowells@redhat.com> says: Here are some miscellaneous fixes and changes for netfslib: (1) Fix the collection of results during a pause in transmission. (2) Call ->invalidate_cache() only if provided. (3) Fix the rolling buffer to not hammer atomic bit clears when loading from readahead. (4) Fix netfs_unbuffered_read() to return ssize_t. * patches from https://lore.kernel.org/r/20250314164201.1993231-1-dhowells@redhat.com: netfs: Fix netfs_unbuffered_read() to return ssize_t rather than int netfs: Fix rolling_buffer_load_from_ra() to not clear mark bits netfs: Call `invalidate_cache` only if implemented netfs: Fix collection of results during pause when collection offloaded Link: https://lore.kernel.org/r/20250314164201.1993231-1-dhowells@redhat.com Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-03-19netfs: Fix netfs_unbuffered_read() to return ssize_t rather than intDavid Howells
Fix netfs_unbuffered_read() to return an ssize_t rather than an int as netfs_wait_for_read() returns ssize_t and this gets implicitly truncated. Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20250314164201.1993231-5-dhowells@redhat.com Acked-by: "Paulo Alcantara (Red Hat)" <pc@manguebit.com> cc: Jeff Layton <jlayton@kernel.org> cc: Viacheslav Dubeyko <slava@dubeyko.com> cc: Alex Markuze <amarkuze@redhat.com> cc: Ilya Dryomov <idryomov@gmail.com> cc: ceph-devel@vger.kernel.org cc: linux-fsdevel@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
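The bug class, in a minimal sketch (rreq stands in for the request pointer): on 64-bit, squeezing an ssize_t through an int silently truncates large byte counts:

  /* before: netfs_wait_for_read() returns ssize_t; the int
   * truncates reads larger than INT_MAX */
  int ret = netfs_wait_for_read(rreq);

  /* after: keep the full width all the way up the call chain */
  ssize_t ret = netfs_wait_for_read(rreq);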
2025-03-19netfs: Fix rolling_buffer_load_from_ra() to not clear mark bitsDavid Howells
rolling_buffer_load_from_ra() looms large in the perf report because it loops around doing an atomic clear for each of the three mark bits per folio. However, this is both inefficient (it would be better to build a mask and atomically AND them out) and unnecessary as they shouldn't be set. Fix this by removing the loop. Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading") Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20250314164201.1993231-4-dhowells@redhat.com Acked-by: "Paulo Alcantara (Red Hat)" <pc@manguebit.com> cc: Jeff Layton <jlayton@kernel.org> cc: Steve French <sfrench@samba.org> cc: Paulo Alcantara <pc@manguebit.com> cc: netfs@lists.linux.dev cc: linux-cifs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-03-19netfs: Call `invalidate_cache` only if implementedMax Kellermann
Many filesystems such as NFS and Ceph do not implement the `invalidate_cache` method. On those filesystems, if writing to the cache (`NETFS_WRITE_TO_CACHE`) fails for some reason, the kernel crashes like this:

  BUG: kernel NULL pointer dereference, address: 0000000000000000
  #PF: supervisor instruction fetch in kernel mode
  #PF: error_code(0x0010) - not-present page
  PGD 0 P4D 0
  Oops: Oops: 0010 [#1] SMP PTI
  CPU: 9 UID: 0 PID: 3380 Comm: kworker/u193:11 Not tainted 6.13.3-cm4all1-hp #437
  Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 10/17/2018
  Workqueue: events_unbound netfs_write_collection_worker
  RIP: 0010:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0018:ffff9b86e2ca7dc0 EFLAGS: 00010202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 7fffffffffffffff
  RDX: 0000000000000001 RSI: ffff89259d576a18 RDI: ffff89259d576900
  RBP: ffff89259d5769b0 R08: ffff9b86e2ca7d28 R09: 0000000000000002
  R10: ffff89258ceaca80 R11: 0000000000000001 R12: 0000000000000020
  R13: ffff893d158b9338 R14: ffff89259d576900 R15: ffff89259d5769b0
  FS:  0000000000000000(0000) GS:ffff893c9fa40000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: ffffffffffffffd6 CR3: 000000054442e003 CR4: 00000000001706f0
  Call Trace:
   <TASK>
   ? __die+0x1f/0x60
   ? page_fault_oops+0x15c/0x460
   ? try_to_wake_up+0x2d2/0x530
   ? exc_page_fault+0x5e/0x100
   ? asm_exc_page_fault+0x22/0x30
   netfs_write_collection_worker+0xe9f/0x12b0
   ? xs_poll_check_readable+0x3f/0x80
   ? xs_stream_data_receive_workfn+0x8d/0x110
   process_one_work+0x134/0x2d0
   worker_thread+0x299/0x3a0
   ? __pfx_worker_thread+0x10/0x10
   kthread+0xba/0xe0
   ? __pfx_kthread+0x10/0x10
   ret_from_fork+0x30/0x50
   ? __pfx_kthread+0x10/0x10
   ret_from_fork_asm+0x1a/0x30
   </TASK>
  Modules linked in:
  CR2: 0000000000000000

This patch adds the missing `NULL` check. Fixes: 0e0f2dfe880f ("netfs: Dispatch write requests to process a writeback slice") Fixes: 288ace2f57c9 ("netfs: New writeback implementation") Signed-off-by: Max Kellermann <max.kellermann@ionos.com> Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20250314164201.1993231-3-dhowells@redhat.com Acked-by: "Paulo Alcantara (Red Hat)" <pc@manguebit.com> cc: netfs@lists.linux.dev cc: linux-cifs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: stable@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
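The shape of the missing check, as a sketch (struct and member names are approximations of the netfs write-collection path, not the exact hunk):

  /* ->invalidate_cache is optional; NFS and Ceph leave it NULL,
   * so guard the method before calling it */
  if (wreq->netfs_ops->invalidate_cache)
          wreq->netfs_ops->invalidate_cache(wreq);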
2025-03-19netfs: Fix collection of results during pause when collection offloadedDavid Howells
A netfs read request can run in one of two modes: for synchronous reads, the app thread does the collection of results and for asynchronous reads, this is offloaded to a worker thread. This is controlled by the NETFS_RREQ_OFFLOAD_COLLECTION flag. Now, if a subrequest incurs an error, the NETFS_RREQ_PAUSE flag is set to temporarily stop the issuing loop from issuing more subrequests until a retry is successful or the request is abandoned. When the issuing loop sees NETFS_RREQ_PAUSE, it jumps to netfs_wait_for_pause(), which will wait for the PAUSE flag to be cleared - and whilst it is waiting, it will call out to the collector as more results accrue... But this is the wrong thing to do if OFFLOAD_COLLECTION is set, as we can then end up with both the app thread and the work item collecting results simultaneously. This manifests itself occasionally when running the generic/323 xfstest against multichannel cifs as an oops that's a bit random but frequently involving io_submit() (the test does lots of simultaneous async DIO reads). Fix this by only doing the collection in netfs_wait_for_pause() if the NETFS_RREQ_OFFLOAD_COLLECTION flag is not set. Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item") Reported-by: Steve French <stfrench@microsoft.com> Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20250314164201.1993231-2-dhowells@redhat.com Acked-by: "Paulo Alcantara (Red Hat)" <pc@manguebit.com> cc: Paulo Alcantara <pc@manguebit.com> cc: Jeff Layton <jlayton@kernel.org> cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-03-19IB/mad: Check available slots before posting receive WRsMaher Sanalla
The ib_post_receive_mads() function handles posting receive work requests (WRs) to MAD QPs and is called in two cases: 1) When a MAD port is opened. 2) When a receive WQE is consumed upon receiving a new MAD. However, if MADs arrive during the port open phase, a race condition might cause an extra WR to be posted, exceeding the QP’s capacity. This leads to failures such as:

  infiniband mlx5_0: ib_post_recv failed: -12
  infiniband mlx5_0: Couldn't post receive WRs
  infiniband mlx5_0: Couldn't start port
  infiniband mlx5_0: Couldn't open port 1

Fix this by checking the current receive count before posting a new WR. If the QP’s receive queue is full, do not post additional WRs. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Maher Sanalla <msanalla@nvidia.com> Link: https://patch.msgid.link/c4984ba3c3a98a5711a558bccefcad789587ecf1.1741875592.git.leon@kernel.org Signed-off-by: Leon Romanovsky <leon@kernel.org>
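Sketched, the posting loop gains a capacity check before each ib_post_recv() call (the field names here are approximate, not the exact driver code):

  do {
          /* stop posting once the receive queue is at capacity,
           * so a racing port-open cannot overrun the QP */
          if (qp_info->recv_queue.count >= qp_info->recv_queue.max_active)
                  break;
          ret = ib_post_recv(qp_info->qp, &recv_wr, NULL);
  } while (!ret);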
2025-03-19RDMA/mana_ib: Fix integer overflow during queue creationKonstantin Taranov
Check the queue size during CQ creation for user CQs to prevent u32 overflow. Fixes: bec127e45d9f ("RDMA/mana_ib: create kernel-level CQs") Signed-off-by: Konstantin Taranov <kotaranov@microsoft.com> Link: https://patch.msgid.link/1742312744-14370-1-git-send-email-kotaranov@linux.microsoft.com Reviewed-by: Long Li <longli@microsoft.com> Signed-off-by: Leon Romanovsky <leon@kernel.org>
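One conventional way to write such a guard is check_mul_overflow() from <linux/overflow.h>; a sketch, where num_cqe and COMP_ENTRY_SIZE stand in for the driver's actual names:

  #include <linux/overflow.h>

  u32 queue_bytes;

  /* reject CQ sizes whose total byte count would not fit in a u32 */
  if (check_mul_overflow(num_cqe, COMP_ENTRY_SIZE, &queue_bytes))
          return -EINVAL;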
2025-03-19Merge tag 'rtw-next-2025-03-13' of https://github.com/pkshih/rtwJohannes Berg
Ping-Ke Shih says: ==================== rtw-next patches for v6.15 Some minor fixes and refinements of rtw89. The only major change is rtw88: * support RTL8814AE/RTL8814AU ==================== Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2025-03-19fs: load the ->i_sb pointer once in inode_sb_list_{add,del}Mateusz Guzik
While this may sound like a pedantic clean up, it does in fact impact code generation -- the patched add routine is slightly smaller. Signed-off-by: Mateusz Guzik <mjguzik@gmail.com> Link: https://lore.kernel.org/r/20250319004635.1820589-1-mjguzik@gmail.com Signed-off-by: Christian Brauner <brauner@kernel.org>
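The pattern, roughly (the VFS field names are real, the body is simplified): load the pointer into a local once instead of dereferencing inode->i_sb repeatedly:

  static void inode_sb_list_add(struct inode *inode)
  {
          struct super_block *sb = inode->i_sb;   /* one load of ->i_sb */

          spin_lock(&sb->s_inode_list_lock);
          list_add(&inode->i_sb_list, &sb->s_inodes);
          spin_unlock(&sb->s_inode_list_lock);
  }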
2025-03-19fuse: fix uring race condition for null dereference of fcJoanne Koong
There is a race condition leading to a kernel crash from a null dereference when attempting to access fc->lock in fuse_uring_create_queue(). fc may be NULL in the case where another thread is creating the uring in fuse_uring_create() and has set fc->ring but has not yet set ring->fc when fuse_uring_create_queue() reads ring->fc. There is another race condition as well where in fuse_uring_register(), ring->nr_queues may still be 0 and not yet set to the new value when we compare qid against it. This fix sets fc->ring only after ring->fc and ring->nr_queues have been set, which guarantees now that ring->fc is a proper pointer when any queues are created and ring->nr_queues reflects the right number of queues if ring is not NULL. We must use smp_store_release() and smp_load_acquire() semantics to ensure the ordering will remain correct where fc->ring is assigned only after ring->fc and ring->nr_queues have been assigned. Signed-off-by: Joanne Koong <joannelkoong@gmail.com> Link: https://lore.kernel.org/r/20250318003028.3330599-1-joannelkoong@gmail.com Fixes: 24fe962c86f5 ("fuse: {io-uring} Handle SQEs - register commands") Acked-by: Miklos Szeredi <mszeredi@redhat.com> Reviewed-by: Bernd Schubert <bschubert@ddn.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
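The publish/subscribe ordering the fix establishes, sketched with simplified types: the writer fully initializes the ring before publishing it, and the reader's acquire load pairs with the writer's release store:

  /* writer side (fuse_uring_create), sketch */
  ring->fc = fc;
  ring->nr_queues = nr_queues;
  smp_store_release(&fc->ring, ring);     /* publish only when complete */

  /* reader side (fuse_uring_create_queue), sketch */
  struct fuse_ring *ring = smp_load_acquire(&fc->ring);
  if (!ring)
          return NULL;
  /* ring->fc and ring->nr_queues are guaranteed initialized here */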
2025-03-19afs: Fix afs_atcell_get_link() to check if ws_cell is unset firstDavid Howells
Fix afs_atcell_get_link() to check if the workstation cell is unset before doing the RCU pathwalk bit where we dereference that. Fixes: 823869e1e616 ("afs: Fix afs_atcell_get_link() to handle RCU pathwalk") Reported-by: syzbot+76a6f18e3af82e84f264@syzkaller.appspotmail.com Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/2481796.1742296819@warthog.procyon.org.uk Tested-by: syzbot+76a6f18e3af82e84f264@syzkaller.appspotmail.com cc: Marc Dionne <marc.dionne@auristor.com> cc: linux-afs@lists.infradead.org cc: linux-fsdevel@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-03-19umount: Allow superblock owners to force umountTrond Myklebust
Loosen the permission check on forced umount to allow users holding CAP_SYS_ADMIN privileges in namespaces that are privileged with respect to the userns that originally mounted the filesystem. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Link: https://lore.kernel.org/r/12f212d4ef983714d065a6bb372fbb378753bf4c.1742315194.git.trond.myklebust@hammerspace.com Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-03-19ASoC: tas2781: Support dsp firmware Alpha and Beta seriesShenghao Ding
For calibration, the basic version does not contain any calibration addresses; it depends on the calibration tool to convey the addresses to the driver. Since the Alpha and Beta firmware series, all the calibration addresses are saved into the firmware. Signed-off-by: Shenghao Ding <shenghao-ding@ti.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Link: https://patch.msgid.link/20250313093238.1184-1-shenghao-ding@ti.com
2025-03-19Merge branch 'for-linus' into for-nextTakashi Iwai
Back-merge of 6.14 devel branch for further developments of TAS codecs. Signed-off-by: Takashi Iwai <tiwai@suse.de>
2025-03-19docs/zh_CN: fix spelling mistakePeng Jiang
The word watermark was misspelled as "watemark". Signed-off-by: Peng Jiang <jiang.peng9@zte.com.cn> Reviewed-by: Dongliang Mu <dzm91@hust.edu.cn> Signed-off-by: Alex Shi <alexs@kernel.org> Link: https://lore.kernel.org/r/20250317100811126QvOaWRPxSgm2ttU5faitl@zte.com.cn
2025-03-19docs/Chinese: change the disclaimer wordsAlex Shi
The 2-paragraph warning and note take a bit more space; merge them together and point readers to the other maintainers and reviewers. Cc: Yanteng Si <si.yanteng@linux.dev> Cc: Dongliang Mu <dzm91@hust.edu.cn> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-doc@vger.kernel.org Reviewed-by: Tang Yizhou <yizhou.tang@shopee.com> Reviewed-by: Dongliang Mu <dzm91@hust.edu.cn> Signed-off-by: Alex Shi <alexs@kernel.org> Link: https://lore.kernel.org/r/20250305025101.27717-1-alexs@kernel.org
2025-03-19docs/zh_CN: Add snp-tdx-threat-model index Chinese translationYuxian Mao
Translate .../security/snp-tdx-threat-model.rst into Chinese. Update the translation through commit cdae7e8a69c3 ("docs/MAINTAINERS: Update my email address"). A pdfdocs warning was fixed by Alex Shi. Reviewed-by: Yanteng Si <si.yanteng@linux.dev> Signed-off-by: Yuxian Mao <maoyuxian@cqsoftware.com.cn> Signed-off-by: Alex Shi <alexs@kernel.org> Link: https://lore.kernel.org/r/20250304071401.117780-1-maoyuxian@cqsoftware.com.cn
2025-03-19xfrm: Remove unnecessary NULL check in xfrm_lookup_with_ifid()Dan Carpenter
This NULL check is unnecessary and can be removed. It confuses Smatch static analysis tool because it makes Smatch think that xfrm_lookup_with_ifid() can return a mix of NULL pointers and errors so it creates a lot of false positives. Remove it. Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Michal Kubiak <michal.kubiak@intel.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2025-03-18bpf: clarify a misleading verifier error messageAndrea Terzolo
The current verifier error message states that tail_calls are not allowed in non-JITed programs with BPF-to-BPF calls. While this is accurate, it is not the only scenario where this restriction applies. Some architectures do not support this feature combination even when programs are JITed. This update improves the error message to better reflect these limitations. Suggested-by: Shung-Hsi Yu <shung-hsi.yu@suse.com> Signed-off-by: Andrea Terzolo <andreaterzolo3@gmail.com> Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com> Link: https://lore.kernel.org/r/20250318083551.8192-1-andreaterzolo3@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-03-18Merge branch 'bpf-reject-attaching-fexit-fmod_ret-to-noreturn-functions'Alexei Starovoitov
Yafang Shao says: ==================== Attaching fexit probes to functions marked with __noreturn may lead to unpredictable behavior. To avoid this, we will reject attaching probes to such functions. Currently, there is no ideal solution, so we will hardcode a check for all __noreturn functions. Once a more robust solution is implemented, this workaround can be removed. v4->v5: - Remove unnecessary functions (Alexei) - Use BTF_ID directly (Alexei) v3->v4: https://lore.kernel.org/bpf/20250317121735.86515-1-laoar.shao@gmail.com/ - Reject also fmod_ret (Alexei) - Fix build warnings and remove unnecessary functions (Alexei) v1->v2: https://lore.kernel.org/bpf/20250223062735.3341-1-laoar.shao@gmail.com/ - keep tools/objtool/noreturns.h as is (Josh) - Add noreturns.h to objtool/sync-check.sh (Josh) - Add verbose for the reject and simplify the test case (Song) v1: https://lore.kernel.org/bpf/20250211023359.1570-1-laoar.shao@gmail.com/ ==================== Link: https://patch.msgid.link/20250318114447.75484-1-laoar.shao@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-03-18selftests/bpf: Add selftest for attaching fexit to __noreturn functionsYafang Shao
The result:

  $ tools/testing/selftests/bpf/test_progs --name=fexit_noreturns
  #99/1    fexit_noreturns/noreturns:OK
  #99      fexit_noreturns:OK
  Summary: 1/1 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Link: https://lore.kernel.org/r/20250318114447.75484-3-laoar.shao@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-03-18bpf: Reject attaching fexit/fmod_ret to __noreturn functionsYafang Shao
If we attach fexit/fmod_ret to __noreturn functions, it will cause an issue that the bpf trampoline image will be left over even if the bpf link has been destroyed. Take attaching do_exit() with fexit for example. The fexit works as follows,

  bpf_trampoline
  + __bpf_tramp_enter
     + percpu_ref_get(&tr->pcref);

  + call do_exit()

  + __bpf_tramp_exit
     + percpu_ref_put(&tr->pcref);

Since do_exit() never returns, the refcnt of the trampoline image is never decremented, preventing it from being freed. That can be verified as follows,

  $ bpftool link show                                   <<<< nothing output
  $ grep "bpf_trampoline_[0-9]" /proc/kallsyms
  ffffffffc04cb000 t bpf_trampoline_6442526459    [bpf] <<<< leftover

In this patch, all functions annotated with __noreturn are rejected, except for the following cases:

- Functions that result in a system reboot, such as panic, machine_real_restart and rust_begin_unwind
- Functions that are never executed by tasks, such as rest_init and cpu_startup_entry
- Functions implemented in assembly, such as rewind_stack_and_make_dead and xen_cpu_bringup_again, which lack an associated BTF ID

With this change, attaching fexit probes to functions like do_exit() will be rejected.

  $ ./fexit
  libbpf: prog 'fexit': BPF program load failed: -EINVAL
  libbpf: prog 'fexit': -- BEGIN PROG LOAD LOG --
  Attaching fexit/fmod_ret to __noreturn functions is rejected.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Link: https://lore.kernel.org/r/20250318114447.75484-2-laoar.shao@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-03-18bpf: Only fail the busy counter check in bpf_cgrp_storage_get if it creates storageMartin KaFai Lau
The current cgrp storage has a percpu counter, bpf_cgrp_storage_busy, to detect potential deadlock at a spin_lock that the local storage acquires during new storage creation. There are false positives; it turns out to be too noisy in production. For example, a bpf prog may be doing a bpf_cgrp_storage_get on map_a. An IRQ comes in and triggers another bpf_cgrp_storage_get on a different map_b. It will then trigger the false-positive deadlock check in the percpu counter. On top of that, both are doing lookup only and have no need to create new storage, so practically they do not need to acquire the spin_lock. The bpf_task_storage_get already has a strategy to minimize this false positive by only failing if the bpf_task_storage_get needs to create a new storage and the percpu counter is busy. Creating a new storage is the only time it must acquire the spin_lock. This patch borrows the same idea. Unlike task storage, which has separate variants for tracing (_recur) and non-tracing, this patch stays with one bpf_cgrp_storage_get helper to keep it simple for now in light of the upcoming res_spin_lock. The variable could potentially use a better name, noTbusy, instead of nobusy; this patch follows the same naming as bpf_task_storage_get for now. I have tested it by temporarily adding noinline to cgroup_storage_lookup(), traced it by fentry, and the fentry program succeeded in calling bpf_cgrp_storage_get(). Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20250318182759.3676094-1-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
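In rough pseudocode (the helper names are simplified from the actual implementation, not exact kernel APIs), the helper now fails on a busy counter only on the creation path, since lookup never takes the spin_lock:

  nobusy = bpf_cgrp_storage_trylock();    /* percpu busy counter, sketch */

  sdata = cgroup_storage_lookup(cgroup, map, nobusy);   /* lockless */
  if (!sdata) {
          if (!nobusy)
                  return -EBUSY;  /* only creation needs the spin_lock */
          sdata = bpf_local_storage_update(cgroup, map, value, flags);
  }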
2025-03-18PCI: vmd: Make vmd_dev::cfg_lock a raw_spinlock_t typeRyo Takakura
The access to the PCI config space via pci_ops::read and pci_ops::write is a low-level hardware access. The functions can be accessed with disabled interrupts even on PREEMPT_RT. The pci_lock is a raw_spinlock_t for this purpose. A spinlock_t becomes a sleeping lock on PREEMPT_RT, so it cannot be acquired with disabled interrupts. The vmd_dev::cfg_lock is accessed in the same context as the pci_lock. Make vmd_dev::cfg_lock a raw_spinlock_t type so it can be used with interrupts disabled. This was reported as:

  BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
  Call Trace:
   rt_spin_lock+0x4e/0x130
   vmd_pci_read+0x8d/0x100 [vmd]
   pci_user_read_config_byte+0x6f/0xe0
   pci_read_config+0xfe/0x290
   sysfs_kf_bin_read+0x68/0x90

Signed-off-by: Ryo Takakura <ryotkkr98@gmail.com> Tested-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Acked-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> [bigeasy: reword commit message] Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com> Link: https://lore.kernel.org/r/20250218080830.ufw3IgyX@linutronix.de [kwilczynski: commit log] Signed-off-by: Krzysztof Wilczyński <kwilczynski@kernel.org> [bhelgaas: add back report info from https://lore.kernel.org/lkml/20241218115951.83062-1-ryotkkr98@gmail.com/] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
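The shape of the change, sketched: a raw_spinlock_t remains a genuine spinning lock on PREEMPT_RT, so it is safe in the interrupts-disabled config-access path:

  raw_spinlock_t cfg_lock;        /* was: spinlock_t cfg_lock; */

  unsigned long flags;

  raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
  /* low-level PCI config space access */
  raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);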
2025-03-18perf kwork: Remove unreachable judgmentsFeng Yang
When s2[i] == '\0': if s1[i] != '\0', the loop exit is already decided by the ret check, and if s1[i] == '\0', it is decided by the !s1[i] check. So the check on s2[i] can never trigger and is unreachable. Signed-off-by: Feng Yang <yangfeng@kylinos.cn> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250314031013.94480-1-yangfeng59949@163.com Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-18rust: platform: impl Send + Sync for platform::DeviceDanilo Krummrich
Commit 4d320e30ee04 ("rust: platform: fix unrestricted &mut platform::Device") changed the definition of platform::Device and discarded the implicitly derived Send and Sync traits. This isn't required by upstream code yet, and hence did not cause any issues. However, it is relied on by upcoming drivers, hence add it back in. Signed-off-by: Danilo Krummrich <dakr@kernel.org> Link: https://lore.kernel.org/r/20250318212940.137577-2-dakr@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-03-18rust: pci: impl Send + Sync for pci::DeviceDanilo Krummrich
Commit 7b948a2af6b5 ("rust: pci: fix unrestricted &mut pci::Device") changed the definition of pci::Device and discarded the implicitly derived Send and Sync traits. This isn't required by upstream code yet, and hence did not cause any issues. However, it is relied on by upcoming drivers, hence add it back in. Signed-off-by: Danilo Krummrich <dakr@kernel.org> Link: https://lore.kernel.org/r/20250318212940.137577-1-dakr@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-03-18orangefs: Bufmap deadcodingDr. David Alan Gilbert
orangefs_bufmap_shift_query() last use was removed in 2018 by commit 9f8fd53cd055 ("orangefs: revamp block sizes") orangefs_bufmap_page_fill() last use was removed in 2021 by commit 0c4b7cadd1ad ("Orangef: implement orangefs_readahead.") Remove them. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Signed-off-by: Mike Marshall <hubcap@omnibond.com>
2025-03-18perf python: Check if there is space to copy all the eventArnaldo Carvalho de Melo
The pyrf_event__new() method copies the event obtained from the perf ring buffer to a structure that will then be turned into a python object for further consumption, so it copies perf_event.header.size bytes to its 'event' member:

  $ pahole -C pyrf_event /tmp/build/perf-tools-next/python/perf.cpython-312-x86_64-linux-gnu.so
  struct pyrf_event {
          PyObject                   ob_base;              /*     0    16 */
          struct evsel *             evsel;                /*    16     8 */
          struct perf_sample         sample;               /*    24   312 */

          /* XXX last struct has 7 bytes of padding, 2 holes */

          /* --- cacheline 5 boundary (320 bytes) was 16 bytes ago --- */
          union perf_event           event;                /*   336  4168 */

          /* size: 4504, cachelines: 71, members: 4 */
          /* member types with holes: 1, total: 2 */
          /* paddings: 1, sum paddings: 7 */
          /* last cacheline: 24 bytes */
  };
  $

It was doing so without checking if the event just obtained has more than that space, fix it. This isn't a proper, final solution, as we need to support larger events, but for the time being we at least bounds check and document it. Fixes: 877108e42b1b9ba6 ("perf tools: Initial python binding") Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250312203141.285263-7-acme@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
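The added guard looks roughly like this (a sketch using the member names from the pahole output above; error handling simplified):

  /* 'event' is a fixed-size union member, so refuse ring-buffer
   * events that report a larger size than we can copy */
  if (event->header.size > sizeof(pevent->event))
          return NULL;
  memcpy(&pevent->event, event, event->header.size);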
2025-03-18perf python: Don't keep a raw_data pointer to consumed ring buffer spaceArnaldo Carvalho de Melo
When processing tracepoints the perf python binding was parsing the event before calling perf_mmap__consume(&md->core) in pyrf_evlist__read_on_cpu(). But part of this event parsing was to set the perf_sample->raw_data pointer to the payload of the event, which could then be overwritten by another event before the tracepoint fields were asked for via event.prev_comm in a python program, for instance. This also happened with other fields, but it was with strings that the problems were surfacing, as there is UTF-8 validation for the potentially garbled data. This ended up showing up as (with some added debugging messages):

  ( field 'prev_comm' ret=0x7f7c31f65110, raw_size=68 )
  ( field 'prev_pid' ret=0x7f7c23b1bed0, raw_size=68 )
  ( field 'prev_prio' ret=0x7f7c239c0030, raw_size=68 )
  ( field 'prev_state' ret=0x7f7c239c0250, raw_size=68 )
  time 14771421785867 prev_comm= prev_pid=1919907691 prev_prio=796026219 prev_state=0x303a32313175 ==> ( XXX '��' len=16, raw_size=68)
  ( field 'next_comm' ret=(nil), raw_size=68 )
  Traceback (most recent call last):
    File "/home/acme/git/perf-tools-next/tools/perf/python/tracepoint.py", line 51, in <module>
      main()
    File "/home/acme/git/perf-tools-next/tools/perf/python/tracepoint.py", line 46, in main
      event.next_comm,
      ^^^^^^^^^^^^^^^
  AttributeError: 'perf.sample_event' object has no attribute 'next_comm'

When event.next_comm was asked for, the PyUnicode_FromString() python API would fail and that tracepoint field wouldn't be available, stopping the tools/perf/python/tracepoint.py test tool. But, since we already do a copy of the whole event in pyrf_event__new(), just use it, and while at it remove what was done in e8968e654191390a ("perf python: Fix pyrf_evlist__read_on_cpu event consuming") because we don't really need to wait for parsing the sample before declaring the event as consumed. This copy is questionable as is now, as it limits the maximum event + sample_type and tracepoint payload to sizeof(union perf_event); this all has been "working" because 'struct perf_event_mmap2', the largest entry in 'union perf_event', is:

  $ pahole -C perf_event ~/bin/perf | grep mmap2
          struct perf_record_mmap2   mmap2;                /*     0  4168 */
  $

Fixes: bae57e3825a3dded ("perf python: Add support to resolve tracepoint fields") Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250312203141.285263-6-acme@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-18perf python: Decrement the refcount of just created event on failureArnaldo Carvalho de Melo
To avoid a leak if we have the python object but then something happens and we need to abort the operation, decrement the refcount of the newly created object. Fixes: 377f698db12150a1 ("perf python: Add struct evsel into struct pyrf_event") Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250312203141.285263-5-acme@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-18perf python tracepoint.py: Change the COMM using setproctitle if availableArnaldo Carvalho de Melo
Otherwise when debugging we see just "python" in perf, top, etc. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250312203141.285263-4-acme@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2025-03-18perf python: Remove some unused macros (_PyUnicode_FromString(arg), etc)Arnaldo Carvalho de Melo
When python2 support was removed in e7e9943c87d857da ("perf python: Remove python 2 scripting support"), all use of the _PyUnicode_FromString(arg), _PyUnicode_FromFormat(...), and _PyLong_FromLong(arg) macros was removed as well, so remove them. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Reviewed-by: Ian Rogers <irogers@google.com> Link: https://lore.kernel.org/r/20250312203141.285263-3-acme@kernel.org Signed-off-by: Namhyung Kim <namhyung@kernel.org>