path: root/lib
Age  Commit message  (Author)
2023-10-30  closures: Fix race in closure_sync()  (Kent Overstreet)
As pointed out by Linus, closure_sync() was racy; we could skip blocking
immediately after a get() and a put(), but then we would also skip the
memory barrier corresponding to the other thread's put().

To fix this, always do the full __closure_sync() sequence whenever any
get() has happened and the closure might have been used by other
threads.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-30  closures: Better memory barriers  (Kent Overstreet)
atomic_(dec|sub)_return_release() are a thing now - use them.

Also, delete the useless barrier in set_closure_fn(): it's redundant
with the memory barrier in closure_put().

Since closure_put() would now otherwise just have a release barrier, we
also need a new barrier when the ref hits 0 -
smp_acquire__after_ctrl_dep().

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
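For reference, a minimal sketch of the ordering pattern described here, with a hypothetical example_put() rather than the actual closure code:

	/* The decrement is a release, publishing this thread's prior stores;
	 * the thread that observes the count hitting zero upgrades to acquire
	 * ordering before tearing the object down.
	 */
	static void example_put(atomic_t *ref, void (*destroy)(void *), void *obj)
	{
		if (!atomic_dec_return_release(ref)) {
			smp_acquire__after_ctrl_dep();	/* pairs with other threads' release-puts */
			destroy(obj);
		}
	}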
2023-10-30  Merge tag 'sched-core-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull scheduler updates from Ingo Molnar:

 "Fair scheduler (SCHED_OTHER) improvements:
   - Remove the old and now unused SIS_PROP code & option
   - Scan cluster before LLC in the wake-up path
   - Use candidate prev/recent_used CPU if scanning failed for cluster
     wakeup

  NUMA scheduling improvements:
   - Improve the VMA access-PID code to better skip/scan VMAs
   - Extend tracing to cover VMA-skipping decisions
   - Improve/fix the recently introduced sched_numa_find_nth_cpu() code
   - Generalize numa_map_to_online_node()

  Energy scheduling improvements:
   - Remove the EM_MAX_COMPLEXITY limit
   - Add tracepoints to track energy computation
   - Make the behavior of the 'sched_energy_aware' sysctl more
     consistent
   - Consolidate and clean up access to a CPU's max compute capacity
   - Fix uclamp code corner cases

  RT scheduling improvements:
   - Drive dl_rq->overloaded with dl_rq->pushable_dl_tasks updates
   - Drive the ->rto_mask with rt_rq->pushable_tasks updates

  Scheduler scalability improvements:
   - Rate-limit updates to tg->load_avg
   - On x86 disable IBRS when CPU is offline to improve single-threaded
     performance
   - Micro-optimize in_task() and in_interrupt()
   - Micro-optimize the PSI code
   - Avoid updating PSI triggers and ->rtpoll_total when there are no
     state changes

  Core scheduler infrastructure improvements:
   - Use saved_state to reduce some spurious freezer wakeups
   - Bring in a handful of fast-headers improvements to scheduler
     headers
   - Make the scheduler UAPI headers more widely usable by user-space
   - Simplify the control flow of scheduler syscalls by using lock
     guards
   - Fix sched_setaffinity() vs. CPU hotplug race

  Scheduler debuggability improvements:
   - Disallow writing invalid values to sched_rt_period_us
   - Fix a race in the rq-clock debugging code triggering warnings
   - Fix a warning in the bandwidth distribution code
   - Micro-optimize in_atomic_preempt_off() checks
   - Enforce that the tasklist_lock is held in for_each_thread()
   - Print the TGID in sched_show_task()
   - Remove the /proc/sys/kernel/sched_child_runs_first sysctl

  ... and misc cleanups & fixes"

* tag 'sched-core-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (82 commits)
  sched/fair: Remove SIS_PROP
  sched/fair: Use candidate prev/recent_used CPU if scanning failed for cluster wakeup
  sched/fair: Scan cluster before scanning LLC in wake-up path
  sched: Add cpus_share_resources API
  sched/core: Fix RQCF_ACT_SKIP leak
  sched/fair: Remove unused 'curr' argument from pick_next_entity()
  sched/nohz: Update comments about NEWILB_KICK
  sched/fair: Remove duplicate #include
  sched/psi: Update poll => rtpoll in relevant comments
  sched: Make PELT acronym definition searchable
  sched: Fix stop_one_cpu_nowait() vs hotplug
  sched/psi: Bail out early from irq time accounting
  sched/topology: Rename 'DIE' domain to 'PKG'
  sched/psi: Delete the 'update_total' function parameter from update_triggers()
  sched/psi: Avoid updating PSI triggers and ->rtpoll_total when there are no state changes
  sched/headers: Remove comment referring to rq::cpu_load, since this has been removed
  sched/numa: Complete scanning of inactive VMAs when there is no alternative
  sched/numa: Complete scanning of partial VMAs regardless of PID activity
  sched/numa: Move up the access pid reset logic
  sched/numa: Trace decisions related to skipping VMAs
  ...
2023-10-30  Merge tag 'locking-core-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull locking updates from Ingo Molnar:

 "Futex improvements:
   - Add the 'futex2' syscall ABI, which is an attempt to get away from
     the multiplex syscall and adds a little room for extensions, while
     lifting some limitations.
   - Fix futex PI recursive rt_mutex waiter state bug
   - Fix inter-process shared futexes on no-MMU systems
   - Use folios instead of pages

  Micro-optimizations of locking primitives:
   - Improve arch_spin_value_unlocked() on asm-generic ticket spinlock
     architectures, to improve lockref code generation
   - Improve the x86-32 lockref_get_not_zero() main loop by adding
     build-time CMPXCHG8B support detection for the relevant lockref
     code, and by better interfacing the CMPXCHG8B assembly code with
     the compiler
   - Introduce arch_sync_try_cmpxchg() on x86 to improve
     sync_try_cmpxchg() code generation. Convert some sync_cmpxchg()
     users to sync_try_cmpxchg().
   - Micro-optimize rcuref_put_slowpath()

  Locking debuggability improvements:
   - Improve CONFIG_DEBUG_RT_MUTEXES=y to have a fast-path as well
   - Enforce atomicity of sched_submit_work(), which is de-facto atomic
     but was un-enforced previously.
   - Extend <linux/cleanup.h>'s no_free_ptr() with __must_check
     semantics
   - Fix ww_mutex self-tests
   - Clean up const-propagation in <linux/seqlock.h> and simplify the
     API-instantiation macros a bit

  RT locking improvements:
   - Provide the rt_mutex_*_schedule() primitives/helpers and use them
     in the rtmutex code to avoid recursion vs. rtlock on the PI state.
   - Add nested blocking lockdep asserts to rt_mutex_lock(),
     rtlock_lock() and rwbase_read_lock()

  ... plus misc fixes & cleanups"

* tag 'locking-core-2023-10-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  futex: Don't include process MM in futex key on no-MMU
  locking/seqlock: Fix grammar in comment
  alpha: Fix up new futex syscall numbers
  locking/seqlock: Propagate 'const' pointers within read-only methods, remove forced type casts
  locking/lockdep: Fix string sizing bug that triggers a format-truncation compiler-warning
  locking/seqlock: Change __seqprop() to return the function pointer
  locking/seqlock: Simplify SEQCOUNT_LOCKNAME()
  locking/atomics: Use atomic_try_cmpxchg_release() to micro-optimize rcuref_put_slowpath()
  locking/atomic, xen: Use sync_try_cmpxchg() instead of sync_cmpxchg()
  locking/atomic/x86: Introduce arch_sync_try_cmpxchg()
  locking/atomic: Add generic support for sync_try_cmpxchg() and its fallback
  locking/seqlock: Fix typo in comment
  futex/requeue: Remove unnecessary ‘NULL’ initialization from futex_proxy_trylock_atomic()
  locking/local, arch: Rewrite local_add_unless() as a static inline function
  locking/debug: Fix debugfs API return value checks to use IS_ERR()
  locking/ww_mutex/test: Make sure we bail out instead of livelock
  locking/ww_mutex/test: Fix potential workqueue corruption
  locking/ww_mutex/test: Use prng instead of rng to avoid hangs at bootup
  futex: Add sys_futex_requeue()
  futex: Add flags2 argument to futex_requeue()
  ...
2023-10-30  Merge tag 'bcachefs-2023-10-30' of https://evilpiepirate.org/git/bcachefs  (Linus Torvalds)
Pull initial bcachefs updates from Kent Overstreet:
 "Here's the bcachefs filesystem pull request.

  One new patch since last week: the exportfs constants ended up
  conflicting with other filesystems that are also getting added to the
  global enum, so switched to new constants picked by Amir.

  The only new non fs/bcachefs/ patch is the objtool patch that adds
  bcachefs functions to the list of noreturns.

  The patch that exports osq_lock() has been dropped for now, per Ingo"

* tag 'bcachefs-2023-10-30' of https://evilpiepirate.org/git/bcachefs: (2781 commits)
  exportfs: Change bcachefs fid_type enum to avoid conflicts
  bcachefs: Refactor memcpy into direct assignment
  bcachefs: Fix drop_alloc_keys()
  bcachefs: snapshot_create_lock
  bcachefs: Fix snapshot skiplists during snapshot deletion
  bcachefs: bch2_sb_field_get() refactoring
  bcachefs: KEY_TYPE_error now counts towards i_sectors
  bcachefs: Fix handling of unknown bkey types
  bcachefs: Switch to unsafe_memcpy() in a few places
  bcachefs: Use struct_size()
  bcachefs: Correctly initialize new buckets on device resize
  bcachefs: Fix another smatch complaint
  bcachefs: Use strsep() in split_devs()
  bcachefs: Add iops fields to bch_member
  bcachefs: Rename bch_sb_field_members -> bch_sb_field_members_v1
  bcachefs: New superblock section members_v2
  bcachefs: Add new helper to retrieve bch_member from sb
  bcachefs: bucket_lock() is now a sleepable lock
  bcachefs: fix crc32c checksum merge byte order problem
  bcachefs: Fix bch2_inode_delete_keys()
  ...
2023-10-30  Merge tag 'nfsd-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux  (Linus Torvalds)
Pull nfsd updates from Chuck Lever:
 "This release completes the SunRPC thread scheduler work that was
  begun in v6.6. The scheduler can now find an svc thread to wake in
  constant time and without a list walk. Thanks again to Neil Brown for
  this overhaul.

  Lorenzo Bianconi contributed infrastructure for a netlink-based NFSD
  control plane. The long-term plan is to provide the same
  functionality as found in /proc/fs/nfsd, plus some interesting
  additions, and then migrate the NFSD user space utilities to netlink.

  A long series to overhaul NFSD's NFSv4 operation encoding was applied
  in this release. The goals are to bring this family of encoding
  functions in line with the matching NFSv4 decoding functions and with
  the NFSv2 and NFSv3 XDR functions, preparing the way for better
  memory safety and maintainability.

  A further improvement to NFSD's write delegation support was
  contributed by Dai Ngo. This adds a CB_GETATTR callback, enabling the
  server to retrieve cached size and mtime data from clients holding
  write delegations. If the server can retrieve this information, it
  does not have to recall the delegation in some cases.

  The usual panoply of bug fixes and minor improvements round out this
  release. As always I am grateful to all contributors, reviewers, and
  testers"

* tag 'nfsd-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/cel/linux: (127 commits)
  svcrdma: Fix tracepoint printk format
  svcrdma: Drop connection after an RDMA Read error
  NFSD: clean up alloc_init_deleg()
  NFSD: Fix frame size warning in svc_export_parse()
  NFSD: Rewrite synopsis of nfsd_percpu_counters_init()
  nfsd: Clean up errors in nfs3proc.c
  nfsd: Clean up errors in nfs4state.c
  NFSD: Clean up errors in stats.c
  NFSD: simplify error paths in nfsd_svc()
  NFSD: Clean up nfsd4_encode_seek()
  NFSD: Clean up nfsd4_encode_offset_status()
  NFSD: Clean up nfsd4_encode_copy_notify()
  NFSD: Clean up nfsd4_encode_copy()
  NFSD: Clean up nfsd4_encode_test_stateid()
  NFSD: Clean up nfsd4_encode_exchange_id()
  NFSD: Clean up nfsd4_do_encode_secinfo()
  NFSD: Clean up nfsd4_encode_access()
  NFSD: Clean up nfsd4_encode_readdir()
  NFSD: Clean up nfsd4_encode_entry4()
  NFSD: Add an nfsd4_encode_nfs_cookie4() helper
  ...
2023-10-30  Merge tag 'vfs-6.7.iov_iter' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs  (Linus Torvalds)
Pull iov_iter updates from Christian Brauner:
 "This contains David's iov_iter cleanup work to convert the iov_iter
  iteration macros to inline functions:

   - Remove last_offset from iov_iter as it was only used by ITER_PIPE
   - Add a __user tag on copy_mc_to_user()'s dst argument on x86 to
     match that on powerpc and get rid of a sparse warning
   - Convert iter->user_backed to user_backed_iter() in the sound PCM
     driver
   - Convert iter->user_backed to user_backed_iter() in a couple of
     infiniband drivers
   - Renumber the type enum so that the ITER_* constants match the
     order in iterate_and_advance*()
   - Since the preceding patch puts UBUF and IOVEC at 0 and 1, change
     user_backed_iter() to just use the type value and get rid of the
     extra flag
   - Convert the iov_iter iteration macros to always-inline functions
     to make the code easier to follow. It uses function pointers, but
     they get optimised away
   - Move the check for ->copy_mc to _copy_from_iter() and
     copy_page_from_iter_atomic() rather than in memcpy_from_iter_mc()
     where it gets repeated for every segment. Instead, we check once
     and invoke a side function that can use iterate_bvec() rather than
     iterate_and_advance() and supply a different step function
   - Move the copy-and-csum code to net/ where it can be in proximity
     with the code that uses it
   - Fold memcpy_and_csum() in to its two users
   - Move csum_and_copy_from_iter_full() out of line and merge in
     csum_and_copy_from_iter() since the former is the only caller of
     the latter
   - Move hash_and_copy_to_iter() to net/ where it can be with its only
     caller"

* tag 'vfs-6.7.iov_iter' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs:
  iov_iter, net: Move hash_and_copy_to_iter() to net/
  iov_iter, net: Merge csum_and_copy_from_iter{,_full}() together
  iov_iter, net: Fold in csum_and_memcpy()
  iov_iter, net: Move csum_and_copy_to/from_iter() to net/
  iov_iter: Don't deal with iter->copy_mc in memcpy_from_iter_mc()
  iov_iter: Convert iterate*() to inline funcs
  iov_iter: Derive user-backedness from the iterator type
  iov_iter: Renumber ITER_* constants
  infiniband: Use user_backed_iter() to see if iterator is UBUF/IOVEC
  sound: Fix snd_pcm_readv()/writev() to use iov access functions
  iov_iter, x86: Be consistent about the __user tag on copy_mc_to_user()
  iov_iter: Remove last_offset from iov_iter as it was for ITER_PIPE
2023-10-28  seq_buf: Introduce DECLARE_SEQ_BUF and seq_buf_str()  (Kees Cook)
Solve two ergonomic issues with struct seq_buf:

1) Too much boilerplate is required to initialize:

	struct seq_buf s;
	char buf[32];

	seq_buf_init(&s, buf, sizeof(buf));

   Instead, we can build this directly on the stack. Provide the
   DECLARE_SEQ_BUF() macro to do this:

	DECLARE_SEQ_BUF(s, 32);

2) %NUL termination is fragile and requires 2 steps to get a valid
   C string (and is a layering violation exposing the "internals" of
   seq_buf):

	seq_buf_terminate(s);
	do_something(s->buffer);

   Instead, we can just return s->buffer directly after terminating it
   in the refactored seq_buf_terminate(), now known as seq_buf_str():

	do_something(seq_buf_str(s));

Link: https://lore.kernel.org/linux-trace-kernel/20231027155634.make.260-kees@kernel.org
Link: https://lore.kernel.org/linux-trace-kernel/20231026194033.it.702-kees@kernel.org/
Cc: Yosry Ahmed <yosryahmed@google.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Yun Zhou <yun.zhou@windriver.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Cc: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
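To see both fixes together, a short usage sketch (seq_buf_printf() is existing seq_buf API; the cpu variable is hypothetical):

	DECLARE_SEQ_BUF(s, 32);			/* buffer and struct seq_buf in one declaration */

	seq_buf_printf(&s, "cpu=%d", cpu);
	pr_info("%s\n", seq_buf_str(&s));	/* hands back a NUL-terminated C string */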
2023-10-27  acpi: Move common tables helper functions to common lib  (Dave Jiang)
Some of the routines in the ACPI drivers/acpi/tables.c code can be
shared with CDAT parsing. CDAT is a device-provided data structure that
is formatted similarly to a platform-provided ACPI table. CDAT is used
by CXL and can exist on platforms that do not use ACPI. Split out the
common routines from ACPI to accommodate platforms that do not support
ACPI, and move them to lib/. The common routines can be built outside
of ACPI if FIRMWARE_TABLES is selected.

Link: https://lore.kernel.org/linux-cxl/CAJZ5v0jipbtTNnsA0-o5ozOk8ZgWnOg34m34a9pPenTyRLj=6A@mail.gmail.com/
Suggested-by: "Rafael J. Wysocki" <rafael@kernel.org>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/169713683430.2205276.17899451119920103445.stgit@djiang5-mobl3
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-26  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR.

Conflicts:

net/mac80211/rx.c
  91535613b609 ("wifi: mac80211: don't drop all unprotected public action frames")
  6c02fab72429 ("wifi: mac80211: split ieee80211_drop_unencrypted_mgmt() return value")

Adjacent changes:

drivers/net/ethernet/apm/xgene/xgene_enet_main.c
  61471264c018 ("net: ethernet: apm: Convert to platform remove callback returning void")
  d2ca43f30611 ("net: xgene: Fix unused xgene_enet_of_match warning for !CONFIG_OF")

net/vmw_vsock/virtio_transport.c
  64c99d2d6ada ("vsock/virtio: support to send non-linear skb")
  53b08c498515 ("vsock/virtio: initialize the_virtio_vsock before using VQs")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-24  kprobes: unused header files removed  (wuqiang.matt)
As kernel test robot reported, lib/test_objpool.c (trace:probes/for-next) has linux/version.h included, but version.h is not used at all. Then more unused headers are found in test_objpool.c and rethook.c, and all of them should be removed. Link: https://lore.kernel.org/all/20231023112245.6112-1-wuqiang.matt@bytedance.com/ Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202310191512.vvypKU5Z-lkp@intel.com/ Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2023-10-20  tracing: Move readpos from seq_buf to trace_seq  (Matthew Wilcox (Oracle))
To make seq_buf more lightweight as a string buf, move the readpos member from seq_buf to its container, trace_seq. That puts the responsibility of maintaining the readpos entirely in the tracing code. If some future users want to package up the readpos with a seq_buf, we can define a new struct then. Link: https://lore.kernel.org/linux-trace-kernel/20231020033545.2587554-2-willy@infradead.org Cc: Kees Cook <keescook@chromium.org> Cc: Justin Stitt <justinstitt@google.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Petr Mladek <pmladek@suse.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-10-20  netlink: add variable-length / auto integers  (Jakub Kicinski)
We currently push everyone to use padding to align 64b values in
netlink. Un-padded nla_put_u64() doesn't even exist any more.

The story behind this possibly starts with this thread:
https://lore.kernel.org/netdev/20121204.130914.1457976839967676240.davem@davemloft.net/
where DaveM was concerned about the alignment of a structure containing
64b stats. If user space tries to access such struct directly:

	struct some_stats *stats = nla_data(attr);
	printf("A: %llu", stats->a);

lack of alignment may become problematic for some architectures.

These days we most often put every single member in a separate
attribute, meaning that the code above would use a helper like
nla_get_u64(), which can deal with alignment internally. Even for
arches which don't have good unaligned access - access aligned to 4B
should be pretty efficient. Kernel and well known libraries deal with
unaligned input already.

Padded 64b is quite space-inefficient (64b + pad means at worst 16B per
attr vs 32b which takes 8B). It is also more typing:

	if (nla_put_u64_pad(rsp, NETDEV_A_SOMETHING_SOMETHING,
			    value, NETDEV_A_SOMETHING_PAD))

Create a new attribute type which will use 32 bits at netlink level if
value is small enough (probably most of the time?), and (4B-aligned)
64 bits otherwise. Kernel API is just:

	if (nla_put_uint(rsp, NETDEV_A_SOMETHING_SOMETHING, value))

Calling this new type "just" sint / uint with no specific size will
hopefully also make people more comfortable with using it. Currently
telling people "don't use u8, you may need the bits, and netlink will
round up to 4B, anyway" is the #1 comment we give to newcomers.

In terms of netlink layout it looks like this:

          0       4       8      12      16
 32b:     [nlattr][ u32 ]
 64b:     [ pad ][nlattr][ u64 ]
 uint(32) [nlattr][ u32 ]
 uint(64) [nlattr][ u64 ]

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
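A hedged sketch of the size-picking rule described above (illustrative only, not the exact in-tree helper):

	static int sketch_put_uint(struct sk_buff *skb, int attrtype, u64 value)
	{
		u32 tmp = value;

		if (tmp == value)	/* fits in 32 bits: emit the short form */
			return nla_put_u32(skb, attrtype, tmp);
		/* netlink attribute payloads are 4B-aligned, which is all the
		 * new type guarantees for the 64-bit form */
		return nla_put(skb, attrtype, sizeof(u64), &value);
	}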
2023-10-19  lib/generic-radix-tree.c: Add peek_prev()  (Kent Overstreet)
This patch adds genradix_peek_prev(), genradix_iter_rewind(), and genradix_for_each_reverse(), for iterating backwards over a generic radix tree. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
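For illustration, a sketch of a reverse walk with the new macro (struct foo, foo_radix and handle() are hypothetical):

	static GENRADIX(struct foo) foo_radix;
	struct genradix_iter iter;
	struct foo *f;

	/* visit every allocated entry, highest index first */
	genradix_for_each_reverse(&foo_radix, iter, f)
		handle(f);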
2023-10-19  lib/generic-radix-tree.c: Don't overflow in peek()  (Kent Overstreet)
When we started spreading new inode numbers throughout most of the 64 bit inode space, that triggered some corner case bugs, in particular some integer overflows related to the radix tree code. Oops. Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
2023-10-19  closures: Add a missing include  (Kent Overstreet)
Fixes building in userspace. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-10-19  bcache: move closures to lib/  (Kent Overstreet)
Prep work for bcachefs - being a fork of bcache it also uses closures Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Acked-by: Coly Li <colyli@suse.de> Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
2023-10-18  treewide: mark stuff as __ro_after_init  (Alexey Dobriyan)
__read_mostly predates __ro_after_init. Many variables which are marked __read_mostly should have been __ro_after_init from day 1. Also, mark some stuff as "const" and "__init" while I'm at it. [akpm@linux-foundation.org: revert sysctl_nr_open_min, sysctl_nr_open_max changes due to arm warning] [akpm@linux-foundation.org: coding-style cleanups] Link: https://lkml.kernel.org/r/4f6bb9c0-abba-4ee4-a7aa-89265e886817@p183 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
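An illustrative case for the conversion (the feature_enabled flag and its setup hook are hypothetical):

	/* Written once during boot and never again: __read_mostly would keep
	 * it writable forever, while __ro_after_init lets the kernel
	 * write-protect it once init completes.
	 */
	static bool feature_enabled __ro_after_init;

	static int __init feature_setup(char *str)
	{
		feature_enabled = true;	/* ok: runs before the section is sealed */
		return 1;
	}
	__setup("feature", feature_setup);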
2023-10-18  percpu_counter: extend _limited_add() to negative amounts  (Hugh Dickins)
Though tmpfs does not need it, percpu_counter_limited_add() can be twice as useful if it works sensibly with negative amounts (subs) - typically decrements towards a limit of 0 or nearby: as suggested by Dave Chinner. And in the course of that reworking, skip the percpu counter sum if it is already obvious that the limit would be passed: as suggested by Tim Chen. Extend the comment above __percpu_counter_limited_add(), defining the behaviour with positive and negative amounts, allowing negative limits, but not bothering about overflow beyond S64_MAX. Link: https://lkml.kernel.org/r/8f86083b-c452-95d4-365b-f16a2e4ebcd4@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: Carlos Maiolino <cem@kernel.org> Cc: Christian Brauner <brauner@kernel.org> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Darrick J. Wong <djwong@kernel.org> Cc: Dave Chinner <dchinner@redhat.com> Cc: Jan Kara <jack@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Tim Chen <tim.c.chen@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
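A sketch of the negative-amount case this enables, assuming the semantics described above (counter name hypothetical):

	/* decrement towards a floor of 0: fails instead of going negative */
	if (!percpu_counter_limited_add(&free_slots, 0, -1))
		return -ENOSPC;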
2023-10-18  shmem,percpu_counter: add _limited_add(fbc, limit, amount)  (Hugh Dickins)
Percpu counter's compare and add are separate functions: without locking around them (which would defeat their purpose), it has been possible to overflow the intended limit. Imagine all the other CPUs fallocating tmpfs huge pages to the limit, in between this CPU's compare and its add. I have not seen reports of that happening; but tmpfs's recent addition of dquot_alloc_block_nodirty() in between the compare and the add makes it even more likely, and I'd be uncomfortable to leave it unfixed. Introduce percpu_counter_limited_add(fbc, limit, amount) to prevent it. I believe this implementation is correct, and slightly more efficient than the combination of compare and add (taking the lock once rather than twice when nearing full - the last 128MiB of a tmpfs volume on a machine with 128 CPUs and 4KiB pages); but it does beg for a better design - when nearing full, there is no new batching, but the costly percpu counter sum across CPUs still has to be done, while locked. Follow __percpu_counter_sum()'s example, including cpu_dying_mask as well as cpu_online_mask: but shouldn't __percpu_counter_compare() and __percpu_counter_limited_add() then be adding a num_dying_cpus() to num_online_cpus(), when they calculate the maximum which could be held across CPUs? But the times when it matters would be vanishingly rare. Link: https://lkml.kernel.org/r/bb817848-2d19-bcc8-39ca-ea179af0f0b4@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Reviewed-by: Jan Kara <jack@suse.cz> Cc: Tim Chen <tim.c.chen@intel.com> Cc: Dave Chinner <dchinner@redhat.com> Cc: Darrick J. Wong <djwong@kernel.org> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: Carlos Maiolino <cem@kernel.org> Cc: Christian Brauner <brauner@kernel.org> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
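A hedged sketch of the intended usage, modeled on the tmpfs block accounting described above (names hypothetical):

	static int sketch_reserve_blocks(struct percpu_counter *used,
					 s64 max_blocks, s64 pages)
	{
		/* one atomic step replaces the racy compare-then-add pair */
		if (!percpu_counter_limited_add(used, max_blocks, pages))
			return -ENOSPC;
		return 0;
	}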
2023-10-18  maple_tree: add GFP_KERNEL to allocations in mas_expected_entries()  (Liam R. Howlett)
Users complained about OOM errors during fork without triggering compaction. This can be fixed by modifying the flags used in mas_expected_entries() so that the compaction will be triggered in low memory situations. Since mas_expected_entries() is only used during fork, the extra argument does not need to be passed through. Additionally, the two test_maple_tree test cases and one benchmark test were altered to use the correct locking type so that allocations would not trigger sleeping and thus fail. Testing was completed with lockdep atomic sleep detection. The additional locking change requires rwsem support additions to the tools/ directory through the use of pthreads pthread_rwlock_t. With this change test_maple_tree works in userspace, as a module, and in-kernel. Users may notice that the system gave up early on attempting to start new processes instead of attempting to reclaim memory. Link: https://lkml.kernel.org/r/20230915093243epcms1p46fa00bbac1ab7b7dca94acb66c44c456@epcms1p4 Link: https://lkml.kernel.org/r/20231012155233.2272446-1-Liam.Howlett@oracle.com Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by: Peng Zhang <zhangpeng.00@bytedance.com> Cc: <jason.sim@samsung.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-10-18  Merge tag 'qcom-drivers-for-6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux into soc/drivers  (Arnd Bergmann)
Qualcomm driver updates for v6.7

This introduces partial support for the Qualcomm Secure Execution
Environment SCM interface, and uses this to implement EFI variable
access on the Windows On Snapdragon devices (for now).

The 32/64-bit calling convention detector of the SCM interface is
updated to not choose 64-bit convention when Linux is 32-bit. The
"extern" specifier is dropped from the interface include file.

The LLCC driver gains support for carrying configuration for multiple
different system/DDR configurations for a given platform, and selecting
between them. Support for Q[DR]U1000 is added to the driver.

All exported symbols are transitioned to EXPORT_SYMBOL_GPL().

The platform_drivers in the Qualcomm SoC are transitioned to the
void-returning remove_new implementation.

The rmtfs memory driver gains support for leaving guard pages around
the used area, to avoid issues if the allocation happens to be placed
adjacent to another protected memory region.

The socinfo driver gains knowledge about IPQ8174, QCM6490, SM7150P and
various PMICs used together with SM8550.

* tag 'qcom-drivers-for-6.7' of https://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux: (44 commits)
  soc: qcom: socinfo: Convert to platform remove callback returning void
  soc: qcom: smsm: Convert to platform remove callback returning void
  soc: qcom: smp2p: Convert to platform remove callback returning void
  soc: qcom: smem: Convert to platform remove callback returning void
  soc: qcom: rmtfs_mem: Convert to platform remove callback returning void
  soc: qcom: qcom_stats: Convert to platform remove callback returning void
  soc: qcom: qcom_gsbi: Convert to platform remove callback returning void
  soc: qcom: qcom_aoss: Convert to platform remove callback returning void
  soc: qcom: pmic_glink: Convert to platform remove callback returning void
  soc: qcom: ocmem: Convert to platform remove callback returning void
  soc: qcom: llcc-qcom: Convert to platform remove callback returning void
  soc: qcom: icc-bwmon: Convert to platform remove callback returning void
  firmware: qcom_scm: use 64-bit calling convention only when client is 64-bit
  soc: qcom: llcc: Handle a second device without data corruption
  soc: qcom: Switch to EXPORT_SYMBOL_GPL()
  soc: qcom: smem: Annotate struct qcom_smem with __counted_by
  soc: qcom: rmtfs: Support discarding guard pages
  dt-bindings: reserved-memory: rmtfs: Allow guard pages
  dt-bindings: firmware: qcom,scm: document IPQ5018 compatible
  firmware: qcom_scm: disable SDI if required
  ...

Link: https://lore.kernel.org/r/20231015204014.855672-1-andersson@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2023-10-18  lib: objpool test module added  (wuqiang.matt)
The test_objpool module (test_objpool) will run several testcases for
objpool stress and performance evaluation. Each testcase will have all
available cpu cores involved to create a situation of high parallelism
and high contention.

As of now there are 5 groups and 5 * 2 testcases in total:

1) group 1: synchronous mode
   objpool is managed synchronously, that is, all objects are to be
   reclaimed before objpool finalization and the objpool owner makes
   sure of it. All threads on different cores run at the same pace.

2) group 2: synchronous mode + hrtimer
   this case has 2 customers: normal threads and hrtimer softirqs

3) group 3: synchronous + overrun mode
   This test group is mainly for performance evaluation of missing
   cases when pre-allocated objects are fewer than requested.

4) group 4: asynchronous mode
   This case is just an emulation of kretprobe, with refcount used to
   control the objpool lifecycle.

5) group 5: asynchronous mode with hrtimer
   hrtimer softirq is introduced to stress async objpool operations.

Link: https://lore.kernel.org/all/20231023112245.6112-1-wuqiang.matt@bytedance.com/
Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2023-10-18  lib: objpool added: ring-array based lockless MPMC  (wuqiang.matt)
objpool is a scalable implementation of a high performance queue for
object allocation and reclamation, such as kretprobe instances.

By leveraging a percpu ring-array to mitigate hot spots of memory
contention, it delivers near-linear scalability for highly parallel
scenarios. The objpool is best suited for the following cases:

1) Memory allocation or reclamation is prohibited or too expensive
2) Consumers are of different priorities, such as irqs and threads

Limitations:

1) Maximum objects (capacity) is fixed after objpool creation
2) All pre-allocated objects are managed in a percpu ring array, which
   consumes more memory than linked lists

Link: https://lore.kernel.org/all/20231017135654.82270-2-wuqiang.matt@bytedance.com/
Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
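A hedged usage sketch of the interface added here (struct my_obj is hypothetical; in synchronous mode every object must be pushed back before objpool_fini()):

	struct objpool_head pool;
	void *obj;

	/* pre-allocate 512 objects, managed in percpu ring arrays */
	if (objpool_init(&pool, 512, sizeof(struct my_obj),
			 GFP_KERNEL, NULL, NULL, NULL))
		return -ENOMEM;

	obj = objpool_pop(&pool);		/* lockless; irq or thread context */
	if (obj)
		objpool_push(obj, &pool);	/* reclaim into the ring */

	objpool_fini(&pool);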
2023-10-16  bitmap: move bitmap_*_region() functions to bitmap.h  (Yury Norov)
Now that bitmap_*_region() functions are implemented as thin wrappers
around others, it's worth moving them to the header, as that opens room
for compile-time optimizations.

CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
CC: Rasmus Villemoes <linux@rasmusvillemoes.dk>
CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-16  lib: add light-weight queuing mechanism.  (NeilBrown)
lwq is a FIFO single-linked queue that only requires a spinlock for dequeueing, which happens in process context. Enqueueing is atomic with no spinlock and can happen in any context. This is particularly useful when work items are queued from BH or IRQ context, and when they are handled one at a time by dedicated threads. Avoiding any locking when enqueueing means there is no need to disable BH or interrupts, which is generally best avoided (particularly when there are any RT tasks on the machine). This solution is superior to using "list_head" links because we need half as many pointers in the data structures, and because list_head lists would need locking to add items to the queue. This solution is superior to a bespoke solution as all locking and container_of casting is integrated, so the interface is simple. Despite the similar name, this solution meets a distinctly different need to kfifo. kfifo provides a fixed sized circular buffer to which data can be added at one end and removed at the other, and does not provide any locking. lwq does not have any size limit and works with data structures (objects?) rather than data (bytes). A unit test for basic functionality, which runs at boot time, is included. Signed-off-by: NeilBrown <neilb@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com> Cc: Kees Cook <keescook@chromium.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: David Gow <davidgow@google.com> Cc: linux-kernel@vger.kernel.org Message-Id: <20230911111333.4d1a872330e924a00acb905b@linux-foundation.org> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
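A sketch of the interface under the description above (struct job and its fields are hypothetical; lwq_dequeue() is a macro taking the containing type and member):

	struct job {
		struct lwq_node	node;
		int		payload;
	};
	static struct lwq queue;	/* set up once with lwq_init(&queue) */

	/* producer, any context (BH, IRQ, task): lockless enqueue */
	lwq_enqueue(&my_job->node, &queue);

	/* consumer, process context: the only place the spinlock is taken */
	struct job *j = lwq_dequeue(&queue, struct job, node);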
2023-10-16  llist: add llist_del_first_this()  (NeilBrown)
llist_del_first_this() deletes a specific entry from an llist, providing
it is at the head of the list. Multiple threads can call this
concurrently providing they each offer a different entry.

This can be used for a set of worker threads which are on the llist when
they are idle. The head can always be woken, and when it is woken it can
remove itself, and possibly wake the next if there is an excess of work
to do.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
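A sketch of the idle-worker pattern described above (the worker structure and list are hypothetical):

	/* each idle worker pushed its own node onto idle_workers; on wakeup
	 * it removes itself, but only if it is still at the head
	 */
	if (llist_del_first_this(&idle_workers, &me->idle_node)) {
		/* we were the head and are now off the list: handle the work,
		 * waking the next head first if there is excess to do */
	}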
2023-10-14  bitmap: drop _reg_op() function  (Yury Norov)
Now that all _reg_op() users are switched to alternative functions, _reg_op() machinery is not needed anymore. CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com> CC: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: replace _reg_op(REG_OP_ISFREE) with find_next_bit()  (Yury Norov)
_reg_op(REG_OP_ISFREE) can be trivially replaced with find_next_bit(). Doing that opens room for potential small_const_nbits() optimization. CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com> CC: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Yury Norov <yury.norov@gmail.com>
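The replacement boils down to a one-line check; a sketch (helper name hypothetical):

	/* [start, start + len) is free iff it contains no set bit;
	 * find_next_bit() returns its size argument when nothing is found
	 */
	static bool sketch_region_is_free(const unsigned long *bitmap,
					  unsigned int start, unsigned int len)
	{
		return find_next_bit(bitmap, start + len, start) >= start + len;
	}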
2023-10-14  bitmap: replace _reg_op(REG_OP_RELEASE) with bitmap_clear()  (Yury Norov)
_reg_op(REG_OP_RELEASE) duplicates bitmap_clear(). CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com> CC: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: replace _reg_op(REG_OP_ALLOC) with bitmap_set()  (Yury Norov)
_reg_op(REG_OP_ALLOC) duplicates bitmap_set(). CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com> CC: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: fix opencoded bitmap_allocate_region()  (Yury Norov)
bitmap_find_region() opencodes bitmap_allocate_region(). CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com> CC: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: add test for bitmap_*_region() functions  (Yury Norov)
Test basic functionality of bitmap_{allocate,release,find_free}_region() functions. CC: Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: align __reg_op() wrappers with modern coding style  (Yury Norov)
Fix comments so that scripts/kernel-doc doesn't warn, and fix for-loop
style in bitmap_find_free_region().

CC: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  lib/bitmap: split out string-related operations to separate files  (Yury Norov)
lib/bitmap.c and the corresponding include/linux/bitmap.h are intended
to hold functions related to operations on bitmaps, like bitmap_shift
or bitmap_set. Historically, some string-related operations like
bitmap_parse also reside in lib/bitmap.c. Now that the subsystem has
evolved, string-related bitmap operations have become a significant
part of the file. Because they are quite different from the other
bitmap functions by nature, it's worth splitting them out into separate
source/header files.

CC: Andrew Morton <akpm@linux-foundation.org>
CC: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
CC: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: Remove dead code, i.e. bitmap_copy_le()  (Andy Shevchenko)
Besides the fact that it's not used anywhere, it should be implemented
differently, i.e. via helpers from linux/byteorder/generic.h. Yet the
helpers themselves would need to be introduced first. Also note, the
function lacks test cases, which would have to be provided. Hence, drop
the current dead code for good.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  bitmap: Fix a typo ("identify map")  (Jonathan Neuschäfer)
A map in which each element is mapped to itself is called an "identity map". Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-14  cpumask: kernel-doc cleanups and additions  (Randy Dunlap)
Clean up some punctuation and abbreviations. Add kernel-doc notation
for one function and function return values for 39 functions.

cpumask.h:
  Fix some punctuation (plural vs. possessive).
  Fix some abbreviations (ie. -> i.e., id -> ID).
  Fix 35 warnings like this:
    include/linux/cpumask.h:161: warning: No description found for
    return value of 'cpumask_first'

cpumask.c:
  Add Return: values for 4 functions.
  Add kernel-doc for cpumask_any_distribute().

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
2023-10-10  locking/atomics: Use atomic_try_cmpxchg_release() to micro-optimize rcuref_put_slowpath()  (Uros Bizjak)
Use atomic_try_cmpxchg() instead of atomic_cmpxchg(*ptr, old, new) ==
old in rcuref_put_slowpath(). On x86 the CMPXCHG instruction returns
success in the ZF flag, so this change saves a compare after CMPXCHG.

Additionally, the compiler reorders some code blocks to follow
likely/unlikely annotations in the atomic_try_cmpxchg() macro,
improving the code from:

	9a:	f0 0f b1 0b		lock cmpxchg %ecx,(%rbx)
	9e:	83 f8 ff		cmp    $0xffffffff,%eax
	a1:	74 04			je     a7 <rcuref_put_slowpath+0x27>
	a3:	31 c0			xor    %eax,%eax

to:

	9a:	f0 0f b1 0b		lock cmpxchg %ecx,(%rbx)
	9e:	75 4c			jne    ec <rcuref_put_slowpath+0x6c>
	a0:	b0 01			mov    $0x1,%al

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/20230509150255.3691-1-ubizjak@gmail.com
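The general shape of the conversion, as a generic illustration rather than the rcuref code itself:

	/* before: compare the returned value manually, reload on failure */
	old = atomic_read(&v);
	for (;;) {
		prev = atomic_cmpxchg(&v, old, old - 1);
		if (prev == old)
			break;
		old = prev;
	}

	/* after: failure updates 'old' in place, and on x86 the compiler
	 * can branch on the ZF flag that CMPXCHG already set
	 */
	old = atomic_read(&v);
	do {
	} while (!atomic_try_cmpxchg(&v, &old, old - 1));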
2023-10-09  iov_iter, net: Move hash_and_copy_to_iter() to net/  (David Howells)
Move hash_and_copy_to_iter() to be with its only caller in networking code. Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230925120309.1731676-13-dhowells@redhat.com cc: Alexander Viro <viro@zeniv.linux.org.uk> cc: Jens Axboe <axboe@kernel.dk> cc: Christoph Hellwig <hch@lst.de> cc: Christian Brauner <christian@brauner.io> cc: Matthew Wilcox <willy@infradead.org> cc: Linus Torvalds <torvalds@linux-foundation.org> cc: David Laight <David.Laight@ACULAB.COM> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: netdev@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-09  iov_iter, net: Move csum_and_copy_to/from_iter() to net/  (David Howells)
Move csum_and_copy_to/from_iter() to net code now that the iteration framework can be #included. Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230925120309.1731676-10-dhowells@redhat.com cc: Alexander Viro <viro@zeniv.linux.org.uk> cc: Jens Axboe <axboe@kernel.dk> cc: Christoph Hellwig <hch@lst.de> cc: Christian Brauner <christian@brauner.io> cc: Matthew Wilcox <willy@infradead.org> cc: Linus Torvalds <torvalds@linux-foundation.org> cc: David Laight <David.Laight@ACULAB.COM> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: netdev@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-09  iov_iter: Don't deal with iter->copy_mc in memcpy_from_iter_mc()  (David Howells)
iter->copy_mc is only used with a bvec iterator and only by
dump_emit_page() in fs/coredump.c, so rather than handling this in
memcpy_from_iter_mc(), where it is checked repeatedly by
_copy_from_iter() and copy_page_from_iter_atomic(), check it once in
those callers and invoke a side function that can use iterate_bvec()
rather than iterate_and_advance(). Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/r/20230925120309.1731676-9-dhowells@redhat.com cc: Alexander Viro <viro@zeniv.linux.org.uk> cc: Jens Axboe <axboe@kernel.dk> cc: Christoph Hellwig <hch@lst.de> cc: Christian Brauner <christian@brauner.io> cc: Matthew Wilcox <willy@infradead.org> cc: Linus Torvalds <torvalds@linux-foundation.org> cc: David Laight <David.Laight@ACULAB.COM> cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-07  Merge branch 'sched/urgent' into sched/core, to pick up fixes and refresh the branch  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2023-10-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
Cross-merge networking fixes after downstream PR. No conflicts (or adjacent changes of note). Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-01  powerpc: Use shared font data  (Dr. David Alan Gilbert)
PowerPC has a 'btext' font used for the console which is almost
identical to the shared font_sun8x16, so use it rather than duplicating
the data.

They were actually identical until about a decade ago, when commit
bcfbeecea11c ("drivers: console: font_: Change a glyph from "broken
bar" to "vertical line"") changed the | in the shared font to be a
solid bar rather than a broken bar. That's the only difference.

This was originally spotted by the PMF source code analyser, which
noticed that sparc does the same thing with the same data, and they
also share a bunch of functions to manipulate the data. I've previously
posted a near-identical patch for sparc.

Tested very lightly with a boot without FS in qemu.

Signed-off-by: "Dr. David Alan Gilbert" <linux@treblig.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230825142754.1487900-1-linux@treblig.org
2023-09-29  maple_tree: add MAS_UNDERFLOW and MAS_OVERFLOW states  (Liam R. Howlett)
When updating the maple tree iterator to avoid rewalks, an issue was introduced when shifting beyond the limits. This can be seen by trying to go to the previous address of 0, which would set the maple node to MAS_NONE and keep the range as the last entry. Subsequent calls to mas_find() would then search upwards from mas->last and skip the value at mas->index/mas->last. This showed up as a bug in mprotect which skips the actual VMA at the current range after attempting to go to the previous VMA from 0. Since MAS_NONE may already be set when searching for a value that isn't contained within a node, changing the handling of MAS_NONE in mas_find() would make the code more complicated and error prone. Furthermore, there was no way to tell which limit was hit, and thus which action to take (next or the entry at the current range). This solution is to add two states to track what happened with the previous iterator action. This allows for the expected behaviour of the next command to return the correct item (either the item at the range requested, or the next/previous). Tests are also added and updated accordingly. Link: https://lkml.kernel.org/r/20230921181236.509072-3-Liam.Howlett@oracle.com Link: https://gist.github.com/heatd/85d2971fae1501b55b6ea401fbbe485b Link: https://lore.kernel.org/linux-mm/20230921181236.509072-1-Liam.Howlett@oracle.com/ Fixes: 39193685d585 ("maple_tree: try harder to keep active node with mas_prev()") Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reported-by: Pedro Falcato <pedro.falcato@gmail.com> Closes: https://gist.github.com/heatd/85d2971fae1501b55b6ea401fbbe485b Closes: https://bugs.archlinux.org/task/79656 Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-09-28  kunit: test: Fix the possible memory leak in executor_test  (Jinjie Ruan)
When CONFIG_KUNIT_ALL_TESTS=y, CONFIG_DEBUG_KMEMLEAK=y and
CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN=y, the memory leaks below are detected.

If kunit_filter_suites() succeeds, not only copy but also
filtered_suite and filtered_suite->test_cases should be freed. So, as
Rae suggested, to avoid the suite set never being freed when
KUNIT_ASSERT_EQ() fails and exits after kunit_filter_suites() succeeds,
update the kfree_at_end() func to free_suite_set_at_end(), which uses
kunit_free_suite_set() to free them, as kunit_module_exit() and
kunit_run_all_tests() do. As the second arg of free_suite_set_at_end()
is a local variable, copy it before freeing to avoid a
wild-memory-access.

After applying this patch, the following memory leaks are no longer
detected.

unreferenced object 0xffff8881001de400 (size 1024):
  comm "kunit_try_catch", pid 1396, jiffies 4294720452 (age 932.801s)
  hex dump (first 32 bytes):
    73 75 69 74 65 32 00 00 00 00 00 00 00 00 00 00  suite2..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817db753>] __kmalloc_node_track_caller+0x53/0x150
    [<ffffffff817bd242>] kmemdup+0x22/0x50
    [<ffffffff829e961d>] kunit_filter_suites+0x44d/0xcc0
    [<ffffffff829eb69f>] filter_suites_test+0x12f/0x360
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff8881052cd388 (size 192):
  comm "kunit_try_catch", pid 1396, jiffies 4294720452 (age 932.801s)
  hex dump (first 32 bytes):
    a0 85 9e 82 ff ff ff ff 80 cd 7c 84 ff ff ff ff  ..........|.....
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817dbad2>] __kmalloc+0x52/0x150
    [<ffffffff829e9651>] kunit_filter_suites+0x481/0xcc0
    [<ffffffff829eb69f>] filter_suites_test+0x12f/0x360
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff888100da8400 (size 1024):
  comm "kunit_try_catch", pid 1398, jiffies 4294720454 (age 781.945s)
  hex dump (first 32 bytes):
    73 75 69 74 65 32 00 00 00 00 00 00 00 00 00 00  suite2..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817db753>] __kmalloc_node_track_caller+0x53/0x150
    [<ffffffff817bd242>] kmemdup+0x22/0x50
    [<ffffffff829e961d>] kunit_filter_suites+0x44d/0xcc0
    [<ffffffff829eb13f>] filter_suites_test_glob_test+0x12f/0x560
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff888105117878 (size 96):
  comm "kunit_try_catch", pid 1398, jiffies 4294720454 (age 781.945s)
  hex dump (first 32 bytes):
    a0 85 9e 82 ff ff ff ff a0 ac 7c 84 ff ff ff ff  ..........|.....
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817dbad2>] __kmalloc+0x52/0x150
    [<ffffffff829e9651>] kunit_filter_suites+0x481/0xcc0
    [<ffffffff829eb13f>] filter_suites_test_glob_test+0x12f/0x560
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff888102c31c00 (size 1024):
  comm "kunit_try_catch", pid 1404, jiffies 4294720460 (age 781.948s)
  hex dump (first 32 bytes):
    6e 6f 72 6d 61 6c 5f 73 75 69 74 65 00 00 00 00  normal_suite....
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817db753>] __kmalloc_node_track_caller+0x53/0x150
    [<ffffffff817bd242>] kmemdup+0x22/0x50
    [<ffffffff829ecf17>] kunit_filter_attr_tests+0xf7/0x860
    [<ffffffff829e99ff>] kunit_filter_suites+0x82f/0xcc0
    [<ffffffff829ea975>] filter_attr_test+0x195/0x5f0
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff8881052cd250 (size 192):
  comm "kunit_try_catch", pid 1404, jiffies 4294720460 (age 781.948s)
  hex dump (first 32 bytes):
    a0 85 9e 82 ff ff ff ff 00 a9 7c 84 ff ff ff ff  ..........|.....
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817dbad2>] __kmalloc+0x52/0x150
    [<ffffffff829ecfc1>] kunit_filter_attr_tests+0x1a1/0x860
    [<ffffffff829e99ff>] kunit_filter_suites+0x82f/0xcc0
    [<ffffffff829ea975>] filter_attr_test+0x195/0x5f0
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff888104f4e400 (size 1024):
  comm "kunit_try_catch", pid 1408, jiffies 4294720464 (age 781.944s)
  hex dump (first 32 bytes):
    73 75 69 74 65 00 00 00 00 00 00 00 00 00 00 00  suite...........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff817db753>] __kmalloc_node_track_caller+0x53/0x150
    [<ffffffff817bd242>] kmemdup+0x22/0x50
    [<ffffffff829ecf17>] kunit_filter_attr_tests+0xf7/0x860
    [<ffffffff829e99ff>] kunit_filter_suites+0x82f/0xcc0
    [<ffffffff829e9fc3>] filter_attr_skip_test+0x133/0x6e0
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20
unreferenced object 0xffff8881052cc620 (size 192):
  comm "kunit_try_catch", pid 1408, jiffies 4294720464 (age 781.944s)
  hex dump (first 32 bytes):
    a0 85 9e 82 ff ff ff ff c0 a8 7c 84 ff ff ff ff  ..........|.....
    00 00 00 00 00 00 00 00 02 00 00 00 02 00 00 00  ................
  backtrace:
    [<ffffffff817dbad2>] __kmalloc+0x52/0x150
    [<ffffffff829ecfc1>] kunit_filter_attr_tests+0x1a1/0x860
    [<ffffffff829e99ff>] kunit_filter_suites+0x82f/0xcc0
    [<ffffffff829e9fc3>] filter_attr_skip_test+0x133/0x6e0
    [<ffffffff829e802a>] kunit_generic_run_threadfn_adapter+0x4a/0x90
    [<ffffffff81236fc6>] kthread+0x2b6/0x380
    [<ffffffff81096afd>] ret_from_fork+0x2d/0x70
    [<ffffffff81003511>] ret_from_fork_asm+0x11/0x20

Fixes: e5857d396f35 ("kunit: flatten kunit_suite*** to kunit_suite** in .kunit_test_suites")
Fixes: 76066f93f1df ("kunit: add tests for filtering attributes")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Suggested-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Suggested-by: David Gow <davidgow@google.com>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202309142251.uJ8saAZv-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202309270433.wGmFRGjd-lkp@intel.com/
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2023-09-28  kunit: Fix possible memory leak in kunit_filter_suites()  (Jinjie Ruan)
If the outer for loop iterates more than once and fails on an iteration
after the first, the filtered_suite and filtered_suite->test_cases
already allocated by kunit_filter_attr_tests() in earlier passes of the
inner for loop are leaked. So add a new free_filtered_suite err label
and free the filtered_suite and filtered_suite->test_cases allocated so
far. And change the kmalloc_array() of copy to kcalloc() to zero the
copy, making the kfree() safe.

Fixes: 529534e8cba3 ("kunit: Add ability to filter attributes")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2023-09-28  kunit: Fix the wrong kfree of copy for kunit_filter_suites()  (Jinjie Ruan)
If the outer for loop iterates more than once and fails on an iteration
after the first, the copy pointer has already been advanced. So free
copy_start, the saved backup of the original copy pointer, instead.

Fixes: abbf73816b6f ("kunit: fix possible memory leak in kunit_filter_suites()")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2023-09-28  kunit: Fix missed memory release in kunit_free_suite_set()  (Jinjie Ruan)
After modprobe cpumask_kunit and rmmod cpumask_kunit, kmemleak detects
a suspected memory leak, as below.

If kunit_filter_suites() in kunit_module_init() succeeds,
suite_set.start will not be NULL and the kunit_free_suite_set() in
kunit_module_exit() should free all the memory which has not been
freed. However, the test_cases in the suites are left out.

unreferenced object 0xffff54ac47e83200 (size 512):
  comm "modprobe", pid 592, jiffies 4294913238 (age 1367.612s)
  hex dump (first 32 bytes):
    84 13 1a f0 d3 b6 ff ff 30 68 1a f0 d3 b6 ff ff  ........0h......
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<000000008dec63a2>] slab_post_alloc_hook+0xb8/0x368
    [<00000000ec280d8e>] __kmem_cache_alloc_node+0x174/0x290
    [<00000000896c7740>] __kmalloc+0x60/0x2c0
    [<000000007a50fa06>] kunit_filter_suites+0x254/0x5b8
    [<0000000078cc98e2>] kunit_module_notify+0xf4/0x240
    [<0000000033cea952>] notifier_call_chain+0x98/0x17c
    [<00000000973d05cc>] notifier_call_chain_robust+0x4c/0xa4
    [<000000005f95895f>] blocking_notifier_call_chain_robust+0x4c/0x74
    [<0000000048e36fa7>] load_module+0x1a2c/0x1c40
    [<0000000004eb8a91>] init_module_from_file+0x94/0xcc
    [<0000000037dbba28>] idempotent_init_module+0x184/0x278
    [<00000000161b75cb>] __arm64_sys_finit_module+0x68/0xa8
    [<000000006dc1669b>] invoke_syscall+0x44/0x100
    [<00000000fa87e304>] el0_svc_common.constprop.1+0x68/0xe0
    [<000000009d8ad866>] do_el0_svc+0x1c/0x28
    [<000000005b83c607>] el0_svc+0x3c/0xc4

Fixes: a127b154a8f2 ("kunit: tool: allow filtering test cases via glob")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Rae Moar <rmoar@google.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>