path: root/kernel
Age  Commit message  Author
2025-05-09tracing: Use atomic_inc_return() for updating "disabled" counter in irqsoff tracerSteven Rostedt
The irqsoff tracer uses the per CPU "disabled" field to prevent corruption of the accounting when it starts to trace interrupts disabled, but there's a slight race that could happen if for some reason it was called twice. Use atomic_inc_return() instead. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212236.567884756@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
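A minimal sketch of the nesting guard this change describes (kernel-style C; the data pointer and label are illustrative, not the literal diff):

  /* Claim the per-CPU counter and detect nesting in one atomic step */
  if (atomic_inc_return(&data->disabled) != 1)
          goto out;       /* raced with a second entry on this CPU, bail */

  /* ... irqs-off latency accounting ... */
out:
  atomic_dec(&data->disabled);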
2025-05-09tracing: Convert the per CPU "disabled" counter to local from atomicSteven Rostedt
The per CPU "disabled" counter is used for the latency tracers and stack tracers to make sure that their accounting isn't messed up by an NMI or interrupt coming in and affecting the same CPU data. But the counter is an atomic_t type. As it only needs to synchronize against the current CPU, switch it over to local_t type. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212236.394925376@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-09tracing: branch: Use tracer_tracing_is_on_cpu() instead of "disabled" fieldSteven Rostedt
The branch tracer currently checks the per CPU "disabled" field to know if tracing is enabled or not for the CPU. As the "disabled" value is not used anymore to turn off tracing generically, use tracer_tracing_is_on_cpu() instead. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212236.224658526@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-09ring-buffer: Add ring_buffer_record_is_on_cpu()Steven Rostedt
Add the function ring_buffer_record_is_on_cpu() that returns true if the ring buffer for a given CPU is writable and false otherwise. Also add tracer_tracing_is_on_cpu() to return whether the ring buffer for a given CPU is writable for a given trace_array. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212236.059853898@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
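A sketch of how a per-CPU tracer callback might use the new helper (variable names assumed):

  /* Skip recording when this CPU's ring buffer is not writable */
  if (!tracer_tracing_is_on_cpu(tr, raw_smp_processor_id()))
          return;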
2025-05-09tracing: Do not use per CPU array_buffer.data->disabled for cpumaskSteven Rostedt
The per CPU "disabled" value was the original way to disable tracing when the tracing subsystem was first created. Today, the ring buffer infrastructure has its own way to disable tracing. In fact, things have changed so much since 2008 that many things ignore the disable flag. Do not bother setting the per CPU disabled flag of the array_buffer data to use to determine what CPUs can write to the buffer and only rely on the ring buffer code itself to disabled it. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212235.885452497@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-09ftrace: Do not disable function graph based on "disabled" fieldSteven Rostedt
The per CPU "disabled" value was the original way to disable tracing when the tracing subsystem was first created. Today, the ring buffer infrastructure has its own way to disable tracing. In fact, things have changed so much since 2008 that many things ignore the disable flag. Do not bother disabling the function graph tracer if the per CPU disabled field is set. Just record as normal. If tracing is disabled in the ring buffer it will not be recorded. Also, when tracing is enabled again, it will not drop the return call of the function. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212235.715752008@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-09tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabledSteven Rostedt
The per CPU "disabled" value was the original way to disable tracing when the tracing subsystem was first created. Today, the ring buffer infrastructure has its own way to disable tracing. In fact, things have changed so much since 2008 that many things ignore the disable flag. The kdb_ftdump() function iterates over all the current tracing CPUs and increments the "disabled" counter before doing the dump, and decrements it afterward. As the disabled flag can be ignored, doing this today is not reliable. Instead, simply call tracer_tracing_off() and then tracer_tracing_on() to disable and then enabled the entire ring buffer in one go! Cc: Jason Wessel <jason.wessel@windriver.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Daniel Thompson <danielt@kernel.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/20250505212235.549033722@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-09tracing: Use tracer_tracing_disable() instead of "disabled" field for ftrace_dump_one()Steven Rostedt
The per CPU "disabled" value was the original way to disable tracing when the tracing subsystem was first created. Today, the ring buffer infrastructure has its own way to disable tracing. In fact, things have changed so much since 2008 that many things ignore the disable flag. The ftrace_dump_one() function iterates over all the current tracing CPUs and increments the "disabled" counter before doing the dump, and decrements it afterward. As the disabled flag can be ignored, doing this today is not reliable. Instead use the new tracer_tracing_disable() that calls into the ring buffer code to do the disabling. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212235.381188238@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-09tracing: Add tracer_tracing_disable/enable() functionsSteven Rostedt
Allow a tracer to disable writing to its buffer for a temporary amount of time and re-enable it. The tracer_tracing_disable() will disable writing to the trace array buffer, and requires a tracer_tracing_enable() to re-enable it. The difference between tracer_tracing_disable() and tracer_tracing_off() is that the disable version can nest, and requires as many enable() calls as disable() calls to re-enable the buffer. Whereas the off() function can be called multiple times and only requires a single tracer_tracing_on() to re-enable the buffer. Cc: Jason Wessel <jason.wessel@windriver.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Daniel Thompson <danielt@kernel.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/20250505212235.210330010@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
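A sketch of the difference in semantics (tr is a trace_array pointer):

  tracer_tracing_disable(tr);     /* depth 1: writes blocked */
  tracer_tracing_disable(tr);     /* depth 2: nests */
  tracer_tracing_enable(tr);      /* depth 1: still blocked */
  tracer_tracing_enable(tr);      /* depth 0: writes allowed again */

  tracer_tracing_off(tr);
  tracer_tracing_off(tr);         /* no nesting */
  tracer_tracing_on(tr);          /* a single call re-enables the buffer */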
2025-05-09genirq: Fix inverted condition in handle_nested_irq()Thomas Gleixner
Marek reported that the rework of handle_nested_irq() introduced an inverted condition, which prevents handling of interrupts. Fix it up. Fixes: 2ef2e13094c7 ("genirq/chip: Rework handle_nested_irq()") Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Closes: https://lore.kernel.org/all/46ed4040-ca11-4157-8bd7-13c04c113734@samsung.com
2025-05-09sched_ext: Remove bpf_scx_get_func_protoFeng Yang
task_storage_{get,delete} have been moved to bpf_base_func_proto. Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Feng Yang <yangfeng@kylinos.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/bpf/20250506061434.94277-3-yangfeng59949@163.com
2025-05-09bpf: Allow some trace helpers for all prog typesFeng Yang
If a trace helper works under NMI and doesn't use any context-dependent things, it should be fine for any program type. The detailed discussion is in [1]. [1] https://lore.kernel.org/all/CAEf4Bza6gK3dsrTosk6k3oZgtHesNDSrDd8sdeQ-GiS6oJixQg@mail.gmail.com/ Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Feng Yang <yangfeng@kylinos.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/bpf/20250506061434.94277-2-yangfeng59949@163.com
2025-05-09cgroup/cpuset: Extend kthread_is_per_cpu() check to all PF_NO_SETAFFINITY tasksWaiman Long
Commit ec5fbdfb99d1 ("cgroup/cpuset: Enable update_tasks_cpumask() on top_cpuset") enabled us to pull CPUs dedicated to child partitions from tasks in top_cpuset by ignoring per cpu kthreads. However, there can be other kthreads that are not per cpu but have PF_NO_SETAFFINITY flag set to indicate that we shouldn't mess with their CPU affinity. For other kthreads, their affinity will be changed to skip CPUs dedicated to child partitions whether it is an isolating or a scheduling one. As all the per cpu kthreads have PF_NO_SETAFFINITY set, the PF_NO_SETAFFINITY tasks are essentially a superset of per cpu kthreads. Fix this issue by dropping the kthread_is_per_cpu() check and checking the PF_NO_SETAFFINITY flag instead. Fixes: ec5fbdfb99d1 ("cgroup/cpuset: Enable update_tasks_cpumask() on top_cpuset") Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Tejun Heo <tj@kernel.org>
2025-05-09cgroup/cpuset: drop useless cpumask_empty() in compute_effective_exclusive_cpumask()Yury Norov
Empty cpumasks can't intersect with any others. Therefore, testing for non-emptiness is useless. Signed-off-by: Yury Norov <yury.norov@gmail.com> Reviewed-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2025-05-09bpf/verifier: Handle BPF_LOAD_ACQ instructions in insn_def_regno()Peilin Ye
In preparation for supporting BPF load-acquire and store-release instructions for architectures where bpf_jit_needs_zext() returns true (e.g. riscv64), make insn_def_regno() handle load-acquires properly. Acked-by: Björn Töpel <bjorn@kernel.org> Tested-by: Björn Töpel <bjorn@rivosinc.com> # QEMU/RVA23 Signed-off-by: Peilin Ye <yepeilin@google.com> Reviewed-by: Pu Lehui <pulehui@huawei.com> Link: https://lore.kernel.org/r/09cb2aec979aaed9d16db41f0f5b364de39377c0.1746588351.git.yepeilin@google.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-05-09PM: hibernate: add configurable delay for pm_testZihuan Zhang
Turn the default 5 second test delay for hibernation into a configurable module parameter, so users can determine how long to wait in this pseudo-hibernate state before resuming the system. The configurable delay parameter has been added for suspend, so add an analogous one for hibernation. Example (wait 30 seconds):

  # echo 30 > /sys/module/hibernate/parameters/pm_test_delay
  # echo core > /sys/power/pm_test

Signed-off-by: Zihuan Zhang <zhangzihuan@kylinos.cn> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Link: https://patch.msgid.link/20250507063520.419635-1-zhangzihuan@kylinos.cn [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
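A sketch of how such a module parameter is typically declared (the name matches the sysfs path above; the exact declaration in hibernate.c may differ):

  static unsigned int pm_test_delay = 5;  /* seconds, old hard-coded default */
  module_param(pm_test_delay, uint, 0644);
  MODULE_PARM_DESC(pm_test_delay,
                   "Number of seconds to wait in the pm_test state");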
2025-05-09PM: wakeup: Delete space in the end of string shown by pm_show_wakelocks()Zijun Hu
pm_show_wakelocks() is called to generate a string when showing attributes /sys/power/wake_(lock|unlock), but the string ends with an unwanted space that was added back by mistake by commit c9d967b2ce40 ("PM: wakeup: simplify the output logic of pm_show_wakelocks()"). Remove the unwanted space. Fixes: c9d967b2ce40 ("PM: wakeup: simplify the output logic of pm_show_wakelocks()") Signed-off-by: Zijun Hu <quic_zijuhu@quicinc.com> Link: https://patch.msgid.link/20250505-fix_power-v1-1-0f7f2c2f338c@quicinc.com [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2025-05-09kernel/nsproxy: remove unnecessary guardsJoel Savitz
In free_nsproxy() and the error path of create_new_namespaces() the put_*_ns() calls are guarded by unnecessary NULL checks. put_pid_ns(), put_ipc_ns(), put_uts_ns(), and put_time_ns() will never receive a NULL argument unless their namespace type is disabled, and in this case all four become no-ops at compile time anyway. put_mnt_ns() will never receive a NULL argument at any time. This unguarded usage is in line with other call sites of put_*_ns(). Signed-off-by: Joel Savitz <jsavitz@redhat.com> Link: https://lore.kernel.org/20250508184930.183040-2-jsavitz@redhat.com Signed-off-by: Christian Brauner <brauner@kernel.org>
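A sketch of the simplification, using one of the struct nsproxy fields as an example:

  /* before: */
  if (ns->pid_ns_for_children)
          put_pid_ns(ns->pid_ns_for_children);

  /* after: put_pid_ns() copes on its own (and compiles to a no-op when
   * PID namespaces are disabled), so the guard adds nothing */
  put_pid_ns(ns->pid_ns_for_children);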
2025-05-09power: freeze filesystems during suspend/resumeChristian Brauner
Now all the pieces are in place to actually allow the power subsystem to freeze/thaw filesystems during suspend/resume. Filesystems are only frozen and thawed if the power subsystem does actually own the freeze. We could bubble up errors and fail suspend/resume if the error isn't EBUSY (aka it's already frozen) but I don't think that this is worth it. Filesystem freezing during suspend/resume is best-effort. If the user has 500 ext4 filesystems mounted and 4 fail to freeze for whatever reason then we simply skip them. What we have now is already a big improvement and let's see how we fare with it before making our lives even harder (and uglier) than we have to. We add a new sysfs knob /sys/power/freeze_filesystems that will allow userspace to freeze filesystems during suspend/hibernate. For now it defaults to off. The thaw logic doesn't require checking whether freezing is enabled because the power subsystem exclusively owns frozen filesystems for the duration of suspend/hibernate and is able to skip filesystems it doesn't need to freeze. Also it is technically possible that filesystem_freeze_enabled is true and power freezes the filesystems but before freezing all processes another process disables filesystem_freeze_enabled. If power were to place the filesystems_thaw() call under filesystem_freeze_enabled it would fail to thaw the filesystems it froze. The exclusive holder mechanism makes it possible to iterate through the list without any concern, making sure that no filesystems are left frozen. Link: https://lore.kernel.org/r/20250402-work-freeze-v2-3-6719a97b52ac@kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
2025-05-08perf: Fix confusing aux iterationFrederic Weisbecker
While an event tears down all links to it as an aux, the iteration happens on the event's group leader instead of the group itself. If the event is a group leader, it has no effect because the event is also its own group leader. But otherwise there would be a risk of detaching all the sibling events from the wrong group leader. It just happens to work because each sibling's aux link is tested against the right event before proceeding. Also the ctx lock is the same for the events and their group leader so the iteration is safe. Yet the iteration is confusing. Clarify the actual intent. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250424161128.29176-5-frederic@kernel.org
2025-05-08perf: Remove too early and redundant CPU hotplug handlingFrederic Weisbecker
The CPU hotplug handlers are called twice: at prepare and online stage. Their role is to:

1) Enable/disable a CPU context. This is irrelevant and even buggy at the prepare stage because the CPU is still offline. On early secondary CPU up, creating an event attached to that CPU might silently fail because the CPU context is observed as online but the context installation's IPI failure is ignored.

2) Update the scope cpumasks and re-migrate the events accordingly in the CPU down case. This is irrelevant at the prepare stage.

3) Remove the events attached to the context of the offlining CPU. It even uses an (unnecessary) IPI for it. This is also irrelevant at the prepare stage.

Also none of the *_PREPARE and *_STARTING architecture perf related CPU hotplug callbacks rely on CPUHP_PERF_PREPARE. CPUHP_AP_PERF_ONLINE is enough and the right place to perform the work. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250424161128.29176-4-frederic@kernel.org
2025-05-08perf: Fix irq work dereferencing garbageFrederic Weisbecker
The following commit: da916e96e2de ("perf: Make perf_pmu_unregister() useable") has introduced two significant changes to the event's parent lifecycle:

1) An event that has exited now has EVENT_TOMBSTONE as a parent. This can result in a situation where the delayed wakeup irq_work can accidentally dereference EVENT_TOMBSTONE on: CPU 0 CPU 1 ----- ----- __schedule() local_irq_disable() rq_lock() <NMI> perf_event_overflow() irq_work_queue(&child->pending_irq) </NMI> perf_event_task_sched_out() raw_spin_lock(&ctx->lock) ctx_sched_out() ctx->is_active = 0 event_sched_out(child) raw_spin_unlock(&ctx->lock) perf_event_release_kernel(parent) perf_remove_from_context(child) raw_spin_lock_irq(&ctx->lock) // Sees !ctx->is_active // Removes from context inline __perf_remove_from_context(child) perf_child_detach(child) event->parent = EVENT_TOMBSTONE raw_spin_rq_unlock_irq(rq); <IRQ> perf_pending_irq() perf_event_wakeup(child) ring_buffer_wakeup(child) rcu_dereference(child->parent->rb) <--- CRASH

This also concerns the call to kill_fasync() on parent->fasync.

2) The final parent reference count decrement can now happen before the final child reference count decrement. ie: the parent can now be freed before its child. On PREEMPT_RT, this can result in a situation where the delayed wakeup irq_work can accidentally dereference a freed parent: CPU 0 CPU 1 CPU 2 ----- ----- ------ perf_pmu_unregister() pmu_detach_events() pmu_get_event() atomic_long_inc_not_zero(&child->refcount) <NMI> perf_event_overflow() irq_work_queue(&child->pending_irq); </NMI> <IRQ> irq_work_run() wake_irq_workd() </IRQ> preempt_schedule_irq() =========> SWITCH to workd irq_work_run_list() perf_pending_irq() perf_event_wakeup(child) ring_buffer_wakeup(child) event = child->parent perf_event_release_kernel(parent) // Not last ref, PMU holds it put_event(child) // Last ref put_event(parent) free_event() call_rcu(...) rcu_core() free_event_rcu() rcu_dereference(event->rb) <--- CRASH

This also concerns the call to kill_fasync() on parent->fasync.

The "easy" solution to 1) is to check that event->parent is not EVENT_TOMBSTONE on perf_event_wakeup() (including both ring buffer and fasync uses). The "easy" solution to 2) is to make perf_event_wakeup() run entirely under rcu_read_lock(). However because of 2), sanity would prescribe to make event::parent an __rcu pointer and annotate each and every user to prove they are reliable.

Propose an alternate solution and restore the stable pointer to the parent until all its children have called _free_event() themselves to avoid any further accident. Also revert the EVENT_TOMBSTONE design that is mostly here to determine which caller of perf_event_exit_event() must perform the refcount decrement on a child event matching the increment in inherit_event(). Arrange instead for checking the attach state of an event prior to its removal and decrement the refcount of the child accordingly.

Fixes: da916e96e2de ("perf: Make perf_pmu_unregister() useable") Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
2025-05-08perf: Fix failing inherit_event() doing extra refcount decrement on parentFrederic Weisbecker
When inherit_event() fails after the child allocation but before the parent refcount has been incremented, calling put_event() wrongly decrements the reference to the parent, risking freeing it too early. Also pmu_get_event() can't be holding a reference to the child concurrently at this point since it is under pmus_srcu critical section. Fix it by restoring the deleted free_event() function and calling it on the failing child in order to free it directly under the verified assumption that its refcount is only 1. The refcount to the parent is then voluntarily omitted. Fixes: da916e96e2de ("perf: Make perf_pmu_unregister() useable") Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250424161128.29176-2-frederic@kernel.org
2025-05-08treewide, timers: Rename destroy_timer_on_stack() as timer_destroy_on_stack()Ingo Molnar
Move this API to the canonical timer_*() namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250507175338.672442-10-mingo@kernel.org
2025-05-08treewide, timers: Rename try_to_del_timer_sync() as timer_delete_sync_try()Ingo Molnar
Move this API to the canonical timer_*() namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250507175338.672442-9-mingo@kernel.org
2025-05-08timers: Rename init_timers() as timers_init()Ingo Molnar
Move this API to the canonical timers_*() namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250507175338.672442-8-mingo@kernel.org
2025-05-08timers: Rename NEXT_TIMER_MAX_DELTA as TIMER_NEXT_MAX_DELTAIngo Molnar
Move this macro to the canonical TIMER_* namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250507175338.672442-7-mingo@kernel.org
2025-05-08timers: Rename init_timer_on_stack_key() as timer_init_key_on_stack()Ingo Molnar
Move this API to the canonical timer_*() namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250507175338.672442-4-mingo@kernel.org
2025-05-08timers: Rename init_timer_key() as timer_init_key()Ingo Molnar
Move this API to the canonical timer_*() namespace. Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250507175338.672442-3-mingo@kernel.org
2025-05-08Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski
Cross-merge networking fixes after downstream PR (net-6.15-rc6). No conflicts. Adjacent changes: net/core/dev.c: 08e9f2d584c4 ("net: Lock netdevices during dev_shutdown") a82dc19db136 ("net: avoid potential race between netdev_get_by_index_lock() and netns switch") Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-08tracing: Just use this_cpu_read() to access ignore_pidSteven Rostedt
The ignore_pid boolean on the per CPU data descriptor is updated at sched_switch when a new task is scheduled in. If the new task is to be ignored, it is set to true, otherwise it is set to false. The current task should always have the correct value as it is updated when the task is scheduled in. Instead of breaking up the read of this value, which requires preemption to be disabled, just use this_cpu_read() which gives a snapshot of the value. Since the value will always be correct for a given task (because it's updated at sched switch) it doesn't need preemption disabled. This will also allow trace events to be called with preemption enabled. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212235.038958766@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
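A sketch of the simplified read (the field path follows the changelog):

  /* One snapshot is enough: the value is updated at sched_switch for the
   * current task, so no preempt_disable()/enable() dance is required */
  if (this_cpu_read(tr->array_buffer.data->ignore_pid))
          return;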
2025-05-08ftrace: Do not bother checking per CPU "disabled" flagSteven Rostedt
The per CPU "disabled" value was the original way to disable tracing when the tracing subsystem was first created. Today, the ring buffer infrastructure has its own way to disable tracing. In fact, things have changed so much since 2008 that many things ignore the disable flag. There's no reason for the function tracer to check it, if tracing is disabled, the ring buffer will not record the event anyway. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212234.868972758@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-08tracing/mmiotrace: Remove reference to unused per CPU data pointerSteven Rostedt
The mmiotracer referenced the per CPU array_buffer->data descriptor but never actually used it. Remove the references to it. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Link: https://lore.kernel.org/20250505212234.696945463@goodmis.org Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-08tracing/osnoise: Allow arbitrarily long CPU stringTomas Glozar
Allocate kernel memory for processing the CPU string (/sys/kernel/tracing/osnoise/cpus) also in osnoise_cpus_write() to allow the writing of a CPU string of an arbitrary length. This replaces the 256-byte buffer, which is insufficient with the rising number of CPUs. For example, if I wanted to measure on every even CPU on a system with 256 CPUs, the string would be 456 characters long. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/20250425091839.343289-1-tglozar@redhat.com Signed-off-by: Tomas Glozar <tglozar@redhat.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
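A sketch of the write-path pattern in the usual kernel idiom (variable names assumed, parsing elided):

  char *buf;

  buf = kmalloc(count + 1, GFP_KERNEL);
  if (!buf)
          return -ENOMEM;

  if (copy_from_user(buf, ubuf, count)) {
          kfree(buf);
          return -EFAULT;
  }
  buf[count] = '\0';

  /* ... cpulist_parse(buf, &osnoise_cpumask) or similar ... */

  kfree(buf);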
2025-05-08ftrace: Comment that ftrace_func_mapper is freed with free_ftrace_hash()Steven Rostedt
The structure ftrace_func_mapper only contains a single field and that is a ftrace_hash. It is used to abstract it away from a normal hash so that how users modify it can be controlled. The freeing of a ftrace_func_mapper structure is:

  free_ftrace_hash(&mapper->hash);

Without context, this looks like a bug. It should be commented that it is not a bug and it is freed this way. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/20250416165420.5c717420@gandalf.local.home Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
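A sketch of why that call is correct: the changelog says the structure has a single member, so freeing the embedded hash frees the whole mapper (the in-tree helper also releases any stored values via a callback first):

  struct ftrace_func_mapper {
          struct ftrace_hash      hash;   /* the only member */
  };

  /* Not a bug: the mapper has nothing else to release */
  free_ftrace_hash(&mapper->hash);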
2025-05-08ftrace: Expose call graph depth as unsigned intIlya Leoshkevich
Depth is stored as int because the code uses negative values to break out of iterations. But what is recorded is always zero or positive. So expose it as unsigned int instead of int. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Link: https://lore.kernel.org/20250412221847.17310-3-iii@linux.ibm.com Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-08ftrace: Show subops in enabled_functionsSteven Rostedt
The function graph infrastructure uses subops of the function tracer. These are not shown in enabled_functions. Add a "subops:" section to the enabled_functions line to show what functions are attached via subops. If the subops is from the function_graph infrastructure, then show the entry and return callbacks that are attached. Here's an example of the output:

  schedule_on_each_cpu (1) tramp: 0xffffffffc03ef000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60 subops: {ent:trace_graph_entry+0x0/0x20 ret:trace_graph_return+0x0/0x150}

Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/20250410153830.5d97f108@gandalf.local.home Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-05-08genirq/cpuhotplug: Fix up lock guards conversion brainf..tThomas Gleixner
The lock guard conversion converted raw_spin_lock_irq() to scoped_guard(raw_spinlock), which is obviously bogus and makes lockdep mightily unhappy. Note to self: Copy and pasta without using brain is a patently bad idea. Fixes: 88a4df117ad6 ("genirq/cpuhotplug: Convert to lock guards") Reported-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Borislav Petkov <bp@alien8.de>
2025-05-07cpufreq/sched: Move cpufreq-specific EAS checks to cpufreqRafael J. Wysocki
Doing cpufreq-specific EAS checks that require accessing policy internals directly from sched_is_eas_possible() is a bit unfortunate, so introduce cpufreq_ready_for_eas() in cpufreq, move those checks into that new function and make sched_is_eas_possible() call it. While at it, address a possible race between the EAS governor check and governor change by doing the former under the policy rwsem. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Christian Loehle <christian.loehle@arm.com> Tested-by: Christian Loehle <christian.loehle@arm.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Link: https://patch.msgid.link/2317800.iZASKD2KPV@rjwysocki.net
2025-05-07cpufreq/sched: schedutil: Add helper for governor checksRafael J. Wysocki
Add a helper for checking if schedutil is the current governor for a given cpufreq policy and use it in sched_is_eas_possible() to avoid accessing cpufreq policy internals directly from there. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Christian Loehle <christian.loehle@arm.com> Tested-by: Christian Loehle <christian.loehle@arm.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Link: https://patch.msgid.link/3365956.44csPzL39Z@rjwysocki.net
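A minimal sketch of what such a governor-check helper could look like (the helper name here is an assumption; schedutil_gov is the governor object in cpufreq_schedutil.c):

  static bool sugov_is_governor(struct cpufreq_policy *policy)
  {
          return policy->governor == &schedutil_gov;
  }

  /* sched_is_eas_possible() can then ask the helper instead of poking
   * at policy internals directly */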
2025-05-07module: ensure that kobject_put() is safe for module type kobjectsDmitry Antipov
In 'lookup_or_create_module_kobject()', an internal kobject is created using 'module_ktype'. So a call to 'kobject_put()' on the error handling path causes an attempt to use an uninitialized completion pointer in 'module_kobject_release()'. In this scenario, we just want to release the kobject without the extra synchronization required for a regular module unloading process, so adding an extra check for whether 'complete()' is actually required makes 'kobject_put()' safe. Reported-by: syzbot+7fb8a372e1f6add936dd@syzkaller.appspotmail.com Closes: https://syzkaller.appspot.com/bug?extid=7fb8a372e1f6add936dd Fixes: 942e443127e9 ("module: Fix mod->mkobj.kobj potentially freed too early") Cc: stable@vger.kernel.org Suggested-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Link: https://lore.kernel.org/r/20250507065044.86529-1-dmantipov@yandex.ru Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
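A sketch of the kind of guard described (the kobj_completion pointer is part of struct module_kobject; the exact in-tree fix may differ):

  static void module_kobject_release(struct kobject *kobj)
  {
          struct module_kobject *mk;

          mk = container_of(kobj, struct module_kobject, kobj);

          /* Internal kobjects created by lookup_or_create_module_kobject()
           * never set up a completion, so only signal real waiters */
          if (mk->kobj_completion)
                  complete(mk->kobj_completion);
  }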
2025-05-07Merge branch 'for-6.15-fixes' into for-6.16Tejun Heo
To receive 428dc9fc0873 ("sched_ext: bpf_iter_scx_dsq_new() should always initialize iterator") which conflicts with cdf5a6faa8cf ("sched_ext: Move dsq_hash into scx_sched"). The conflict is a simple context conflict which can be resolved by taking the changes from both commits in the right order.
2025-05-07sched_ext: bpf_iter_scx_dsq_new() should always initialize iteratorTejun Heo
BPF programs may call next() and destroy() on BPF iterators even after new() returns an error value (e.g. bpf_for_each() macro ignores error returns from new()). bpf_iter_scx_dsq_new() could leave the iterator in an uninitialized state after an error return causing bpf_iter_scx_dsq_next() to dereference garbage data. Make bpf_iter_scx_dsq_new() always clear $kit->dsq so that next() and destroy() become noops. Signed-off-by: Tejun Heo <tj@kernel.org> Fixes: 650ba21b131e ("sched_ext: Implement DSQ iterator") Cc: stable@vger.kernel.org # v6.12+ Acked-by: Andrea Righi <arighi@nvidia.com>
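A sketch of the defensive pattern (the dsq field name comes from the changelog; the flag check is illustrative):

  /* Zero the iterator state before any possible error return so that
   * next()/destroy() called after a failed new() see a NULL dsq and
   * simply do nothing */
  kit->dsq = NULL;

  if (flags & ~allowed_flags)     /* illustrative validity check */
          return -EINVAL;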
2025-05-07genirq: Use scoped_guard() to shut clang upThomas Gleixner
This code pattern trips clang up:

  if (fail)
          goto undo;

  guard(lock)(lock);
  do_stuff();
  return 0;

undo:
  ...

as it somehow extends the scope of the guard beyond the return statement. Replace it with a scoped guard to help it to get its act together. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Closes: https://lore.kernel.org/oe-kbuild-all/202505071809.ajpPxfoZ-lkp@intel.com/
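A sketch of the scoped form that keeps the critical section's extent explicit for the compiler (lock and helper names follow the pattern above):

  if (fail)
          goto undo;

  scoped_guard(raw_spinlock, &desc->lock) {
          do_stuff();
          return 0;       /* guard is dropped on return, scope stays clear */
  }

undo:
  ...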
2025-05-07PM: hibernate: split and simplify hib_submit_ioChristoph Hellwig
Split hib_submit_io into a sync and async version. The sync version is a small wrapper around bdev_rw_virt which implements all the logic to add a kernel direct mapping range to a bio and synchronously submits it, while the async version is slightly simplified using the bio_add_virt_nofail for adding the single range. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Rafael J. Wysocki <rafael@kernel.org> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Link: https://lore.kernel.org/r/20250507120451.4000627-13-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-05-07genirq: Remove unused remove_percpu_irq()Dr. David Alan Gilbert
remove_percpu_irq() has been unused since it was added in 2011 by commit 31d9d9b6d830 ("genirq: Add support for per-cpu dev_id interrupts") Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250420164656.112641-1-linux@treblig.org
2025-05-07genirq: Remove irq_[get|put]_desc*()Thomas Gleixner
All users are converted to the guards. Remove the helpers. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065422.729586582@linutronix.de
2025-05-07genirq/manage: Rework irq_set_irqchip_state()Thomas Gleixner
Use the new guards to get and lock the interrupt descriptor and tidy up the code. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065422.670808288@linutronix.de
2025-05-07genirq/manage: Rework irq_get_irqchip_state()Thomas Gleixner
Use the new guards to get and lock the interrupt descriptor and tidy up the code. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065422.612184618@linutronix.de
2025-05-07genirq/manage: Rework teardown_percpu_nmi()Thomas Gleixner
Use the new guards to get and lock the interrupt descriptor and tidy up the code. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065422.552884529@linutronix.de