| author | Linus Torvalds <torvalds@linux-foundation.org> | 2026-01-08 08:47:05 -1000 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2026-01-08 08:47:05 -1000 |
| commit | 5572ad8fddecd4a0db19801262072ff5916b7589 (patch) | |
| tree | 2f4572a9a6b14d511581f0c2467f2e323c687ea4 /kernel | |
| parent | f2a3b12b305c7bb72467b2a56d19a4587b6007f9 (diff) | |
| parent | 1e2ed4bfd50ace3c4272cfab7e9aa90956fb7ae0 (diff) | |
Merge tag 'trace-v6.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing fixes from Steven Rostedt:
- Remove useless assignment of soft_mode variable

  The function __ftrace_event_enable_disable() sets "soft_mode" in one
  of the branch paths but doesn't use it after that. Remove the setting
  of that variable.

- Add a cond_resched() in ring_buffer_resize()

  The resize function that allocates all the pages for the ring buffer
  was causing a soft lockup on PREEMPT_NONE configs when allocating
  large buffers on machines with many CPUs. Hopefully this is the last
  cond_resched() that needs to be added, as PREEMPT_LAZY becomes the
  norm in the future. (A standalone sketch of the failure mode follows
  this list.)

- Make ftrace_graph_ent depth field signed

  The "depth" field of struct ftrace_graph_ent was converted from "int"
  to "unsigned long" for alignment reasons, to work with being embedded
  in other structures. The conversion from signed to unsigned caused
  the integrity checks to always pass, as they tested whether "depth"
  was less than zero, which is never true for an unsigned value. Make
  the field a signed long. (See the sketch after this list.)

- Add recursion protection to stack trace events

  An infinite recursion was triggered when a stack trace event called
  into RCU, which internally called rcu_read_unlock_special(); that
  triggered another event that was also doing stack traces, which hit
  the same RCU lock and called rcu_read_unlock_special() again.

  Update trace_test_and_set_recursion() to add a set of context checks
  for events to use, and have the stack trace event use that for
  recursion protection. (See the sketch after this list.)

- Make the variable ftrace_dump_on_oops static

  The sysctl cleanup that moved updates into the files that use them
  moved the only reference of ftrace_dump_on_oops into trace.c. It is
  no longer used outside of that file, so make it static.
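
For the cond_resched() fix, a minimal user-space analogue may help: on PREEMPT_NONE the kernel only reschedules at explicit points, so a long free loop must yield voluntarily. This sketch is not kernel code; sched_yield() stands in for cond_resched(), and the loop body and counts are invented for illustration.

```c
/*
 * User-space analogue, not kernel code: sched_yield() stands in for
 * cond_resched(). On PREEMPT_NONE, a kernel loop shaped like this one
 * never gives the scheduler a chance to run, so the soft-lockup
 * watchdog eventually fires; yielding inside the loop avoids that.
 */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Stand-in for walking the ring buffer's per-CPU page lists. */
	for (long page = 0; page < 1000000; page++) {
		free(malloc(4096));	/* stand-in for free_buffer_page() */

		sched_yield();		/* what cond_resched() does in the kernel */
	}
	puts("freed everything without monopolizing the CPU");
	return 0;
}
```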
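The dead integrity check from the "depth" change is easy to reproduce outside the kernel. A minimal sketch in standard C; the struct names here are invented for illustration and are not the kernel's:

```c
/*
 * Why "depth < 0" can never fire on an unsigned field: the comparison
 * is always false, so the sanity check silently passes. Making the
 * field a signed long restores the check.
 */
#include <stdio.h>

struct ent_unsigned { unsigned long depth; };	/* the buggy layout */
struct ent_signed   { long depth; };		/* the fixed layout */

int main(void)
{
	struct ent_unsigned u = { .depth = (unsigned long)-1 }; /* underflowed */
	struct ent_signed   s = { .depth = -1 };

	if (u.depth < 0)	/* always false: unsigned values can't be negative */
		printf("unsigned check fired\n");
	else
		printf("unsigned check silently passed (the bug)\n");

	if (s.depth < 0)	/* works as intended on a signed long */
		printf("signed check caught depth = %ld\n", s.depth);

	return 0;
}
```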
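The recursion fix follows a common guard-bit pattern: take a recursion bit on entry and bail out if it is already set, so a re-entrant stack trace records nothing instead of recursing forever. The following is a hedged single-threaded user-space sketch with invented names; the kernel's actual API is trace_test_and_set_recursion()/trace_clear_recursion() with per-CPU, per-context bits.

```c
#include <stdio.h>

/* Invented guard; the kernel tracks this per CPU and per context. */
static int in_stacktrace;

static void record_stack_trace(int depth_left);

/* Pretend emitting an event itself wants a stack trace, as the
 * rcu_read_unlock_special() path did in the reported bug. */
static void emit_event(int depth_left)
{
	record_stack_trace(depth_left);
}

static void record_stack_trace(int depth_left)
{
	if (in_stacktrace) {		/* guard: refuse to re-enter */
		printf("recursion blocked\n");
		return;
	}
	in_stacktrace = 1;

	printf("recording stack trace\n");
	if (depth_left > 0)
		emit_event(depth_left - 1);	/* would recurse forever without the guard */

	in_stacktrace = 0;		/* what trace_clear_recursion() does */
}

int main(void)
{
	record_stack_trace(3);
	return 0;
}
```
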
* tag 'trace-v6.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
trace: ftrace_dump_on_oops[] is not exported, make it static
tracing: Add recursion protection in kernel stack trace recording
ftrace: Make ftrace_graph_ent depth field signed
ring-buffer: Avoid softlockup in ring_buffer_resize() during memory free
tracing: Drop unneeded assignment to soft_mode
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/trace/ring_buffer.c | 2 |
| -rw-r--r-- | kernel/trace/trace.c | 8 |
| -rw-r--r-- | kernel/trace/trace_events.c | 7 |
3 files changed, 12 insertions, 5 deletions
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 41c9f5d079be..630221b00838 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3137,6 +3137,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 				 list) {
 			list_del_init(&bpage->list);
 			free_buffer_page(bpage);
+
+			cond_resched();
 		}
 	}
  out_err_unlock:
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6f2148df14d9..baec63134ab6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -138,7 +138,7 @@ cpumask_var_t __read_mostly tracing_buffer_mask;
  * by commas.
  */
 /* Set to string format zero to disable by default */
-char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";
+static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0";
 
 /* When set, tracing will stop when a WARN*() is hit */
 static int __disable_trace_on_warning;
@@ -3012,6 +3012,11 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	int bit;
+
+	bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);
+	if (bit < 0)
+		return;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
@@ -3080,6 +3085,7 @@ static void __ftrace_trace_stack(struct trace_array *tr,
 	/* Again, don't let gcc optimize things here */
 	barrier();
 	__this_cpu_dec(ftrace_stack_reserve);
+	trace_clear_recursion(bit);
 }
 
 static inline void ftrace_trace_stack(struct trace_array *tr,
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 76067529db61..137b4d9bb116 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -826,16 +826,15 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
 		 * When soft_disable is set and enable is set, we want to
 		 * register the tracepoint for the event, but leave the event
 		 * as is. That means, if the event was already enabled, we do
-		 * nothing (but set soft_mode). If the event is disabled, we
-		 * set SOFT_DISABLED before enabling the event tracepoint, so
-		 * it still seems to be disabled.
+		 * nothing. If the event is disabled, we set SOFT_DISABLED
+		 * before enabling the event tracepoint, so it still seems
+		 * to be disabled.
 		 */
 		if (!soft_disable)
 			clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags);
 		else {
 			if (atomic_inc_return(&file->sm_ref) > 1)
 				break;
-			soft_mode = true;
 			/* Enable use of trace_buffered_event */
 			trace_buffered_event_enable();
 		}
