path: root/kernel/trace/fgraph.c
author		Valentin Schneider <valentin.schneider@arm.com>	2022-01-20 16:25:19 +0000
committer	Peter Zijlstra <peterz@infradead.org>	2022-03-01 16:18:39 +0100
commit		fa2c3254d7cfff5f7a916ab928a562d1165f17bb (patch)
tree		678cc10a62564212f526fc4a65ea345fde95794e /kernel/trace/fgraph.c
parent		49bef33e4b87b743495627a529029156c6e09530 (diff)
sched/tracing: Don't re-read p->state when emitting sched_switch event
As of commit c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
the following sequence becomes possible:

		      p->__state = TASK_INTERRUPTIBLE;
		      __schedule()
			deactivate_task(p);
  ttwu()
    READ !p->on_rq
    p->__state = TASK_WAKING
			trace_sched_switch()
			  __trace_sched_switch_state()
			    task_state_index()
			      return 0;

TASK_WAKING isn't in TASK_REPORT, so the task appears as TASK_RUNNING in
the trace event.

Prevent this by pushing the value read from __schedule() down the trace
event.

Reported-by: Abhijeet Dharmapurikar <adharmap@quicinc.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20220120162520.570782-2-valentin.schneider@arm.com
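For illustration, a minimal sketch (not part of this commit) of what a probe
attached to the sched_switch tracepoint looks like once the prototype carries
the sampled state: the probe consumes the prev_state argument handed down from
__schedule() instead of re-reading prev->__state, which a concurrent ttwu()
may already have rewritten. The probe name my_sched_switch_probe and the
module boilerplate below are hypothetical.

#include <linux/module.h>
#include <linux/sched.h>
#include <trace/events/sched.h>

/* Hypothetical probe, shown only to illustrate the new prototype. */
static void my_sched_switch_probe(void *ignore, bool preempt,
				  unsigned int prev_state,	/* sampled in __schedule() */
				  struct task_struct *prev,
				  struct task_struct *next)
{
	/*
	 * Use prev_state here; dereferencing prev->__state again could
	 * observe TASK_WAKING (reported as TASK_RUNNING) set by ttwu().
	 */
	pr_debug("%s left in state 0x%x\n", prev->comm, prev_state);
}

static int __init my_probe_init(void)
{
	return register_trace_sched_switch(my_sched_switch_probe, NULL);
}

static void __exit my_probe_exit(void)
{
	unregister_trace_sched_switch(my_sched_switch_probe, NULL);
	tracepoint_synchronize_unregister();
}

module_init(my_probe_init);
module_exit(my_probe_exit);
MODULE_LICENSE("GPL");

The diff below applies the same prototype change to the in-tree fgraph probe,
which simply ignores the extra argument.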
Diffstat (limited to 'kernel/trace/fgraph.c')
-rw-r--r--	kernel/trace/fgraph.c	4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 22061d38fc00..19028e072cdb 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -415,7 +415,9 @@ free:
static void
ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
-				struct task_struct *prev, struct task_struct *next)
+				unsigned int prev_state,
+				struct task_struct *prev,
+				struct task_struct *next)
{
unsigned long long timestamp;
int index;