author     Vincent Donnefort <vincent.donnefort@arm.com>   2021-06-21 11:37:52 +0100
committer  Peter Zijlstra <peterz@infradead.org>           2021-06-22 16:41:59 +0200
commit     d7d607096ae6d378b4e92d49946d22739c047d4c (patch)
tree       0df59f0fce007681a493960a9caeee22023a9589 /kernel/sched
parent     fecfcbc288e9f4923f40fd23ca78a6acdc7fdf6c (diff)
sched/deadline: Fix Deadline utilization tracking during policy change
DL keeps track of the utilization on a per-rq basis with the structure avg_dl. This utilization is updated during task_tick_dl(), put_prev_task_dl() and set_next_task_dl(). However, when the current running task changes its policy, set_next_task_dl(), which would usually take care of updating the utilization when the rq starts running DL tasks, will not see such a change, leaving the avg_dl structure outdated. When that very same task is dequeued later, put_prev_task_dl() will then update the utilization based on a stale last_update_time, leading to a huge spike in the DL utilization signal.

The signal eventually recovers from this issue after a few ms. Even if no DL tasks are run, avg_dl is also updated in __update_blocked_others(). But as the CPU capacity depends partly on avg_dl, this issue nonetheless has a significant impact on the scheduler.

Fix this issue by ensuring a load update when a running task changes its policy to DL.

Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking")
Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/1624271872-211872-3-git-send-email-vincent.donnefort@arm.com
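To illustrate the triggering scenario, here is a minimal userspace sketch (not part of the patch) in which the calling task, while running, switches itself to SCHED_DEADLINE via the sched_setattr() syscall. The struct sched_attr layout is declared by hand because glibc provides no wrapper, and the runtime/deadline/period values are arbitrary examples; this assumes a Linux system that exposes SYS_sched_setattr and a caller with CAP_SYS_NICE.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>        /* SCHED_DEADLINE */

/* glibc has no sched_setattr() wrapper, so declare the attr layout by hand. */
struct sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
};

int main(void)
{
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size           = sizeof(attr);
        attr.sched_policy   = SCHED_DEADLINE;
        attr.sched_runtime  =  10ULL * 1000 * 1000;   /*  10 ms */
        attr.sched_deadline = 100ULL * 1000 * 1000;   /* 100 ms */
        attr.sched_period   = 100ULL * 1000 * 1000;   /* 100 ms */

        /*
         * The calling task is the currently running task on its rq; changing
         * its own policy to SCHED_DEADLINE takes the switched_to_dl() path
         * patched below. Requires root (or CAP_SYS_NICE).
         */
        if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
                perror("sched_setattr");
                return 1;
        }

        /* Burn a little CPU as a DL task so the PELT signals are exercised. */
        for (volatile unsigned long i = 0; i < 50000000UL; i++)
                ;

        return 0;
}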
Diffstat (limited to 'kernel/sched')
-rw-r--r--   kernel/sched/deadline.c   2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 22878cd5bd70..aaacd6cfd42f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2497,6 +2497,8 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
                         check_preempt_curr_dl(rq, p, 0);
                 else
                         resched_curr(rq);
+        } else {
+                update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
         }
 }
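For context, the hunk lands in the else branch of switched_to_dl(), taken when the task changing policy is the one currently running on the rq. Roughly, the patched function behaves as in the simplified sketch below; the lines outside the hunk are paraphrased rather than quoted from deadline.c.

static void switched_to_dl(struct rq *rq, struct task_struct *p)
{
        /* ... inactive-timer and bandwidth handling elided ... */

        if (rq->curr != p) {
                /* p is not running: preempt the current task or reschedule. */
                if (dl_task(rq->curr))
                        check_preempt_curr_dl(rq, p, 0);
                else
                        resched_curr(rq);
        } else {
                /*
                 * p is already the running task and just became DL, so
                 * set_next_task_dl() is never called for this transition.
                 * Refresh the DL PELT signal now so last_update_time is not
                 * stale when put_prev_task_dl() runs later.
                 */
                update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
        }
}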