author:    Chengming Zhou <zhouchengming@bytedance.com>  2022-08-18 20:47:57 +0800
committer: Peter Zijlstra <peterz@infradead.org>  2022-08-23 11:01:17 +0200
commit:    78b6b15770618efb60d84e2d605f6b93dc94051b
tree:      88e2e48523e6e4f96f5d5a228594df6d9e19da46
parent:    8648f92a66a323ed01903d2cbb248cdbe2f312d9
sched/fair: Maintain task se depth in set_task_rq()
Previously, we only maintained a task's se depth in task_move_group_fair(); if a !fair task changed task groups, its se depth was not updated. Commit eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to FAIR") fixed the problem by updating the se depth in switched_to_fair() as well.

Then commit daa59407b558 ("sched/fair: Unify switched_{from,to}_fair() and task_move_group_fair()") unified these two functions and moved the se.depth setting into attach_task_cfs_rq(), and further into attach_entity_cfs_rq() with commit df217913e72e ("sched/fair: Factorize attach/detach entity").

This patch moves task se depth maintenance from attach_entity_cfs_rq() to set_task_rq(), which is called whenever the task's CPU or cgroup changes, so the depth will always be correct.

This patch is preparation for the next patch.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20220818124805.601-2-zhouchengming@bytedance.com
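For context, the invariant being maintained is that a scheduling entity's depth equals its parent group entity's depth plus one, and zero when the task sits in the root group. The sketch below is a simplified user-space model of that invariant, not the kernel diff: the struct layout and the names group_se and update_task_se_depth are hypothetical stand-ins for the depth refresh that set_task_rq() now performs on every CPU/cgroup change.

	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Simplified stand-ins for the kernel's sched_entity / task_group;
	 * field and function names here are hypothetical. */
	struct sched_entity {
		struct sched_entity *parent;  /* group se one level up, NULL at root */
		int depth;                    /* nesting level in the group hierarchy */
	};

	struct task_group {
		struct sched_entity *group_se;  /* NULL for the root task group */
	};

	/* The idea the patch moves into set_task_rq(): derive the task se's
	 * depth from its (new) parent group se, so it is refreshed on every
	 * CPU or cgroup change, even for !fair tasks. */
	static void update_task_se_depth(struct sched_entity *se, struct task_group *tg)
	{
		se->parent = tg->group_se;
		se->depth = tg->group_se ? tg->group_se->depth + 1 : 0;
	}

	int main(void)
	{
		struct task_group root = { .group_se = NULL };
		struct sched_entity group_a_se = { .parent = NULL, .depth = 0 };
		struct task_group group_a = { .group_se = &group_a_se };

		struct sched_entity task_se = { 0 };

		/* Task attached to child group A: depth is parent depth + 1. */
		update_task_se_depth(&task_se, &group_a);
		assert(task_se.depth == 1);

		/* Task moved back to the root group: depth drops to 0. */
		update_task_se_depth(&task_se, &root);
		assert(task_se.depth == 0);

		printf("depth invariant holds\n");
		return 0;
	}

Doing this update in one place that is reached on both CPU and cgroup changes is what removes the need for the fair-class attach path to fix up the depth after the fact.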