author	Peter Zijlstra <peterz@infradead.org>	2017-05-11 18:16:06 +0200
committer	Ingo Molnar <mingo@kernel.org>	2017-09-29 19:35:11 +0200
commit	3d4b60d3e3dde6ea24e439000eb3b71078da81f1 (patch)
tree	98b441ccd2236ce2a8efebeb8bb961f2d45730ec /kernel/sched
parent	cef27403cbe98ebda0a32d43128dd60c309eb966 (diff)
sched/fair: Cure calc_cfs_shares() vs. reweight_entity()
Vincent reported that when running in a cgroup, his root cfs_rq->avg.load_avg dropped to 0 on task idle.

This is because reweight_entity() will now immediately propagate the weight change of the group entity to its cfs_rq, and, as it happens, our approximation (5) for calc_cfs_shares() results in 0 when the group is idle.

Avoid this by using the correct (3) as a lower bound on (5). This way the empty cgroup will slowly decay instead of instantly dropping to 0.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
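For context: (3) and (5) name the approximations documented in the comment block above calc_cfs_shares() in kernel/sched/fair.c. Sketched from that block (paraphrased, not a verbatim quote), the exact hierarchical group-entity weight and its two approximations are roughly:

                       tg->weight * grq->load.weight
    ge->load.weight = -------------------------------                (1)
                          \Sum grq->load.weight

                       tg->weight * grq->avg.load_avg
    ge->load.weight = --------------------------------               (3)
                               tg->load_avg

                                tg->weight * grq->load.weight
    ge->load.weight = ----------------------------------------------------  (5)
                       tg->load_avg - grq->avg.load_avg + grq->load.weight

(3) is the stable average-based form but ramps up slowly; (5) substitutes the instantaneous grq->load.weight so a waking group gains weight quickly, at the cost of collapsing to 0 the moment the group runqueue empties -- the transient this patch cures by clamping (5) with (3) via max().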
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	7
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dd565aeafc5a..63166a0ed854 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2763,11 +2763,10 @@ static long calc_cfs_shares(struct cfs_rq *cfs_rq)
 
 	tg_shares = READ_ONCE(tg->shares);
 
 	/*
-	 * This really should be: cfs_rq->avg.load_avg, but instead we use
-	 * cfs_rq->load.weight, which is its upper bound. This helps ramp up
-	 * the shares for small weight interactive tasks.
+	 * Because (5) drops to 0 when the cfs_rq is idle, we need to use (3)
+	 * as a lower bound.
 	 */
-	load = scale_load_down(cfs_rq->load.weight);
+	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
 
 	tg_weight = atomic_long_read(&tg->load_avg);
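To make the effect of the new lower bound concrete, here is a minimal user-space sketch of the share calculation. It is a toy model, not the kernel code: everything is a plain long, scale_load_down() is treated as a no-op, and calc_shares() plus all the numbers are invented for illustration.

#include <stdio.h>

/*
 * Toy model of the patched calc_cfs_shares(): simplified types and
 * made-up values, for illustration only.
 */
static long calc_shares(long tg_shares, long tg_load_avg,
			long grq_load_weight, long grq_load_avg)
{
	/*
	 * The fix: clamp the instantaneous weight (the (5) input) with
	 * the decaying average (the (3) input), so an idle group whose
	 * load.weight is already 0 keeps its slowly decaying load_avg
	 * instead of dropping to 0 at once.
	 */
	long load = grq_load_weight > grq_load_avg ? grq_load_weight
						   : grq_load_avg;

	/*
	 * Replace this runqueue's contribution to the group-wide sum
	 * with 'load', loosely mirroring the tg_load_avg_contrib
	 * adjustment the kernel does to keep tg_weight >= load.
	 */
	long tg_weight = tg_load_avg - grq_load_avg + load;

	long shares = tg_shares * load;
	if (tg_weight)
		shares /= tg_weight;

	return shares;
}

int main(void)
{
	/* Idle group: no queued weight, but a decaying average of 512. */
	printf("with max(): %ld\n", calc_shares(1024, 2048, 0, 512));
	/*
	 * Prints 256. Without the max(), the same inputs would yield
	 * load == 0 and therefore shares == 0: the instant drop of
	 * cfs_rq->avg.load_avg that Vincent reported.
	 */
	return 0;
}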