| author | jun qian <qianjun.kernel@gmail.com> | 2020-10-15 14:48:46 +0800 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2020-10-29 11:00:28 +0100 |
| commit | b9c88f752268383beff0d56e50d52b8ae62a02f8 | |
| tree | 6cc27e8fb7f7b2b60174467de59f56b471886e7b | |
| parent | 23859ae44402f4d935b9ee548135dd1e65e2cbf4 | |
sched/fair: Improve the accuracy of sched_stat_wait statistics
When sched_schedstat changes from 0 to 1, some sched entities may
already be on the runqueue, so their se->statistics.wait_start is
still 0. The computed delta, rq_clock(rq_of(cfs_rq)) -
se->statistics.wait_start, is then wrong. We need to avoid this
scenario (a standalone sketch of the failure mode follows the diff
below).
Signed-off-by: jun qian <qianjun.kernel@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lkml.kernel.org/r/20201015064846.19809-1-qianjun.kernel@gmail.com
| -rw-r--r-- | kernel/sched/fair.c | 9 |
1 file changed, 9 insertions, 0 deletions
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e38378c..b9368d123451 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	if (!schedstat_enabled())
 		return;
 
+	/*
+	 * When the sched_schedstat changes from 0 to 1, some sched se
+	 * maybe already in the runqueue, the se->statistics.wait_start
+	 * will be 0.So it will let the delta wrong. We need to avoid this
+	 * scenario.
+	 */
+	if (unlikely(!schedstat_val(se->statistics.wait_start)))
+		return;
+
 	delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);
 
 	if (entity_is_task(se)) {
```
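For context, a minimal user-space sketch of the failure mode the guard prevents. This is not kernel code; the clock value and output are purely illustrative, and the variable names only mirror the kernel's:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical values, in nanoseconds. */
	uint64_t rq_clock   = 5000000000ULL; /* current runqueue clock */
	uint64_t wait_start = 0;             /* entity enqueued before schedstat flipped to 1 */

	/* Without the guard, delta is the entire clock value: ~5 s of bogus "wait". */
	uint64_t delta = rq_clock - wait_start;
	printf("bogus wait time: %llu ns\n", (unsigned long long)delta);

	/* The patch's guard: a zero wait_start means tracking never started, so skip. */
	if (!wait_start)
		printf("guard fires: drop the sample instead of recording it\n");

	return 0;
}
```

The same reasoning applies in the kernel: schedstat_val(se->statistics.wait_start) being zero marks an entity that was enqueued while kernel.sched_schedstat was still 0, so update_stats_wait_end() now returns early rather than folding a near-full clock value into the wait statistics.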
