author     Peter Zijlstra <peterz@infradead.org>    2021-03-03 16:45:41 +0100
committer  Peter Zijlstra <peterz@infradead.org>    2021-05-12 11:43:27 +0200
commit     9ef7e7e33bcdb57be1afb28884053c28b5f05240 (patch)
tree       40e43fa4c6d82adf7cd39fbc1f5dfb868701165b /kernel/sched/fair.c
parent     9edeaea1bc452372718837ed2ba775811baf1ba1 (diff)
sched: Optimize rq_lockp() usage
rq_lockp() includes a static_branch(), which is asm-goto, which is
asm volatile, which defeats regular CSE. This means that:

	if (!static_branch(&foo))
		return simple;

	if (static_branch(&foo) && cond)
		return complex;

doesn't fold and we get horrible code. Introduce __rq_lockp(), a
variant without the static_branch().
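
For context, a minimal sketch of the two accessors, modelled on the
core-scheduling code in kernel/sched/sched.h (helper and field names
follow that file but are reproduced from memory, so treat this as an
illustration rather than the exact patch):

	/*
	 * Fast path keyed on a static branch: the asm-goto makes each
	 * call site opaque to CSE, but the core-sched-disabled case is
	 * patched to straight-line code.
	 */
	static inline raw_spinlock_t *rq_lockp(struct rq *rq)
	{
		if (sched_core_enabled(rq))	/* contains static_branch() */
			return &rq->core->__lock;

		return &rq->__lock;
	}

	/*
	 * Same lock selection without the static branch: a plain load
	 * and test the compiler is free to common-subexpression away
	 * across repeated calls.
	 */
	static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
	{
		if (rq->core_enabled)
			return &rq->core->__lock;

		return &rq->__lock;
	}

The trade-off: rq_lockp() keeps the static-branch fast path for hot
call sites, while __rq_lockp() is preferable where the result feeds a
check, such as lockdep_is_held(), that the compiler should be able to
fold.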
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Don Hiatt <dhiatt@digitalocean.com>
Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210422123308.316696988@infradead.org
Diffstat (limited to 'kernel/sched/fair.c')
-rw-r--r--  kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e50bd75067d5..18960d00708a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1107,7 +1107,7 @@ struct numa_group {
 static struct numa_group *deref_task_numa_group(struct task_struct *p)
 {
 	return rcu_dereference_check(p->numa_group, p == current ||
-		(lockdep_is_held(rq_lockp(task_rq(p))) && !READ_ONCE(p->on_cpu)));
+		(lockdep_is_held(__rq_lockp(task_rq(p))) && !READ_ONCE(p->on_cpu)));
 }
 
 static struct numa_group *deref_curr_numa_group(struct task_struct *p)
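
The fair.c hunk above is typical of the series: a lockdep expression
only needs the correct lock identity, not the static-branch fast path,
so it can use the CSE-friendly variant. A hypothetical wrapper in the
same spirit (the name lockdep_assert_rq_held is assumed for
illustration, not taken from this patch):

	/* Hypothetical helper; name assumed for illustration. */
	static inline void lockdep_assert_rq_held(struct rq *rq)
	{
		lockdep_assert_held(__rq_lockp(rq));
	}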