author		Tejun Heo <tj@kernel.org>	2025-04-29 08:40:10 -1000
committer	Tejun Heo <tj@kernel.org>	2025-04-29 08:40:10 -1000
commit		a77d10d032f511b027d80ef0640309a73e2561fa (patch)
tree		4f7d9d85a67f59994263273d9434508242b8d286 /kernel/sched/ext.c
parent		48e12677738663c6ac7be6abe7b216ec74a5b6e6 (diff)
sched_ext: Avoid NULL scx_root deref through SCX_HAS_OP()
SCX_HAS_OP() tests the scx_root->has_op bitmap. The bitmap currently lives in a
statically allocated struct scx_sched; it is initialized while loading the BPF
scheduler and cleared while unloading, and can thus be tested at any time.
However, scx_root will be switched to dynamic allocation and thus won't always
be dereferenceable.
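For reference, a hedged sketch of the macro's shape (illustrative only; the
exact definitions live in kernel/sched/ext.c and may differ): SCX_HAS_OP()
derives a bit index from the callback's offset within struct sched_ext_ops and
tests that bit in the has_op bitmap hanging off the scx_sched instance, so its
first argument must be dereferenceable:

	/* illustrative approximation, not the exact kernel macros */
	#define SCX_OP_IDX(op)						\
		(offsetof(struct sched_ext_ops, op) / sizeof(void (*)(void)))
	#define SCX_HAS_OP(sch, op)	test_bit(SCX_OP_IDX(op), (sch)->has_op)

Once scx_root is dynamically allocated, an unguarded
SCX_HAS_OP(scx_root, cgroup_exit) dereferences NULL whenever no scheduler is
loaded.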
Most usages of SCX_HAS_OP() are already protected by scx_enabled(), either
directly or indirectly (e.g. through a task which is on SCX). However, there
are a couple of places that could try to dereference a NULL scx_root. Update
them so that scx_root is guaranteed to be valid before SCX_HAS_OP() is called.
- In handle_hotplug(), test whether scx_root is NULL before doing anything
else. This is safe because scx_root updates will be protected by
cpus_read_lock().
- In scx_tg_offline(), test scx_cgroup_enabled before invoking SCX_HAS_OP(),
which should guarantee that scx_root won't turn NULL. This is also in line
with other cgroup operations. As the code path is synchronized against
scx_cgroup_init/exit() through scx_cgroup_rwsem, this shouldn't cause any
behavior differences (see the locking sketch after this list).
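To illustrate the synchronization the second point relies on, here is a
minimal sketch, assuming (for illustration only; these bodies are not the
actual ext.c code) that the cgroup enable/disable paths flip
scx_cgroup_enabled while holding scx_cgroup_rwsem for write:

	/* sketch under stated assumptions -- not the actual ext.c bodies */
	DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem);
	static bool scx_cgroup_enabled;

	static void scx_cgroup_exit_sketch(void)	/* hypothetical name */
	{
		percpu_down_write(&scx_cgroup_rwsem);	/* excludes scx_tg_offline() */
		scx_cgroup_enabled = false;		/* readers now skip SCX_HAS_OP() */
		percpu_up_write(&scx_cgroup_rwsem);
		/* only after this may scx_root be torn down */
	}

Because scx_tg_offline() holds scx_cgroup_rwsem for read across both its
scx_cgroup_enabled test and the SCX_CALL_OP() invocation, it can never observe
the flag set while scx_root is going away.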
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Acked-by: Changwoo Min <changwoo@igalia.com>
Diffstat (limited to 'kernel/sched/ext.c')
-rw-r--r--	kernel/sched/ext.c	11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 363890f38e55..784bdf12db44 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3498,6 +3498,14 @@ static void handle_hotplug(struct rq *rq, bool online)
 
 	atomic_long_inc(&scx_hotplug_seq);
 
+	/*
+	 * scx_root updates are protected by cpus_read_lock() and will stay
+	 * stable here. Note that we can't depend on scx_enabled() test as the
+	 * hotplug ops need to be enabled before __scx_enabled is set.
+	 */
+	if (!scx_root)
+		return;
+
 	if (scx_enabled())
 		scx_idle_update_selcpu_topology(&scx_root->ops);
 
@@ -3994,7 +4002,8 @@ void scx_tg_offline(struct task_group *tg)
 
 	percpu_down_read(&scx_cgroup_rwsem);
 
-	if (SCX_HAS_OP(scx_root, cgroup_exit) && (tg->scx_flags & SCX_TG_INITED))
+	if (scx_cgroup_enabled && SCX_HAS_OP(scx_root, cgroup_exit) &&
+	    (tg->scx_flags & SCX_TG_INITED))
 		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_exit, NULL, tg->css.cgroup);
 
 	tg->scx_flags &= ~(SCX_TG_ONLINE | SCX_TG_INITED);