path: root/kernel/fork.c
author	Tejun Heo <tj@kernel.org>	2015-05-13 16:35:17 -0400
committer	Tejun Heo <tj@kernel.org>	2015-05-26 20:35:00 -0400
commit	d59cfc09c32a2ae31f1c3bc2983a0cd79afb3f14 (patch)
tree	077533cef8f5e16c8f7fd65d7e255d75828f3820 /kernel/fork.c
parent	7d7efec368d537226142cbe559f45797f18672f9 (diff)
sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem
The cgroup side of threadgroup locking uses signal_struct->group_rwsem to synchronize against threadgroup changes. This per-process rwsem adds small overhead to the thread creation, exit and exec paths, forces cgroup code paths to do a lock-verify-unlock-retry dance in a couple of places, and makes it impossible to atomically perform operations across multiple processes.

This patch replaces signal_struct->group_rwsem with a global percpu_rwsem, cgroup_threadgroup_rwsem, which is cheaper on the reader side and contained in cgroups proper. The conversion is one-to-one.

This does make the writer side heavier and lowers the granularity; however, cgroup process migration is a fairly cold path, we do want to optimize thread operations over it, and cgroup migration operations don't take enough time for the lower granularity to matter.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
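For orientation, the replacement pattern looks roughly like the sketch below: thread-lifecycle paths (fork, exit, exec) take the global percpu_rwsem for read, while cgroup process migration takes it for write. This is a simplified, hedged sketch rather than the literal patch; the cgroup_threadgroup_change_begin()/end() helpers reflect the direction of this series, but the writer-side wrapper names are illustrative only.

#include <linux/percpu-rwsem.h>
#include <linux/sched.h>

/* Global lock replacing the per-signal_struct group_rwsem: cheap for
 * readers (per-CPU bookkeeping), heavier for the rare writer. */
extern struct percpu_rw_semaphore cgroup_threadgroup_rwsem;

/* Reader side: bracket thread creation/exit/exec sections. */
static inline void cgroup_threadgroup_change_begin(struct task_struct *tsk)
{
	percpu_down_read(&cgroup_threadgroup_rwsem);
}

static inline void cgroup_threadgroup_change_end(struct task_struct *tsk)
{
	percpu_up_read(&cgroup_threadgroup_rwsem);
}

/* Writer side (hypothetical wrapper names): cgroup migration excludes
 * all concurrent threadgroup changes system-wide at once. */
static void example_migration_lock(void)
{
	percpu_down_write(&cgroup_threadgroup_rwsem);
}

static void example_migration_unlock(void)
{
	percpu_up_write(&cgroup_threadgroup_rwsem);
}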
Diffstat (limited to 'kernel/fork.c')
-rw-r--r--	kernel/fork.c	4
1 file changed, 0 insertions, 4 deletions
diff --git a/kernel/fork.c b/kernel/fork.c
index 03c1eaaa6ef5..9531275e12a9 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1144,10 +1144,6 @@ static int copy_signal(unsigned long clone_flags, struct task_struct *tsk)
 	tty_audit_fork(sig);
 	sched_autogroup_fork(sig);
 
-#ifdef CONFIG_CGROUPS
-	init_rwsem(&sig->group_rwsem);
-#endif
-
 	sig->oom_score_adj = current->signal->oom_score_adj;
 	sig->oom_score_adj_min = current->signal->oom_score_adj_min;
 
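The hunk above only removes the per-process rwsem initialization from copy_signal(); the shape of the fork-path synchronization stays the same. As a rough, non-literal sketch (the function name below is hypothetical), the forking task still brackets its threadgroup-changing section with threadgroup_change_begin()/end(), which after this series resolve to the global percpu_rwsem read lock instead of sig->group_rwsem:

/* Hypothetical illustration, not the actual copy_process() code. */
static int example_fork_critical_section(struct task_struct *p)
{
	/* Previously this ended up in down_read(&current->signal->group_rwsem);
	 * after this patch it maps to percpu_down_read(&cgroup_threadgroup_rwsem),
	 * so copy_signal() has nothing cgroup-related left to initialize. */
	threadgroup_change_begin(current);

	/* ... link p into the thread group, cgroup_post_fork(p), ... */

	threadgroup_change_end(current);
	return 0;
}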