author    Martin KaFai Lau <martin.lau@kernel.org>  2022-09-01 12:11:45 -0700
committer Martin KaFai Lau <martin.lau@kernel.org>  2022-09-01 12:16:23 -0700
commit    23d86c8e02e55e511cd37e7ef2280a0030750954
tree      9e9824013d4e2ac7974944f22f2f90dfab154814 /scripts/bpf_doc.py
parent    c9ae8c966f05c85c5928c8f1790b13b71cc5ccd5
parent    73b97bc78b32eb739a7dd3394fa3981e8021c0ef
Merge branch 'Use this_cpu_xxx for preemption-safety'
Hou Tao says:
====================
From: Hou Tao <houtao1@huawei.com>
Hi,
The patchset aims to make the updates of the per-cpu prog->active and
per-cpu bpf_task_storage_busy counters preemption-safe. The problem is
that on some architectures (e.g. arm64), __this_cpu_{inc|dec|inc_return}
are neither preemption-safe nor IRQ-safe, so under a fully preemptible
kernel the concurrent updates of these per-cpu variables may be
interleaved and their final values may not be zero.
Patch 1 & 2 use the preemption-safe per-cpu helpers to manipulate
prog->active and bpf_task_storage_busy. Patch 3 & 4 add a test case in
map_tests to show that concurrent updates of the per-cpu
bpf_task_storage_busy counter through __this_cpu_{inc|dec} are not atomic.
Comments are always welcome.
Regards,
Tao
Change Log:
v2:
* Patch 1: update commit message to indicate the problem is only
possible for fully preemptible kernel
* Patch 2: a new patch which fixes the problem for prog->active
* Patch 3 & 4: move the test to test_maps and make it depend on CONFIG_PREEMPT
v1: https://lore.kernel.org/bpf/20220829142752.330094-1-houtao@huaweicloud.com/
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>