author Paul E. McKenney <paulmck@kernel.org> 2023-10-18 15:28:32 -0700
committer Daniel Borkmann <daniel@iogearbox.net> 2023-10-24 14:26:07 +0200
commit 06646da01458682023321bdc7553b8140e95d077 (patch)
tree31454fa3d9494e804d54f3a6fc986be93ebca2dc /net/sched
parentd35381aa73f7e1e8b25f3ed5283287a64d9ddff5 (diff)
bpf: Fold smp_mb__before_atomic() into atomic_set_release()
The bpf_user_ringbuf_drain() BPF_CALL function uses an atomic_set() immediately preceded by smp_mb__before_atomic() so as to order storing of ring-buffer consumer and producer positions prior to the atomic_set() call's clearing of the ->busy flag, as follows:

	smp_mb__before_atomic();
	atomic_set(&rb->busy, 0);

Although this works on current architectures and implementations, and although it only needs to order prior writes against a later write, it does so by accident: smp_mb__before_atomic() is guaranteed to work only with read-modify-write atomic operations, and not at all with things like atomic_set() and atomic_read().

Note especially that smp_mb__before_atomic() will not, repeat *not*, order the prior write to "a" before the subsequent non-read-modify-write atomic read from "b", even on strongly ordered systems such as x86:

	WRITE_ONCE(a, 1);
	smp_mb__before_atomic();
	r1 = atomic_read(&b);

Therefore, replace the smp_mb__before_atomic() and atomic_set() with atomic_set_release() as follows:

	atomic_set_release(&rb->busy, 0);

This is no slower (and sometimes is faster) than the original, and also provides a formal guarantee of ordering that the original lacks.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/ec86d38e-cfb4-44aa-8fdb-6c925922d93c@paulmck-laptop
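For readers less familiar with the kernel's atomic API, the following is a minimal userspace sketch of the same release-store pattern, written against C11 <stdatomic.h> rather than the kernel's atomic_t helpers. The field names (busy, consumer_pos, producer_pos) follow the commit message; the struct layout and the drain_done() function are illustrative assumptions, not the actual bpf_user_ringbuf_drain() code.

	/*
	 * Illustrative sketch only: C11 analogue of the release-store
	 * pattern described in the commit message.  Not kernel code.
	 */
	#include <stdatomic.h>
	#include <stdint.h>

	struct rb {
		uint64_t consumer_pos;	/* plain stores, ordered by the release below */
		uint64_t producer_pos;
		atomic_int busy;	/* nonzero while a drain is in progress */
	};

	static void drain_done(struct rb *rb, uint64_t consumer, uint64_t producer)
	{
		rb->consumer_pos = consumer;
		rb->producer_pos = producer;

		/*
		 * Analogue of atomic_set_release(&rb->busy, 0): the release
		 * ordering guarantees that the position updates above are
		 * visible before any other thread observes busy == 0.
		 */
		atomic_store_explicit(&rb->busy, 0, memory_order_release);
	}

The design point is the same as in the commit: a release store is both sufficient and formally guaranteed to order the earlier plain writes before the flag clear, whereas smp_mb__before_atomic() provides that guarantee only ahead of read-modify-write atomics.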
Diffstat (limited to 'net/sched')
0 files changed, 0 insertions, 0 deletions