| author | Kuniyuki Iwashima <kuniyu@google.com> | 2025-09-19 08:35:30 +0000 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2025-09-22 11:38:43 -0700 |
| commit | bb6f9445666e1ed9f39c805e153243a65ea05257 (patch) | |
| tree | c1b999f046c844921304aa9fcccc463bc06023c8 | |
| parent | 0ac44301e3bf4f5abc892ab530188ca95c61e59f (diff) | |
tcp: Remove redundant sk_unhashed() in inet_unhash().
inet_unhash() checks sk_unhashed() twice: once at the entry and once
more after locking the ehash/lhash bucket.

The former was somehow added redundantly by commit 4f9bf2a2f5aa ("tcp:
Don't acquire inet_listen_hashbucket::lock with disabled BH.").
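
For orientation, here is a condensed sketch of the shape being described.
It is not verbatim kernel source: the listen/established split is folded
away, and inet_unhash_bucket_lock() is a hypothetical stand-in for the
ehash/lhash2 bucket-lock lookup.

```c
/*
 * Condensed sketch of inet_unhash() before this patch (not verbatim
 * kernel source).  inet_unhash_bucket_lock() is a hypothetical helper
 * standing in for the ehash/lhash2 bucket-lock lookup, and the listen
 * vs. established distinction is folded away.
 */
void inet_unhash(struct sock *sk)
{
	spinlock_t *lock;

	if (sk_unhashed(sk))		/* 1st check: lockless, at the entry */
		return;

	lock = inet_unhash_bucket_lock(sk);
	spin_lock_bh(lock);
	if (sk_unhashed(sk)) {		/* 2nd check: under the bucket lock */
		spin_unlock_bh(lock);
		return;
	}
	/* ... actually remove sk from the hash bucket ... */
	spin_unlock_bh(lock);
}
```
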
inet_unhash() is called for a full socket from 4 places, and it is
always called under lock_sock() or before the socket is published to
other threads:
1. __sk_prot_rehash() (sketched after this list)
-> called from inet_sk_reselect_saddr(), which has
lockdep_sock_is_held()
2. sk_common_release()
-> called when inet_create() or inet6_create() fails, so the
socket is not yet published
3. tcp_set_state()
-> calls tcp_call_bpf_2arg(), and tcp_call_bpf() has
sock_owned_by_me()
4. inet_ctl_sock_create()
-> creates a kernel socket and unhashes it immediately, but a TCP
socket is not hashed in sock_create_kern() (only SOCK_RAW is)
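
To make the locking context of call path 1 concrete, here is a condensed
sketch (not verbatim kernel source; only the parts relevant to the
argument are kept):

```c
/*
 * Condensed sketch of call path 1 (not verbatim kernel source).
 * inet_sk_reselect_saddr() runs with the socket lock held (it carries a
 * lockdep_sock_is_held() annotation), so the unhash/hash pair below
 * cannot race with any other hashing of the same socket.
 */
static void __sk_prot_rehash(struct sock *sk)
{
	sk->sk_prot->unhash(sk);	/* -> inet_unhash() for TCP */
	sk->sk_prot->hash(sk);		/* re-insert under the new address */
}
```
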
So we do not need to check sk_unhashed() twice, before and after taking
the ehash/lhash lock, in inet_unhash().

Let's remove the 2nd one.
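
Under the same simplifications as the earlier sketch, the resulting shape
keeps only the lockless test at the entry:

```c
/*
 * Sketch of inet_unhash() after this patch (same simplifications and
 * hypothetical inet_unhash_bucket_lock() helper as above).  The single
 * entry check is enough because, per the call-site analysis, no other
 * thread can hash or unhash the socket in between.
 */
void inet_unhash(struct sock *sk)
{
	spinlock_t *lock;

	if (sk_unhashed(sk))		/* the only remaining check */
		return;

	lock = inet_unhash_bucket_lock(sk);
	spin_lock_bh(lock);
	/* ... remove sk from the hash bucket ... */
	spin_unlock_bh(lock);
}
```
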
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20250919083706.1863217-4-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
