2023-10-01  net: prevent address rewrite in kernel_bind()  (Jordan Rife)
Similar to the change in commit 0bdf399342c5 ("net: Avoid address overwrite in kernel_connect"), BPF hooks run on bind may rewrite the address passed to kernel_bind(). This change:

1) Makes a copy of the bind address in kernel_bind() to insulate callers.
2) Replaces direct calls to sock->ops->bind() in net with kernel_bind().

Link: https://lore.kernel.org/netdev/20230912013332.2048422-1-jrife@google.com/
Fixes: 4fbac77d2d09 ("bpf: Hooks for sys_bind")
Cc: stable@vger.kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jordan Rife <jrife@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
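A minimal sketch of the copy described above; the actual patch may differ in details (the on-stack sockaddr_storage and the lack of extra validation here are assumptions):

    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/string.h>

    int kernel_bind(struct socket *sock, struct sockaddr *addr, int addrlen)
    {
            struct sockaddr_storage address;

            /* Work on a private copy so that a BPF bind hook which rewrites
             * the address cannot modify the caller's buffer.
             */
            memcpy(&address, addr, addrlen);

            return sock->ops->bind(sock, (struct sockaddr *)&address, addrlen);
    }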
2023-10-01  net: prevent rewrite of msg_name in sock_sendmsg()  (Jordan Rife)
Callers of sock_sendmsg(), and similarly kernel_sendmsg(), in kernel space may observe their value of msg_name change in cases where BPF sendmsg hooks rewrite the send address. This has been confirmed to break NFS mounts running in UDP mode and has the potential to break other systems.

This patch:

1) Creates a new function called __sock_sendmsg() with same logic as the old sock_sendmsg() function.
2) Replaces calls to sock_sendmsg() made by __sys_sendto() and __sys_sendmsg() with __sock_sendmsg() to avoid an unnecessary copy, as these system calls are already protected.
3) Modifies sock_sendmsg() so that it makes a copy of msg_name if present before passing it down the stack to insulate callers from changes to the send address.

Link: https://lore.kernel.org/netdev/20230912013332.2048422-1-jrife@google.com/
Fixes: 1cedee13d25a ("bpf: Hooks for sys_sendmsg")
Cc: stable@vger.kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jordan Rife <jrife@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
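Schematically, the insulation in point 3 could look like the following sketch (the saved-pointer handling and local names are assumptions, not copied from the patch):

    #include <linux/net.h>
    #include <linux/socket.h>
    #include <linux/string.h>

    int sock_sendmsg(struct socket *sock, struct msghdr *msg)
    {
            struct sockaddr_storage address;
            void *save_name = msg->msg_name;
            int ret;

            /* Give lower layers (and any BPF sendmsg hook) a private copy
             * of the destination address.
             */
            if (msg->msg_name) {
                    memcpy(&address, msg->msg_name, msg->msg_namelen);
                    msg->msg_name = &address;
            }

            ret = __sock_sendmsg(sock, msg);
            msg->msg_name = save_name;   /* restore the caller's pointer */
            return ret;
    }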
2023-10-01  net: replace calls to sock->ops->connect() with kernel_connect()  (Jordan Rife)
commit 0bdf399342c5 ("net: Avoid address overwrite in kernel_connect") ensured that kernel_connect() will not overwrite the address parameter in cases where BPF connect hooks perform an address rewrite. This change replaces direct calls to sock->ops->connect() in net with kernel_connect() to make these calls safe.

Link: https://lore.kernel.org/netdev/20230912013332.2048422-1-jrife@google.com/
Fixes: d74bad4e74ee ("bpf: Hooks for sys_connect")
Cc: stable@vger.kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jordan Rife <jrife@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net: annotate data-races around sk->sk_dst_pending_confirm  (Eric Dumazet)
This field can be read or written without socket lock being held. Add annotations to avoid load-store tearing. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
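The annotations in question follow the usual READ_ONCE()/WRITE_ONCE() pattern, roughly as below (hypothetical helper names, for illustration only):

    #include <net/sock.h>

    /* Writer: may run without the socket lock held. */
    static void sk_set_dst_pending_confirm(struct sock *sk, u32 val)
    {
            WRITE_ONCE(sk->sk_dst_pending_confirm, val);
    }

    /* Reader: the paired READ_ONCE() avoids load/store tearing. */
    static bool sk_dst_confirm_pending(const struct sock *sk)
    {
            return READ_ONCE(sk->sk_dst_pending_confirm);
    }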
2023-10-01  net: lockless implementation of SO_TXREHASH  (Eric Dumazet)
sk->sk_txrehash readers are already safe against concurrent change of this field. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net: implement lockless SO_MAX_PACING_RATE  (Eric Dumazet)
SO_MAX_PACING_RATE setsockopt() does not need to hold the socket lock, because sk->sk_pacing_rate readers can run fine if the value is changed by other threads, after adding READ_ONCE() accessors. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
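As a rough illustration of the lockless store pattern (a hypothetical helper; the real code lives in sk_setsockopt() and may differ):

    #include <net/sock.h>

    static void sk_set_max_pacing_rate(struct sock *sk, unsigned long rate)
    {
            /* No lock_sock(): readers use READ_ONCE() on these fields. */
            WRITE_ONCE(sk->sk_max_pacing_rate, rate);
            WRITE_ONCE(sk->sk_pacing_rate,
                       min(rate, READ_ONCE(sk->sk_pacing_rate)));
    }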
2023-10-01  net: lockless implementation of SO_BUSY_POLL, SO_PREFER_BUSY_POLL, SO_BUSY_POLL_BUDGET  (Eric Dumazet)
Setting sk->sk_ll_usec, sk_prefer_busy_poll and sk_busy_poll_budget does not require the socket lock; readers are lockless anyway.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net: lockless SO_{TYPE|PROTOCOL|DOMAIN|ERROR } setsockopt()  (Eric Dumazet)
These options cannot be set and return -ENOPROTOOPT; there is no need to acquire the socket lock.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net: lockless SO_PASSCRED, SO_PASSPIDFD and SO_PASSSEC  (Eric Dumazet)
sock->flags are atomic, no need to hold the socket lock in sk_setsockopt() for SO_PASSCRED, SO_PASSPIDFD and SO_PASSSEC. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net: implement lockless SO_PRIORITY  (Eric Dumazet)
This is a followup of 8bf43be799d4 ("net: annotate data-races around sk->sk_priority"). sk->sk_priority can be read and written without holding the socket lock. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Wenjia Zhang <wenjia@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  openvswitch: reduce stack usage in do_execute_actions  (Ilya Maximets)
do_execute_actions() function can be called recursively multiple times while executing actions that require pipeline forking or recirculations. It may also be re-entered multiple times if the packet leaves openvswitch module and re-enters it through a different port.

Currently, there is a 256-byte array allocated on stack in this function that is supposed to hold NSH header. Compilers tend to pre-allocate that space right at the beginning of the function:

  a88:   48 81 ec b0 01 00 00    sub    $0x1b0,%rsp

NSH is not a very common protocol, but the space is allocated on every recursive call or re-entry multiplying the wasted stack space.

Move the stack allocation to push_nsh() function that is only used if NSH actions are actually present. push_nsh() is also a simple function without a possibility for re-entry, so the stack is returned right away.

With this change the preallocated space is reduced by 256 B per call:

  b18:   48 81 ec b0 00 00 00    sub    $0xb0,%rsp

Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Eelco Chaudron <echaudro@redhat.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
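The reshuffle amounts to something like the following sketch; the function signature, include list and helper names here are assumptions rather than the exact patch:

    #include <linux/skbuff.h>
    #include <net/netlink.h>
    #include <net/nsh.h>

    /* The 256-byte scratch buffer now lives only in this frame, so its
     * cost is paid only while an NSH push action actually runs.
     */
    static int push_nsh(struct sk_buff *skb, const struct nlattr *attr)
    {
            u8 buffer[NSH_HDR_MAX_LEN];
            struct nshhdr *nh = (struct nshhdr *)buffer;
            int err;

            err = nsh_hdr_from_nlattr(attr, nh, NSH_HDR_MAX_LEN);
            if (err)
                    return err;

            return nsh_push(skb, nh);
    }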
2023-10-01  ipv4, ipv6: Fix handling of transhdrlen in __ip{,6}_append_data()  (David Howells)
Including the transhdrlen in length is a problem when the packet is partially filled (e.g. something like send(MSG_MORE) happened previously) when appending to an IPv4 or IPv6 packet as we don't want to repeat the transport header or account for it twice. This can happen under some circumstances, such as splicing into an L2TP socket.

The symptom observed is a warning in __ip6_append_data():

  WARNING: CPU: 1 PID: 5042 at net/ipv6/ip6_output.c:1800 __ip6_append_data.isra.0+0x1be8/0x47f0 net/ipv6/ip6_output.c:1800

that occurs when MSG_SPLICE_PAGES is used to append more data to an already partially occupied skbuff. The warning occurs when 'copy' is larger than the amount of data in the message iterator. This is because the requested length includes the transport header length when it shouldn't.

This can be triggered by, for example:

  sfd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_L2TP);
  bind(sfd, ...);                      // ::1
  connect(sfd, ...);                   // ::1 port 7
  send(sfd, buffer, 4100, MSG_MORE);
  sendfile(sfd, dfd, NULL, 1024);

Fix this by only adding transhdrlen into the length if the write queue is empty in l2tp_ip6_sendmsg(), analogously to how UDP does things. l2tp_ip_sendmsg() looks like it won't suffer from this problem as it builds the UDP packet itself.

Fixes: a32e0eec7042 ("l2tp: introduce L2TPv3 IP encapsulation support for IPv6")
Reported-by: syzbot+62cbf263225ae13ff153@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/0000000000001c12b30605378ce8@google.com/
Suggested-by: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
cc: "David S. Miller" <davem@davemloft.net>
cc: David Ahern <dsahern@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: netdev@vger.kernel.org
cc: bpf@vger.kernel.org
cc: syzkaller-bugs@googlegroups.com
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  neighbour: fix data-races around n->output  (Eric Dumazet)
n->output field can be read locklessly, while a writer might change the pointer concurrently. Add missing annotations to prevent load-store tearing. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net: l2tp_eth: use generic dev->stats fields  (Eric Dumazet)
Core networking has opt-in atomic variant of dev->stats, simply use DEV_STATS_INC(), DEV_STATS_ADD() and DEV_STATS_READ(). v2: removed @priv local var in l2tp_eth_dev_recv() (Simon) Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
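Usage of those helpers looks roughly like this (a hypothetical receive-path snippet, not the actual l2tp_eth code):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static void count_rx(struct net_device *dev, struct sk_buff *skb, bool error)
    {
            if (error) {
                    DEV_STATS_INC(dev, rx_errors);   /* atomic, no stats lock */
                    return;
            }
            DEV_STATS_INC(dev, rx_packets);
            DEV_STATS_ADD(dev, rx_bytes, skb->len);
    }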
2023-10-01  net: fix possible store tearing in neigh_periodic_work()  (Eric Dumazet)
While looking at a related syzbot report involving neigh_periodic_work(), I found that I forgot to add an annotation when deleting an RCU protected item from a list.

Readers use rcu_dereference(*np), so we need to use either rcu_assign_pointer() or WRITE_ONCE() on the writer side to prevent store tearing.

I use rcu_assign_pointer() to have lockdep support; this was the choice made in neigh_flush_dev().

Fixes: 767e97e1e0db ("neigh: RCU conversion of struct neighbour")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
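In schematic form, the write side now looks like this (a generic sketch with a made-up struct, not the neighbour code itself):

    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>

    struct item {
            struct item __rcu *next;
    };

    /* Caller holds 'lock'. Readers walk the list with rcu_dereference(),
     * so the unlink must publish the new pointer with rcu_assign_pointer()
     * (or at least WRITE_ONCE()) to avoid store tearing.
     */
    static void unlink_item(struct item __rcu **np, struct item *n,
                            spinlock_t *lock)
    {
            rcu_assign_pointer(*np,
                               rcu_dereference_protected(n->next,
                                                         lockdep_is_held(lock)));
    }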
2023-10-01  Merge tag 'for-net-2023-09-20' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth  (David S. Miller)
bluetooth pull request for net:

 - Fix handling of HCI_QUIRK_STRICT_DUPLICATE_FILTER
 - Fix handling of listen for ISO unicast
 - Fix build warnings
 - Fix leaking content of local_codecs
 - Add shutdown function for QCA6174
 - Delete unused hci_req_prepare_suspend() declaration
 - Fix hci_link_tx_to RCU lock usage
 - Avoid redundant authentication

Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net_sched: sch_fq: always garbage collect  (Eric Dumazet)
FQ performs garbage collection at enqueue time, and only if number of flows is above a given threshold, which is hit after the qdisc has been used a bit. Since an RB-tree traversal is needed to locate a flow, it makes sense to perform gc all the time, to keep rb-trees smaller. This reduces by 50 % average storage costs in FQ, and avoids 1 cache line miss at enqueue time when fast path added in prior patch can not be used. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net_sched: sch_fq: add fast path for mostly idle qdisc  (Eric Dumazet)
TCQ_F_CAN_BYPASS can be used by few qdiscs. The idea is that if we queue a packet to an empty qdisc, the following dequeue() would pick it immediately.

FQ can not use the generic TCQ_F_CAN_BYPASS code, because some additional checks need to be performed. This patch adds a similar fast path to FQ.

Most of the time, the qdisc is not throttled, and many packets can avoid bringing/touching at least four cache lines, and consuming 128 bytes of memory to store the state of a flow.

After this patch, netperf can send UDP packets about 13 % faster, and pktgen goes 30 % faster (when FQ is in the way), on a fast NIC. TCP traffic is also improved, thanks to a reduction of cache line misses. I have measured a 5 % increase of throughput on a tcp_rr intensive workload.

  tc -s -d qd sh dev eth1
  ...
  qdisc fq 8004: parent 1:2 limit 10000p flow_limit 100p buckets 1024
    orphan_mask 1023 quantum 3028b initial_quantum 15140b low_rate_threshold 550Kbit
    refill_delay 40ms timer_slack 10us horizon 10s horizon_drop
  Sent 5646784384 bytes 1985161 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   flows 122 (inactive 122 throttled 0)
   gc 0 highprio 0 fastpath 659990 throttled 27762 latency 8.57us

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net_sched: sch_fq: change how @inactive is tracked  (Eric Dumazet)
Currently, when one fq qdisc has no more packets to send, it can still have some flows stored in its RR lists (q->new_flows & q->old_flows).

This was a design choice, but what is a bit disturbing is that the inactive_flows counter does not include the count of empty flows in RR lists.

As next patch needs to know better if there are active flows, this change makes inactive_flows exact.

Before the patch, following command on an empty qdisc could have returned:

  lpaa17:~# tc -s -d qd sh dev eth1 | grep inactive
    flows 1322 (inactive 1316 throttled 0)
    flows 1330 (inactive 1325 throttled 0)
    flows 1193 (inactive 1190 throttled 0)
    flows 1208 (inactive 1202 throttled 0)

After the patch, we now have:

  lpaa17:~# tc -s -d qd sh dev eth1 | grep inactive
    flows 1322 (inactive 1322 throttled 0)
    flows 1330 (inactive 1330 throttled 0)
    flows 1193 (inactive 1193 throttled 0)
    flows 1208 (inactive 1208 throttled 0)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  net_sched: sch_fq: struct sched_data reorg  (Eric Dumazet)
q->flows can be often modified, and q->timer_slack is read mostly.

Exchange the two fields, so that the cache line containing quantum, initial_quantum, and other critical parameters stays clean (read-mostly).

Move q->watchdog next to q->stat_throttled.

Add comments explaining how the structure is split in three different parts.

pahole output before the patch:

  struct fq_sched_data {
        struct fq_flow_head new_flows;                  /*     0  0x10 */
        struct fq_flow_head old_flows;                  /*  0x10  0x10 */
        struct rb_root delayed;                         /*  0x20   0x8 */
        u64 time_next_delayed_flow;                     /*  0x28   0x8 */
        u64 ktime_cache;                                /*  0x30   0x8 */
        unsigned long unthrottle_latency_ns;            /*  0x38   0x8 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        struct fq_flow internal __attribute__((__aligned__(64))); /*  0x40  0x80 */
        /* XXX last struct has 16 bytes of padding */
        /* --- cacheline 3 boundary (192 bytes) --- */
        u32 quantum;                                    /*  0xc0   0x4 */
        u32 initial_quantum;                            /*  0xc4   0x4 */
        u32 flow_refill_delay;                          /*  0xc8   0x4 */
        u32 flow_plimit;                                /*  0xcc   0x4 */
        unsigned long flow_max_rate;                    /*  0xd0   0x8 */
        u64 ce_threshold;                               /*  0xd8   0x8 */
        u64 horizon;                                    /*  0xe0   0x8 */
        u32 orphan_mask;                                /*  0xe8   0x4 */
        u32 low_rate_threshold;                         /*  0xec   0x4 */
        struct rb_root *fq_root;                        /*  0xf0   0x8 */
        u8 rate_enable;                                 /*  0xf8   0x1 */
        u8 fq_trees_log;                                /*  0xf9   0x1 */
        u8 horizon_drop;                                /*  0xfa   0x1 */
        /* XXX 1 byte hole, try to pack */
  <bad> u32 flows;                                      /*  0xfc   0x4 */
        /* --- cacheline 4 boundary (256 bytes) --- */
        u32 inactive_flows;                             /* 0x100   0x4 */
        u32 throttled_flows;                            /* 0x104   0x4 */
        u64 stat_gc_flows;                              /* 0x108   0x8 */
        u64 stat_internal_packets;                      /* 0x110   0x8 */
        u64 stat_throttled;                             /* 0x118   0x8 */
        u64 stat_ce_mark;                               /* 0x120   0x8 */
        u64 stat_horizon_drops;                         /* 0x128   0x8 */
        u64 stat_horizon_caps;                          /* 0x130   0x8 */
        u64 stat_flows_plimit;                          /* 0x138   0x8 */
        /* --- cacheline 5 boundary (320 bytes) --- */
        u64 stat_pkts_too_long;                         /* 0x140   0x8 */
        u64 stat_allocation_errors;                     /* 0x148   0x8 */
  <bad> u32 timer_slack;                                /* 0x150   0x4 */
        /* XXX 4 bytes hole, try to pack */
        struct qdisc_watchdog watchdog;                 /* 0x158  0x48 */

        /* size: 448, cachelines: 7, members: 34 */
        /* sum members: 411, holes: 2, sum holes: 5 */
        /* padding: 32 */
        /* paddings: 1, sum paddings: 16 */
        /* forced alignments: 1 */
  };

pahole output after the patch:

  struct fq_sched_data {
        struct fq_flow_head new_flows;                  /*     0  0x10 */
        struct fq_flow_head old_flows;                  /*  0x10  0x10 */
        struct rb_root delayed;                         /*  0x20   0x8 */
        u64 time_next_delayed_flow;                     /*  0x28   0x8 */
        u64 ktime_cache;                                /*  0x30   0x8 */
        unsigned long unthrottle_latency_ns;            /*  0x38   0x8 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        struct fq_flow internal __attribute__((__aligned__(64))); /*  0x40  0x80 */
        /* XXX last struct has 16 bytes of padding */
        /* --- cacheline 3 boundary (192 bytes) --- */
        u32 quantum;                                    /*  0xc0   0x4 */
        u32 initial_quantum;                            /*  0xc4   0x4 */
        u32 flow_refill_delay;                          /*  0xc8   0x4 */
        u32 flow_plimit;                                /*  0xcc   0x4 */
        unsigned long flow_max_rate;                    /*  0xd0   0x8 */
        u64 ce_threshold;                               /*  0xd8   0x8 */
        u64 horizon;                                    /*  0xe0   0x8 */
        u32 orphan_mask;                                /*  0xe8   0x4 */
        u32 low_rate_threshold;                         /*  0xec   0x4 */
        struct rb_root *fq_root;                        /*  0xf0   0x8 */
        u8 rate_enable;                                 /*  0xf8   0x1 */
        u8 fq_trees_log;                                /*  0xf9   0x1 */
        u8 horizon_drop;                                /*  0xfa   0x1 */
        /* XXX 1 byte hole, try to pack */
  <good> u32 timer_slack;                               /*  0xfc   0x4 */
        /* --- cacheline 4 boundary (256 bytes) --- */
  <good> u32 flows;                                     /* 0x100   0x4 */
        u32 inactive_flows;                             /* 0x104   0x4 */
        u32 throttled_flows;                            /* 0x108   0x4 */
        /* XXX 4 bytes hole, try to pack */
        u64 stat_throttled;                             /* 0x110   0x8 */
  <better> struct qdisc_watchdog watchdog;              /* 0x118  0x48 */
        /* --- cacheline 5 boundary (320 bytes) was 32 bytes ago --- */
        u64 stat_gc_flows;                              /* 0x160   0x8 */
        u64 stat_internal_packets;                      /* 0x168   0x8 */
        u64 stat_ce_mark;                               /* 0x170   0x8 */
        u64 stat_horizon_drops;                         /* 0x178   0x8 */
        /* --- cacheline 6 boundary (384 bytes) --- */
        u64 stat_horizon_caps;                          /* 0x180   0x8 */
        u64 stat_flows_plimit;                          /* 0x188   0x8 */
        u64 stat_pkts_too_long;                         /* 0x190   0x8 */
        u64 stat_allocation_errors;                     /* 0x198   0x8 */

        /* Force padding: */
        u64 :64;
        u64 :64;
        u64 :64;
        u64 :64;

        /* size: 448, cachelines: 7, members: 34 */
        /* sum members: 411, holes: 2, sum holes: 5 */
        /* padding: 32 */
        /* paddings: 1, sum paddings: 16 */
        /* forced alignments: 1 */
  };

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-01  tcp: derive delack_max from rto_min  (Eric Dumazet)
While BPF allows to set icsk->icsk_delack_max and/or icsk->icsk_rto_min, we have an ip route attribute (RTAX_RTO_MIN) to be able to tune rto_min, but nothing to consequently adjust the max delayed ack, which varies from 40 ms to 200 ms (TCP_DELACK_{MIN|MAX}).

This makes RTAX_RTO_MIN of almost no practical use, unless customers are in big trouble.

Modern days datacenter communications want to set rto_min to ~5 ms, and the max delayed ack one jiffie smaller to avoid spurious retransmits.

After this patch, an "rto_min 5" route attribute will effectively lower max delayed ack timers to 4 ms.

Note in the following ss output, "rto:6 ... ato:4":

  $ ss -temoi dst XXXXXX
  State Recv-Q Send-Q      Local Address:Port        Peer Address:Port Process
  ESTAB 0      0    [2002:a05:6608:295::]:52950 [2002:a05:6608:297::]:41597
     ino:255134 sk:1001 <->
     skmem:(r0,rb1707063,t872,tb262144,f0,w0,o0,bl0,d0) ts sack cubic
     wscale:8,8 rto:6 rtt:0.02/0.002 ato:4 mss:4096 pmtu:4500 rcvmss:536
     advmss:4096 cwnd:10 bytes_sent:54823160 bytes_acked:54823121
     bytes_received:54823120 segs_out:1370582 segs_in:1370580
     data_segs_out:1370579 data_segs_in:1370578 send 16.4Gbps
     pacing_rate 32.6Gbps delivery_rate 1.72Gbps delivered:1370579
     busy:26920ms unacked:1 rcv_rtt:34.615 rcv_space:65920
     rcv_ssthresh:65535 minrtt:0.015 snd_wnd:65536

While we could argue this patch fixes a bug with RTAX_RTO_MIN, I do not add a Fixes: tag, so that we can soak it a bit before asking backports to stable branches.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
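The derivation itself is simple arithmetic; a sketch of the idea follows (names and placement are approximations, not the exact helper added by the patch):

    #include <linux/minmax.h>
    #include <linux/types.h>

    /* Cap the delayed-ACK maximum one tick below rto_min, so the delayed
     * ACK always goes out before the peer's retransmit timer can fire.
     */
    static u32 delack_max_from_rto_min(u32 rto_min, u32 delack_max)
    {
            u32 from_rto_min = max_t(u32, 1, rto_min - 1);

            return min_t(u32, delack_max, from_rto_min);
    }

With rto_min set to 5 ms this yields a 4 ms cap, consistent with the "ato:4" shown in the ss output above.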
2023-10-01  ipsec: Select CRYPTO_AEAD  (Herbert Xu)
Select CRYPTO_AEAD so that crypto_has_aead is available. Fixes: 1383e2ab102c ("ipsec: Stop using crypto_has_alg") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202309202112.33V1Ezb1-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-09-29  wifi: mac80211: add back SPDX identifier  (Johannes Berg)
Looks like I lost that by accident, add it back. Fixes: 076fc8775daf ("wifi: cfg80211: remove wdev mutex") Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-29  wifi: mac80211: fix ieee80211_drop_unencrypted_mgmt return type/value  (Johannes Berg)
Somehow, I managed to botch this and pretty much completely break wifi. My original patch did contain these changes, but I seem to have lost them before sending to the list. Fix it now. Reported-and-tested-by: Kalle Valo <kvalo@kernel.org> Fixes: 6c02fab72429 ("wifi: mac80211: split ieee80211_drop_unencrypted_mgmt() return value") Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-29  bpf, sockmap: Reject sk_msg egress redirects to non-TCP sockets  (Jakub Sitnicki)
With a SOCKMAP/SOCKHASH map and an sk_msg program user can steer messages sent from one TCP socket (s1) to actually egress from another TCP socket (s2):

  tcp_bpf_sendmsg(s1)                   // = sk_prot->sendmsg
    tcp_bpf_send_verdict(s1)            // __SK_REDIRECT case
      tcp_bpf_sendmsg_redir(s2)
        tcp_bpf_push_locked(s2)
          tcp_bpf_push(s2)
            tcp_rate_check_app_limited(s2) // expects tcp_sock
            tcp_sendmsg_locked(s2)         // ditto

There is a hard-coded assumption in the call-chain, that the egress socket (s2) is a TCP socket.

However in commit 122e6c79efe1 ("sock_map: Update sock type checks for UDP") we have enabled redirects to non-TCP sockets. This was done for the sake of BPF sk_skb programs. There was no intention to support the sk_msg send-to-egress use case.

As a result, attempts to send-to-egress through a non-TCP socket lead to a crash due to invalid downcast from sock to tcp_sock:

  BUG: kernel NULL pointer dereference, address: 000000000000002f
  ...
  Call Trace:
   <TASK>
   ? show_regs+0x60/0x70
   ? __die+0x1f/0x70
   ? page_fault_oops+0x80/0x160
   ? do_user_addr_fault+0x2d7/0x800
   ? rcu_is_watching+0x11/0x50
   ? exc_page_fault+0x70/0x1c0
   ? asm_exc_page_fault+0x27/0x30
   ? tcp_tso_segs+0x14/0xa0
   tcp_write_xmit+0x67/0xce0
   __tcp_push_pending_frames+0x32/0xf0
   tcp_push+0x107/0x140
   tcp_sendmsg_locked+0x99f/0xbb0
   tcp_bpf_push+0x19d/0x3a0
   tcp_bpf_sendmsg_redir+0x55/0xd0
   tcp_bpf_send_verdict+0x407/0x550
   tcp_bpf_sendmsg+0x1a1/0x390
   inet_sendmsg+0x6a/0x70
   sock_sendmsg+0x9d/0xc0
   ? sockfd_lookup_light+0x12/0x80
   __sys_sendto+0x10e/0x160
   ? syscall_enter_from_user_mode+0x20/0x60
   ? __this_cpu_preempt_check+0x13/0x20
   ? lockdep_hardirqs_on+0x82/0x110
   __x64_sys_sendto+0x1f/0x30
   do_syscall_64+0x38/0x90
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

Reject selecting a non-TCP socket as redirect target from a BPF sk_msg program to prevent the crash. When attempted, user will receive an EACCES error from send/sendto/sendmsg() syscall.

Fixes: 122e6c79efe1 ("sock_map: Update sock type checks for UDP")
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20230920102055.42662-1-jakub@cloudflare.com
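The added guard boils down to a socket-type check along these lines (a simplified sketch; where exactly it sits in the verdict path is not shown here, and the helper name is made up):

    #include <net/sock.h>

    /* Only a real TCP socket may be picked as the sk_msg egress target;
     * anything else makes the send path return -EACCES to the caller.
     */
    static bool sk_msg_redirect_target_ok(const struct sock *sk_redir)
    {
            return sk_is_tcp(sk_redir);
    }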
2023-09-29  bpf, sockmap: Do not inc copied_seq when PEEK flag set  (John Fastabend)
When data is peek'd off the receive queue we shouldn't consider it copied from the tcp_sock side. When we increment copied_seq this will confuse tcp_data_ready() because copied_seq can be arbitrarily increased. From the application side it results in poll() operations not waking up when expected.

Notice the tcp stack without BPF recvmsg programs also does not increment copied_seq.

We broke this when we moved copied_seq into recvmsg to only update when actual copy was happening. But, it wasn't working correctly either before because the tcp_data_ready() tried to use the copied_seq value to see if data was read by user yet. See fixes tags.

Fixes: e5c6de5fa0258 ("bpf, sockmap: Incorrectly handling copied_seq")
Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20230926035300.135096-3-john.fastabend@gmail.com
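Schematically, the rule is the following (hypothetical helper name; the real change lives in the sockmap recvmsg path):

    #include <net/tcp.h>

    static void bpf_advance_copied_seq(struct sock *sk, u32 copied, int flags)
    {
            struct tcp_sock *tp = tcp_sk(sk);

            /* A MSG_PEEK read leaves the data in place, so it must not
             * advance copied_seq (tcp_data_ready() keys off this value).
             */
            if (!(flags & MSG_PEEK))
                    WRITE_ONCE(tp->copied_seq, tp->copied_seq + copied);
    }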
2023-09-29  bpf: tcp_read_skb needs to pop skb regardless of seq  (John Fastabend)
Before fix e5c6de5fa0258 tcp_read_skb() would increment the tp->copied_seq value. This (as described in the commit) would cause an error for apps because once that is incremented the application might believe there is no data to be read. Then some apps would stall or abort believing no data is available.

However, the fix is incomplete because it introduces another issue in the skb dequeue. The loop does tcp_recv_skb() in a while loop to consume as many skbs as possible. The problem is the call is

  ... tcp_recv_skb(sk, seq, &offset) ...

where 'seq' is:

  u32 seq = tp->copied_seq;

Now we can hit a case where we've yet incremented copied_seq from the BPF side, but then tcp_recv_skb() fails this test

  ... if (offset < skb->len || (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)) ...

so that instead of returning the skb we call tcp_eat_recv_skb() which frees the skb. This is because the routine believes the SKB has been collapsed per comment:

  /* This looks weird, but this can happen if TCP collapsing
   * splitted a fat GRO packet, while we released socket lock
   * in skb_splice_bits()
   */

This can't happen here; we've unlinked the full SKB and orphaned it. Anyways it would confuse any BPF programs if the data were suddenly moved underneath it.

To fix this situation do the simpler operation and just skb_peek() the data of the queue followed by the unlink. It shouldn't need to check this condition and tcp_read_skb() reads entire skbs so there is no need to handle the 'offset!=0' case as we would see in tcp_read_sock().

Fixes: e5c6de5fa0258 ("bpf, sockmap: Incorrectly handling copied_seq")
Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20230926035300.135096-2-john.fastabend@gmail.com
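The simpler dequeue amounts to the following sketch (locking and the hand-off to the BPF program are omitted, and the helper name is invented for illustration):

    #include <linux/skbuff.h>
    #include <net/sock.h>

    /* Take whatever sits at the head of the receive queue, without
     * re-looking it up by sequence number.
     */
    static struct sk_buff *peek_and_unlink_head(struct sock *sk)
    {
            struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);

            if (skb)
                    __skb_unlink(skb, &sk->sk_receive_queue);
            return skb;
    }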
2023-09-28  netfilter: nf_tables: Utilize NLA_POLICY_NESTED_ARRAY  (Phil Sutter)
Mark attributes which are supposed to be arrays of nested attributes with known content as such. Originally suggested for NFTA_RULE_EXPRESSIONS only, but does apply to others as well. Suggested-by: Florian Westphal <fw@strlen.de> Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Florian Westphal <fw@strlen.de>
2023-09-28  netfilter: nf_tables: missing extended netlink error in lookup functions  (Pablo Neira Ayuso)
Set netlink extended error reporting for several lookup functions which allows userspace to infer what is the error cause. Reported-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Florian Westphal <fw@strlen.de>
2023-09-28  netfilter: nf_nat: undo erroneous tcp edemux lookup after port clash  (Florian Westphal)
In commit 03a3ca37e4c6 ("netfilter: nf_nat: undo erroneous tcp edemux lookup") I fixed a problem with source port clash resolution and DNAT.

A very similar issue exists with REDIRECT (DNAT to local address) and port rewrites. Consider two port redirections done at the prerouting hook:

  -p tcp --port 1111 -j REDIRECT --to-ports 80
  -p tcp --port 1112 -j REDIRECT --to-ports 80

It's possible, however unlikely, that we get two connections sharing the same source port, i.e.

  saddr:12345 -> daddr:1111
  saddr:12345 -> daddr:1112

This works on the sender side because the destination address is different.

After prerouting, nat will change the first syn packet to saddr:12345 -> daddr:80, the stack will send a syn-ack back and the 3whs completes.

The second syn however will result in a source port clash: after the dnat rewrite, the new syn has

  saddr:12345 -> daddr:80

This collides with the reply direction of the first connection. The NAT engine will handle this in the input nat hook by also altering the source port, so we get for example

  saddr:13535 -> daddr:80

This allows the stack to send back a syn-ack to that address. Reverse NAT during POSTROUTING will rewrite the packet to daddr:1112 -> saddr:12345 again. The tuple will be unique on-wire and the peer can process it normally.

Problem is when the ACK packet comes in: after prerouting, the packet payload is mangled to saddr:12345 -> daddr:80. Early demux will assign the 3whs-completing ACK skb to the first connection's established socket.

This will then elicit a challenge ack from the first connection's socket rather than complete the connection of the second. The second connection can never complete.

Detect this condition by checking if the associated socket's port matches the conntrack entry's reply tuple. If it doesn't, then input source address translation mangled the payload after early demux and the found sk is incorrect. Discard this sk and let the TCP stack do another lookup.

Signed-off-by: Florian Westphal <fw@strlen.de>
2023-09-28  pktgen: Introducing 'SHARED' flag for testing with non-shared skb  (Liang Chen)
Currently, skbs generated by pktgen always have their reference count incremented before transmission, causing their reference count to be always greater than 1, leading to two issues: 1. Only the code paths for shared skbs can be tested. 2. In certain situations, skbs can only be released by pktgen. To enhance testing comprehensiveness, we are introducing the "SHARED" flag to indicate whether an SKB is shared. This flag is enabled by default, aligning with the current behavior. However, disabling this flag allows skbs with a reference count of 1 to be transmitted. So we can test non-shared skbs and code paths where skbs are released within the stack. Signed-off-by: Liang Chen <liangchen.linux@gmail.com> Reviewed-by: Benjamin Poirier <bpoirier@nvidia.com> Link: https://lore.kernel.org/r/20230920125658.46978-2-liangchen.linux@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-09-28  pktgen: Automate flag enumeration for unknown flag handling  (Liang Chen)
When specifying an unknown flag, it will print all available flags. Currently, these flags are provided as fixed strings, which requires manual updates when flags change. Replacing it with automated flag enumeration. Signed-off-by: Liang Chen <liangchen.linux@gmail.com> Signed-off-by: Benjamin Poirier <bpoirier@nvidia.com> Link: https://lore.kernel.org/r/20230920125658.46978-1-liangchen.linux@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-09-27  SUNRPC/TLS: Lock the lower_xprt during the tls handshake  (Anna Schumaker)
Otherwise we run the risk of having the lower_xprt freed from underneath us, causing an oops that looks like this: [ 224.150698] BUG: kernel NULL pointer dereference, address: 0000000000000018 [ 224.150951] #PF: supervisor read access in kernel mode [ 224.151117] #PF: error_code(0x0000) - not-present page [ 224.151278] PGD 0 P4D 0 [ 224.151361] Oops: 0000 [#1] PREEMPT SMP NOPTI [ 224.151499] CPU: 2 PID: 99 Comm: kworker/u10:6 Not tainted 6.6.0-rc3-g6465e260f487 #41264 a00b0960990fb7bc6d6a330ee03588b67f08a47b [ 224.151977] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022 [ 224.152216] Workqueue: xprtiod xs_tcp_tls_setup_socket [sunrpc] [ 224.152434] RIP: 0010:xs_tcp_tls_setup_socket+0x3cc/0x7e0 [sunrpc] [ 224.152643] Code: 00 00 48 8b 7c 24 08 e9 f3 01 00 00 48 83 7b c0 00 0f 85 d2 01 00 00 49 8d 84 24 f8 05 00 00 48 89 44 24 10 48 8b 00 48 89 c5 <4c> 8b 68 18 66 41 83 3f 0a 75 71 45 31 ff 4c 89 ef 31 f6 e8 5c 76 [ 224.153246] RSP: 0018:ffffb00ec060fd18 EFLAGS: 00010246 [ 224.153427] RAX: 0000000000000000 RBX: ffff8c06c2e53e40 RCX: 0000000000000001 [ 224.153652] RDX: ffff8c073bca2408 RSI: 0000000000000282 RDI: ffff8c06c259ee00 [ 224.153868] RBP: 0000000000000000 R08: ffffffff9da55aa0 R09: 0000000000000001 [ 224.154084] R10: 00000034306c30f1 R11: 0000000000000002 R12: ffff8c06c2e51800 [ 224.154300] R13: ffff8c06c355d400 R14: 0000000004208160 R15: ffff8c06c2e53820 [ 224.154521] FS: 0000000000000000(0000) GS:ffff8c073bd00000(0000) knlGS:0000000000000000 [ 224.154763] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 224.154940] CR2: 0000000000000018 CR3: 0000000062c1e000 CR4: 0000000000750ee0 [ 224.155157] PKRU: 55555554 [ 224.155244] Call Trace: [ 224.155325] <TASK> [ 224.155395] ? __die_body+0x68/0xb0 [ 224.155507] ? page_fault_oops+0x34c/0x3a0 [ 224.155635] ? _raw_spin_unlock_irqrestore+0xe/0x40 [ 224.155793] ? exc_page_fault+0x7a/0x1b0 [ 224.155916] ? asm_exc_page_fault+0x26/0x30 [ 224.156047] ? xs_tcp_tls_setup_socket+0x3cc/0x7e0 [sunrpc ae3a15912ae37fd51dafbdbc2dbd069117f8f5c8] [ 224.156367] ? xs_tcp_tls_setup_socket+0x2fe/0x7e0 [sunrpc ae3a15912ae37fd51dafbdbc2dbd069117f8f5c8] [ 224.156697] ? __pfx_xs_tls_handshake_done+0x10/0x10 [sunrpc ae3a15912ae37fd51dafbdbc2dbd069117f8f5c8] [ 224.157013] process_scheduled_works+0x24e/0x450 [ 224.157158] worker_thread+0x21c/0x2d0 [ 224.157275] ? __pfx_worker_thread+0x10/0x10 [ 224.157409] kthread+0xe8/0x110 [ 224.157510] ? __pfx_kthread+0x10/0x10 [ 224.157628] ret_from_fork+0x37/0x50 [ 224.157741] ? __pfx_kthread+0x10/0x10 [ 224.157859] ret_from_fork_asm+0x1b/0x30 [ 224.157983] </TASK> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2023-09-27  Revert "SUNRPC dont update timeout value on connection reset"  (Trond Myklebust)
This reverts commit 88428cc4ae7abcc879295fbb19373dd76aad2bdd. The problem this commit is intended to fix was comprehensively fixed in commit 7de62bc09fe6 ("SUNRPC dont update timeout value on connection reset"). Since then, this commit has been preventing the correct timeout of soft mounted requests. Cc: stable@vger.kernel.org # 5.9.x: 09252177d5f9: SUNRPC: Handle major timeout in xprt_adjust_timeout() Cc: stable@vger.kernel.org # 5.9.x: 7de62bc09fe6: SUNRPC dont update timeout value on connection reset Cc: stable@vger.kernel.org # 5.9.x Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2023-09-27  SUNRPC: Fail quickly when server does not recognize TLS  (Chuck Lever)
rpcauth_checkverf() should return a distinct error code when a server recognizes the AUTH_TLS probe but does not support TLS so that the client's header decoder can respond appropriately and quickly. No retries are necessary in this case, since the server has already affirmatively answered "TLS is unsupported".

Suggested-by: Trond Myklebust <trondmy@hammerspace.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2023-09-26  wifi: mac80211: expand __ieee80211_data_to_8023() status  (Johannes Berg)
Make __ieee80211_data_to_8023() return more individual drop reasons instead of just doing RX_DROP_U_INVALID_8023. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-26  wifi: mac80211: split ieee80211_drop_unencrypted_mgmt() return value  (Johannes Berg)
This has many different reasons, split the return value into the individual reasons for better traceability. Also, since symbolic tracing doesn't work for these, add a few comments for the numbering. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-26  wifi: mac80211: remove RX_DROP_UNUSABLE  (Johannes Berg)
Convert all instances of RX_DROP_UNUSABLE to indicate a better reason, and then remove RX_DROP_UNUSABLE. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-26  wifi: mac80211: fix check for unusable RX result  (Johannes Berg)
If we just check "result & RX_DROP_UNUSABLE", this really only works by accident, because SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE happens to have the value 1, and SKB_DROP_REASON_SUBSYS_MAC80211_MONITOR is 2.

Fix this to really check the entire subsys mask for the value, so it doesn't matter what the subsystem value is.

Fixes: 7f4e09700bdc ("wifi: mac80211: report all unusable beacon frames")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
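The corrected test compares the whole subsystem field rather than a single bit, roughly as in this sketch (the mask/shift macro names are taken from the skb drop-reason infrastructure; treat the exact expression as an illustration, not the patch itself):

    #include <net/dropreason.h>

    static bool drop_is_mac80211_unusable(u32 reason)
    {
            /* Compare the full subsystem field, not one bit of it. */
            return (reason & SKB_DROP_REASON_SUBSYS_MASK) ==
                   ((u32)SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE <<
                    SKB_DROP_REASON_SUBSYS_SHIFT);
    }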
2023-09-26  wifi: cfg80211: add local_state_change to deauth trace  (Johannes Berg)
Add the local_state_change request to the deauth trace for easier debugging. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-26  wifi: mac80211: Create resources for disabled links  (Benjamin Berg)
When associating to an MLD AP, links may be disabled. Create all resources associated with a disabled link so that we can later enable it without having to create these resources on the fly. Fixes: 6d543b34dbcf ("wifi: mac80211: Support disabled links during association") Signed-off-by: Benjamin Berg <benjamin.berg@intel.com> Link: https://lore.kernel.org/r/20230925173028.f9afdb26f6c7.I4e6e199aaefc1bf017362d64f3869645fa6830b5@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-26  wifi: cfg80211: avoid leaking stack data into trace  (Benjamin Berg)
If the structure is not initialized then boolean types might be copied into the tracing data without being initialised. This causes data from the stack to leak into the trace and also triggers a UBSAN failure which can easily be avoided here. Signed-off-by: Benjamin Berg <benjamin.berg@intel.com> Link: https://lore.kernel.org/r/20230925171855.a9271ef53b05.I8180bae663984c91a3e036b87f36a640ba409817@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: allow transmitting EAPOL frames with tainted key  (Wen Gong)
Lower layer device driver stop/wake TX by calling ieee80211_stop_queue()/ ieee80211_wake_queue() while hw scan. Sometimes hw scan and PTK rekey are running in parallel, when M4 sent from wpa_supplicant arrive while the TX queue is stopped, then the M4 will pending send, and then new key install from wpa_supplicant. After TX queue wake up by lower layer device driver, the M4 will be dropped by below call stack. When key install started, the current key flag is set KEY_FLAG_TAINTED in ieee80211_pairwise_rekey(), and then mac80211 wait key install complete by lower layer device driver. Meanwhile ieee80211_tx_h_select_key() will return TX_DROP for the M4 in step 12 below, and then ieee80211_free_txskb() called by ieee80211_tx_dequeue(), so the M4 will not send and free, then the rekey process failed becaue AP not receive M4. Please see details in steps below. There are a interval between KEY_FLAG_TAINTED set for current key flag and install key complete by lower layer device driver, the KEY_FLAG_TAINTED is set in this interval, all packet including M4 will be dropped in this interval, the interval is step 8~13 as below. issue steps: TX thread install key thread 1. stop_queue -idle- 2. sending M4 -idle- 3. M4 pending -idle- 4. -idle- starting install key from wpa_supplicant 5. -idle- =>ieee80211_key_replace() 6. -idle- =>ieee80211_pairwise_rekey() and set currently key->flags |= KEY_FLAG_TAINTED 7. -idle- =>ieee80211_key_enable_hw_accel() 8. -idle- =>drv_set_key() and waiting key install complete from lower layer device driver 9. wake_queue -waiting state- 10. re-sending M4 -waiting state- 11. =>ieee80211_tx_h_select_key() -waiting state- 12. drop M4 by KEY_FLAG_TAINTED -waiting state- 13. -idle- install key complete with success/fail success: clear flag KEY_FLAG_TAINTED fail: start disconnect Hence add check in step 11 above to allow the EAPOL send out in the interval. If lower layer device driver use the old key/cipher to encrypt the M4, then AP received/decrypt M4 correctly, after M4 send out, lower layer device driver install the new key/cipher to hardware and return success. If lower layer device driver use new key/cipher to send the M4, then AP will/should drop the M4, then it is same result with this issue, AP will/ should kick out station as well as this issue. 
issue log: kworker/u16:4-5238 [000] 6456.108926: stop_queue: phy1 queue:0, reason:0 wpa_supplicant-961 [003] 6456.119737: rdev_tx_control_port: wiphy_name=phy1 name=wlan0 ifindex=6 dest=ARRAY[9e, 05, 31, 20, 9b, d0] proto=36488 unencrypted=0 wpa_supplicant-961 [003] 6456.119839: rdev_return_int_cookie: phy1, returned 0, cookie: 504 wpa_supplicant-961 [003] 6456.120287: rdev_add_key: phy1, netdev:wlan0(6), key_index: 0, mode: 0, pairwise: true, mac addr: 9e:05:31:20:9b:d0 wpa_supplicant-961 [003] 6456.120453: drv_set_key: phy1 vif:wlan0(2) sta:9e:05:31:20:9b:d0 cipher:0xfac04, flags=0x9, keyidx=0, hw_key_idx=0 kworker/u16:9-3829 [001] 6456.168240: wake_queue: phy1 queue:0, reason:0 kworker/u16:9-3829 [001] 6456.168255: drv_wake_tx_queue: phy1 vif:wlan0(2) sta:9e:05:31:20:9b:d0 ac:0 tid:7 kworker/u16:9-3829 [001] 6456.168305: cfg80211_control_port_tx_status: wdev(1), cookie: 504, ack: false wpa_supplicant-961 [003] 6459.167982: drv_return_int: phy1 - -110 issue call stack: nl80211_frame_tx_status+0x230/0x340 [cfg80211] cfg80211_control_port_tx_status+0x1c/0x28 [cfg80211] ieee80211_report_used_skb+0x374/0x3e8 [mac80211] ieee80211_free_txskb+0x24/0x40 [mac80211] ieee80211_tx_dequeue+0x644/0x954 [mac80211] ath10k_mac_tx_push_txq+0xac/0x238 [ath10k_core] ath10k_mac_op_wake_tx_queue+0xac/0xe0 [ath10k_core] drv_wake_tx_queue+0x80/0x168 [mac80211] __ieee80211_wake_txqs+0xe8/0x1c8 [mac80211] _ieee80211_wake_txqs+0xb4/0x120 [mac80211] ieee80211_wake_txqs+0x48/0x80 [mac80211] tasklet_action_common+0xa8/0x254 tasklet_action+0x2c/0x38 __do_softirq+0xdc/0x384 Signed-off-by: Wen Gong <quic_wgong@quicinc.com> Link: https://lore.kernel.org/r/20230801064751.25803-1-quic_wgong@quicinc.com Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: reject MLO channel configuration if not supported  (Benjamin Berg)
Reject configuring a channel for MLO if either EHT is not supported or the BSS does not have the correct ML element. This avoids trying to do a multi-link association with a misconfigured AP. Signed-off-by: Benjamin Berg <benjamin.berg@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230920211508.80c3b8e5a344.Iaa2d466ee6280994537e1ae7ab9256a27934806f@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: report per-link error during association  (Benjamin Berg)
With this cfg80211 can report the link that caused the error to userspace which is then able to react to it by e.g. removing the link from the association and retrying. Signed-off-by: Benjamin Berg <benjamin.berg@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230920211508.275fc7f5c426.I8086c0fdbbf92537d6a8b8e80b33387fcfd5553d@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: cfg80211: report per-link errors during association  (Benjamin Berg)
When one of the links (other than the assoc_link) is misconfigured and cannot work the association will fail. However, userspace was not able to tell that the operation only failed because of a problem with one of the links. Fix this, by allowing the driver to set a per-link error code and reporting the (first) offending link by setting the bad_attr accordingly. This only allows us to report the first error, but that is sufficient for userspace to e.g. remove the offending link and retry. Signed-off-by: Benjamin Berg <benjamin.berg@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230920211508.ebe63c0bd513.I40799998f02bf987acee1501a2522dc98bb6eb5a@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: support antenna control in injection  (Johannes Berg)
Support antenna control for injection by parsing the antenna radiotap field (which may be presented multiple times) and telling the driver about the resulting antenna bitmap. Of course there's no guarantee the driver will actually honour this, just like any other injection control.

If misconfigured, i.e. the injected HT/VHT MCS needs more chains than antennas are configured, the bitmap is reset to zero, indicating no selection.

For now this is only set up for two antennas so we keep more free bits, but that can be trivially extended if any driver implements support for it that can deal with hardware with more antennas.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230920211508.f71001aa4da9.I00ccb762a806ea62bc3d728fa3a0d29f4f285eeb@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: support handling of advertised TID-to-link mapping  (Ayala Beker)
Support handling of advertised TID-to-link mapping elements received in a beacon. These elements are used by AP MLD to disable specific links and force all clients to stop using these links. By default if no TID-to-link mapping is advertised, all TIDs shall be mapped to all links. Signed-off-by: Ayala Beker <ayala.beker@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230920211508.623c4b692ff9.Iab0a6f561d85b8ab6efe541590985a2b6e9e74aa@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: add support for parsing TID to Link mapping element  (Ayala Beker)
Add the relevant definitions for TID to Link mapping element according to the P802.11be_D4.0. Signed-off-by: Ayala Beker <ayala.beker@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230920211508.9ea9b0b4412a.I2281ab2c70e8b43a39032dc115db6a80f1f0b3f4@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2023-09-25  wifi: mac80211: Notify the low level driver on change in MLO valid links  (Ilan Peer)
Notify the low level driver when there is change in the valid links. Signed-off-by: Ilan Peer <ilan.peer@intel.com> Signed-off-by: Gregory Greenman <gregory.greenman@intel.com> Link: https://lore.kernel.org/r/20230920211508.4fc85b0a51b0.I64238e0e892709a2bd4764b3bca93cdcf021e2fd@changeid Signed-off-by: Johannes Berg <johannes.berg@intel.com>