path: root/net/ipv4
2020-11-14  tcp: uninline tcp_stream_memory_free()  (Eric Dumazet)

Both IPv4 and IPv6 need it via a function pointer. The following patch will avoid the indirect call.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-14  IPv4: RTM_GETROUTE: Add RTA_ENCAP to result  (Oliver Herms)

This patch adds an IPv4 route's encapsulation attribute to the result of netlink RTM_GETROUTE requests (e.g. ip route get 192.0.2.1).

Signed-off-by: Oliver Herms <oliver.peter.herms@gmail.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20201113085517.GA1307262@tws
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-12  Merge https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-12  net: ipconfig: Avoid spurious blank lines in boot log  (Thierry Reding)

When dumping the name and NTP servers advertised by DHCP, a blank line is emitted if either of the lists is empty. This can lead to confusing issues such as the blank line getting flagged as a warning. This happens because the blank line is the result of pr_cont("\n"), which may see its log level corrupted by some other driver concurrently writing to the console. Fix this by making sure that the terminating newline is only emitted if at least one entry in the lists was printed before.

Reported-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20201110073757.1284594-1-thierry.reding@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

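A minimal sketch of the pattern the fix describes, written against the ipconfig.c context (the helper name print_server_list and the label handling are illustrative, not the actual symbols; NONE is the cpu_to_be32(INADDR_NONE) sentinel used in that file):

static void print_server_list(const char *label, const __be32 *addrs, int count)
{
	bool printed = false;
	int i;

	for (i = 0; i < count; i++) {
		if (addrs[i] == NONE)
			continue;
		if (!printed)
			pr_info("%s", label);	/* opens the continued line */
		pr_cont(" %pI4", &addrs[i]);
		printed = true;
	}
	/* Only terminate the line if something was printed, so we never
	 * emit a bare pr_cont("\n") whose level could be corrupted by a
	 * concurrent printk from another driver. */
	if (printed)
		pr_cont("\n");
}
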
2020-11-12  net: udp: remove redundant initialization in udp_gro_complete  (Menglong Dong)

The initialization of 'err' with '-ENOSYS' is redundant and can be removed, as the value is reassigned before it is used.

Changes since v1:
- Move the err declaration below struct sock *sk

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/5faa01d5.1c69fb81.8451c.cb5b@mx.google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-12  net: udp: fix IP header access and skb lookup on Fast/frag0 UDP GRO  (Alexander Lobakin)

udp{4,6}_lib_lookup_skb() use ip{,v6}_hdr() to get the IP header of the packet. While that is probably fine for non-frag0 paths, these helpers will point to junk on Fast/frag0 GRO when all headers are located in the frags. As a result, the sk/skb lookup may fail or give wrong results. To support both GRO modes, skb_gro_network_header() can be used. To avoid modifying the original functions, add private versions of udp{4,6}_lib_lookup_skb() that only perform the correct sk lookups on GRO. Present since the introduction of "application-level" UDP GRO in 4.7-rc1. Misc: replace totally unneeded ternaries with plain ifs.

Fixes: a6024562ffd7 ("udp: Add GRO functions to UDP socket")
Suggested-by: Willem de Bruijn <willemb@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

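A rough sketch of what such a GRO-aware IPv4 lookup looks like; the function name udp4_gro_lookup_skb is used here for illustration and the merged patch may differ in detail:

static struct sock *udp4_gro_lookup_skb(struct sk_buff *skb,
					__be16 sport, __be16 dport)
{
	/* On Fast/frag0 paths the IP header lives in the frags, so use the
	 * GRO offset-aware accessor instead of ip_hdr(skb). */
	const struct iphdr *iph = skb_gro_network_header(skb);

	return __udp4_lib_lookup(dev_net(skb->dev), iph->saddr, sport,
				 iph->daddr, dport, inet_iif(skb),
				 inet_sdif(skb), &udp_table, NULL);
}
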
2020-11-12  net: udp: fix UDP header access on Fast/frag0 UDP GRO  (Alexander Lobakin)

UDP GRO uses udp_hdr(skb) in its .gro_receive() callback. While that is probably fine for non-frag0 paths (when all headers or even the entire frame are already in the skb head), this inline points to junk when using Fast GRO (napi_gro_frags(), or napi_gro_receive() with only the Ethernet header in the skb head and all the rest in the frags) and breaks GRO packet compilation and the packet flow itself. To support both modes, skb_gro_header_fast() + skb_gro_header_slow() are typically used. UDP even has an inline helper that makes use of them, udp_gro_udphdr(). Use that instead of the troublemaking udp_hdr() to get rid of the out-of-order deliveries. Present since the introduction of plain UDP GRO in 5.0-rc1.

Fixes: e20cf8d3f1f7 ("udp: implement GRO for plain UDP sockets.")
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

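For reference, the fast/slow accessor pattern the fix relies on looks roughly like the udp_gro_udphdr() helper in include/net/udp.h (paraphrased here, not quoted verbatim):

static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
{
	unsigned int off = skb_gro_offset(skb);
	unsigned int hlen = off + sizeof(struct udphdr);
	struct udphdr *uh;

	/* Fast path: the header is already available in the frag0 area. */
	uh = skb_gro_header_fast(skb, off);
	if (skb_gro_header_hard(skb, hlen))
		/* Slow path: pull the header into the skb head first. */
		uh = skb_gro_header_slow(skb, hlen, off);

	return uh;
}
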
2020-11-11  ipv4: Set nexthop flags in a more consistent way  (Ido Schimmel)

Be more consistent about the way in which the nexthop flags are set and set them in one go.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20201110102553.1924232-1-idosch@idosch.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-11  net: evaluate net.ipvX.conf.all.disable_policy and disable_xfrm  (Vincent Bernat)

disable_policy and disable_xfrm are per-interface sysctls to disable IPsec policy or encryption on an interface. However, while an "all" variant is exposed, it was a noop since it was never evaluated. Use the usual "or" logic for this kind of sysctl.

Signed-off-by: Vincent Bernat <vincent@bernat.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

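The "or" logic referred to above is the usual pattern for per-device IPv4 sysctls, where the effective value is the per-device setting OR-ed with the "all" setting. A sketch based on the existing IN_DEV_ORCONF helper in include/linux/inetdevice.h; the NOPOLICY attribute token in the usage line is an assumption about how disable_policy maps onto the devconf enum:

#define IN_DEV_ORCONF(in_dev, attr) \
	(IPV4_DEVCONF_ALL(dev_net((in_dev)->dev), attr) || \
	 IN_DEV_CONF_GET((in_dev), attr))

	/* Example use: skip IPsec policy checks when either the per-device
	 * or the "all" disable_policy sysctl is set. */
	if (IN_DEV_ORCONF(in_dev, NOPOLICY))
		goto skip_policy_check;
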
2020-11-10  inet: udp{4|6}_lib_lookup_skb() skb argument is const  (Eric Dumazet)

The skb is needed only to fetch the keys for the lookup. Both functions are used from the GRO stack; we do not want accidental modification of the skb.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Alexander Lobakin <alobakin@pm.me>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-10  net: Update window_clamp if SOCK_RCVBUF is set  (Mao Wenan)

When net.ipv4.tcp_syncookies=1 and a SYN flood happens, cookie_v4_check or cookie_v6_check tries to redo what tcp_v4_send_synack or tcp_v6_send_synack did. rsk_window_clamp will be changed if SOCK_RCVBUF is set, which makes rcv_wscale different: the client still operates with the initial window scale and can overshoot the granted window, because the client uses the initial scale while the local server uses the new scale to advertise the window value, and the session works abnormally.

Fixes: e88c64f0a425 ("tcp: allow effective reduction of TCP's rcv-buffer via setsockopt")
Signed-off-by: Mao Wenan <wenan.mao@linux.alibaba.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/1604967391-123737-1-git-send-email-wenan.mao@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-10  net: ipv4: remove redundant initialization in inet_rtm_deladdr  (Menglong Dong)

The initialization of 'err' with '-EINVAL' is redundant and can be removed, as the value is reassigned before it is used.

Changes since v1:
- Remove redundant empty line

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/20201108010541.12432-1-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-09  net: remove ip_tunnel_get_stats64  (Heiner Kallweit)

After having migrated all users, remove ip_tunnel_get_stats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-09  ipv4/ipv6: switch to dev_get_tstats64  (Heiner Kallweit)

Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-09  vti: switch to dev_get_tstats64  (Heiner Kallweit)

Replace ip_tunnel_get_stats64() with the new identical core function dev_get_tstats64().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-09  net: udp: remove redundant initialization in udp_dump_one  (Menglong Dong)

The initialization of 'err' with '-EINVAL' is redundant and can be removed, as the value is reassigned before it is used.

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604644960-48378-2-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-09  net: udp: remove redundant initialization in udp_send_skb  (Menglong Dong)

The initialization of 'err' with 0 is redundant and can be removed, as it is set by ip_send_skb and not used before that.

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604644960-48378-4-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-09  tunnels: Fix off-by-one in lower MTU bounds for ICMP/ICMPv6 replies  (Stefano Brivio)

Jianlin reports that a bridged IPv6 VXLAN endpoint, carrying IPv6 packets over a link with a PMTU estimation of exactly 1350 bytes, won't trigger ICMPv6 Packet Too Big replies when the encapsulated datagrams exceed said PMTU value. VXLAN over IPv6 adds 70 bytes of overhead, so an ICMPv6 reply indicating 1280 bytes as the inner MTU would be legitimate and expected.

This comes from an off-by-one error I introduced in checks added as part of commit 4cb47a8644cc ("tunnels: PMTU discovery support for directly bridged IP packets"), whose purpose was to prevent sending ICMPv6 Packet Too Big messages with an MTU lower than the smallest permissible IPv6 link MTU, i.e. 1280 bytes.

In iptunnel_pmtud_check_icmpv6(), avoid triggering a reply only if the advertised MTU would be less than, and not equal to, 1280 bytes. Also fix the analogous comparison for IPv4, that is, skip the ICMP reply only if the resulting MTU is strictly less than 576 bytes.

This becomes apparent while running the net/pmtu.sh bridged VXLAN or GENEVE selftests with adjusted lower-link MTU values. Using e.g. GENEVE, setting ll_mtu to the values reported below, in the test_pmtu_ipvX_over_bridged_vxlanY_or_geneveY_exception() test function, we can see failures on the following tests:

  test                           | ll_mtu
  -------------------------------|--------
  pmtu_ipv4_br_geneve4_exception |    626
  pmtu_ipv6_br_geneve4_exception |   1330
  pmtu_ipv6_br_geneve6_exception |   1350

owing to the different tunneling overheads implied by the corresponding configurations.

Reported-by: Jianlin Shi <jishi@redhat.com>
Fixes: 4cb47a8644cc ("tunnels: PMTU discovery support for directly bridged IP packets")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Link: https://lore.kernel.org/r/4f5fc2f33bfdf8409549fafd4f952b008bf04d63.1604681709.git.sbrivio@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

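The fix boils down to turning a "less than or equal" bound into a strict "less than" against the minimum link MTUs; a sketch of the intended checks (the surrounding iptunnel_pmtud_* code is omitted and the IPv4 minimum is written as a literal, as in the description above):

	/* ICMPv6: only suppress the Packet Too Big reply when the MTU we
	 * would advertise is strictly below the minimum IPv6 link MTU. */
	if (mtu < IPV6_MIN_MTU)		/* was: mtu <= IPV6_MIN_MTU */
		return 0;

	/* ICMPv4: likewise, suppress only when strictly below 576 bytes. */
	if (mtu < 576)			/* was: mtu <= 576 */
		return 0;
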
2020-11-09  net: udp: introduce UDP_MIB_MEMERRORS for udp_mem  (Menglong Dong)

When udp_memory_allocated is at the limit, __udp_enqueue_schedule_skb will return -ENOBUFS, and the skb will be dropped in __udp_queue_rcv_skb without any counter being updated. That makes it hard to find out what happened when this occurs, so introduce a UDP_MIB_MEMERRORS counter to do the job. The change is friendly to existing users, such as netstat:

  $ netstat -u -s
  Udp:
      0 packets received
      639 packets to unknown port received.
      158689 packet receive errors
      180022 packets sent
      RcvbufErrors: 20930
      MemErrors: 137759
  UdpLite:
  IpExt:
      InOctets: 257426235
      OutOctets: 257460598
      InNoECTPkts: 181177

v2:
- Fix some alignment problems

Signed-off-by: Menglong Dong <dong.menglong@zte.com.cn>
Link: https://lore.kernel.org/r/1604627354-43207-1-git-send-email-dong.menglong@zte.com.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-07  net: ipv4: convert tasklets to use new tasklet_setup() API  (Allen Pais)

In preparation for unconditionally passing the struct tasklet_struct pointer to all tasklet callbacks, switch to using the new tasklet_setup() and from_tasklet() to pass the tasklet pointer explicitly.

Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <apais@linux.microsoft.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

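The general shape of such a conversion, with illustrative names (my_ctx, my_work_fn) rather than the actual net/ipv4 users:

#include <linux/interrupt.h>

struct my_ctx {
	struct tasklet_struct tasklet;
	int pending;
};

/* New-style callback: receives the tasklet pointer and recovers the
 * containing structure with from_tasklet(), a container_of() wrapper. */
static void my_work_fn(struct tasklet_struct *t)
{
	struct my_ctx *ctx = from_tasklet(ctx, t, tasklet);

	ctx->pending = 0;
}

static void my_ctx_init(struct my_ctx *ctx)
{
	/* Replaces tasklet_init(&ctx->tasklet, my_work_fn, (unsigned long)ctx). */
	tasklet_setup(&ctx->tasklet, my_work_fn);
}
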
2020-11-06  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Remove in-kernel route notifications when nexthop changes  (Ido Schimmel)

Remove in-kernel route notifications when the configuration of their nexthop changes. These notifications are unnecessary because the route still uses the same nexthop ID. A separate notification for the nexthop change itself is now sent in the nexthop notification chain.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Replay nexthops when registering a notifier  (Ido Schimmel)

When registering a new notifier to the nexthop notification chain, replay all the existing nexthops to the new notifier so that it will have a complete picture of the available nexthops.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Pass extack to register_nexthop_notifier()  (Ido Schimmel)

This will be used by the next patch, which extends the function to replay all the existing nexthops to the notifier block being registered. Device drivers will be able to pass extack to the function since it is passed to them upon reload from devlink.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Emit a notification when a nexthop group is reduced  (Ido Schimmel)

When a single nexthop is deleted, the configuration of all the groups using the nexthop is effectively modified. In this case, emit a notification in the nexthop notification chain for each modified group so that listeners would not need to keep track of which nexthops are members of which groups. In the rare cases where the notification fails, emit an error to the kernel log. This is done by allocating extack on the stack and printing the error logged by the listener that rejected the notification.

Changes since RFC:
* Allocate extack on the stack

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Emit a notification when a nexthop group is modified  (Ido Schimmel)

When a single nexthop is replaced, the configuration of all the groups using the nexthop is effectively modified. In this case, emit a notification in the nexthop notification chain for each modified group so that listeners would not need to keep track of which nexthops are members of which groups. The notification can only be emitted after the new configuration (i.e., 'struct nh_info') is pointed at by the old shell (i.e., 'struct nexthop'). Before that, the configuration of the nexthop groups is still the same as before the replacement.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Emit a notification when a single nexthop is replaced  (Ido Schimmel)

The notification is emitted after all the validation checks were performed, but before the new configuration (i.e., 'struct nh_info') is pointed at by the old shell (i.e., 'struct nexthop'). This prevents the need to perform rollback in case the notification is vetoed. The next patch will also emit a replace notification for all the nexthop groups in which the nexthop is used.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Emit a notification when a nexthop group is replaced  (Ido Schimmel)

Emit a notification in the nexthop notification chain when an existing nexthop group is replaced. The notification is emitted after all the validation checks were performed, but before the new configuration (i.e., 'struct nh_grp') is pointed at by the old shell (i.e., 'struct nexthop'). This prevents the need to perform rollback in case the notification is vetoed.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Emit a notification when a nexthop is added  (Ido Schimmel)

Emit a notification in the nexthop notification chain when a new nexthop is added (not replaced). The nexthop can either be a new group or a single nexthop. The notification is sent after the nexthop is inserted into the red-black tree, as listeners might need to call back into the nexthop code with the nexthop ID in order to mark the nexthop as offloaded. A 'REPLACE' notification is emitted instead of 'ADD' as the distinction between the two is not important for in-kernel listeners. In case the listener is not familiar with the encoded nexthop ID, it can simply treat it as a new one. This is also consistent with the route offload API.

Changes since RFC:
* Reword commit message

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Allow setting "offload" and "trap" indications on nexthops  (Ido Schimmel)

Add a function that can be called by device drivers to set "offload" or "trap" indication on nexthops following nexthop notifications.

Changes since RFC:
* s/nexthop_hw_flags_set/nexthop_set_hw_flags/

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

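A sketch of how a driver might call the new helper from its nexthop notifier block; the event handling is simplified and the driver-side names (my_drv_nexthop_event, the hardware-programming step) are illustrative:

static int my_drv_nexthop_event(struct notifier_block *nb,
				unsigned long event, void *ptr)
{
	struct nh_notifier_info *info = ptr;

	if (event != NEXTHOP_EVENT_REPLACE)
		return NOTIFY_DONE;

	/* Program info->id to hardware here; on success mark the nexthop
	 * as offloaded, otherwise mark it as trapping packets to the CPU. */
	nexthop_set_hw_flags(info->net, info->id, true /* offload */,
			     false /* trap */);
	return NOTIFY_OK;
}
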
2020-11-06  rtnetlink: Add RTNH_F_TRAP flag  (Ido Schimmel)

The flag indicates to user space that the nexthop is not programmed to forward packets in hardware, but rather to trap them to the CPU. This is needed, for example, when the MAC of the nexthop neighbour is not resolved and packets should reach the CPU to trigger neighbour resolution. The flag will be used in subsequent patches by netdevsim to test nexthop object programming to device drivers, and in the future by mlxsw as well.

Changes since RFC:
* Reword commit message

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: vxlan: Convert to new notification info  (Ido Schimmel)

Convert the sole listener of the nexthop notification chain (the VXLAN driver) to the new notification info.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Prepare new notification info  (Ido Schimmel)

Prepare the new notification information so that it can be passed to listeners in the next patch.

Changes since RFC:
* Add a blank line in __nh_notifier_single_info_init()

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-06  nexthop: Pass extack to nexthop notifier  (Ido Schimmel)

The next patch will add extack to the notification info. This allows listeners to veto notifications and communicate the reason to user space.

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-04  Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next  (Jakub Kicinski)

Pablo Neira Ayuso says:

====================
Netfilter updates for net-next

1) Move existing bridge packet reject infra to nf_reject_{ipv4,ipv6}.c, from Jose M. Guisado.
2) Consolidate nft_reject_inet initialization and dump, also from Jose.
3) Add the netdev reject action, from Jose.
4) Allow to combine the exist flag and the destroy command in ipset, from Jozsef Kadlecsik.
5) Expose bucket size parameter for hashtables, also from Jozsef.
6) Expose the init value for reproducible ipset listings, from Jozsef.
7) Use __printf attribute in nft_request_module, from Andrew Lunn.
8) Allow to use reject from the inet ingress chain.

* git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next:
  netfilter: nft_reject_inet: allow to use reject from inet ingress
  netfilter: nftables: Add __printf() attribute
  netfilter: ipset: Expose the initval hash parameter to userspace
  netfilter: ipset: Add bucketsize parameter to all hash types
  netfilter: ipset: Support the -exist flag with the destroy command
  netfilter: nft_reject: add reject verdict support for netdev
  netfilter: nft_reject: unify reject init and dump into nft_reject
  netfilter: nf_reject: add reject skbuff creation helpers
====================

Link: https://lore.kernel.org/r/20201104141149.30082-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-04  tcp: propagate MPTCP skb extensions on xmit splits  (Paolo Abeni)

When the TCP stack splits a packet on the write queue, the tail half currently loses the associated skb extensions, and will not carry the DSM on the wire. The above does not cause functional problems and is allowed by the RFC, but it interacts badly with GRO and RX coalescing, as possible candidates for aggregation will carry different TCP options. This change tries to improve the MPTCP behavior, propagating the skb extensions on split. Additionally, we must prevent the MPTCP stack from updating the mapping after the split occurs: that would both violate the RFC and fool the reader.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-04  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec  (Jakub Kicinski)

Steffen Klassert says:

====================
1) Fix packet receiving of standard IP tunnels when the xfrm_interface module is installed. From Xin Long.
2) Fix a race condition between spi allocating and hash list resizing. From zhuoliang zhang.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  tcp: avoid slow start during fast recovery on new losses  (Yuchung Cheng)

During TCP fast recovery, the congestion control in charge is by default Proportional Rate Reduction (PRR), unless the congestion control module specifies otherwise (e.g. BBR).

Previously, when tcp_packets_in_flight() is below snd_ssthresh, PRR would slow start upon receiving an ACK that
1) cumulatively acknowledges retransmitted data, and
2) does not detect further lost retransmissions.
Such conditions indicate the repair is in good steady progress after the first round trip of recovery. Otherwise PRR adopts the packet conservation principle to send only the amount that was newly delivered (indicated by this ACK).

This patch generalizes the previous design principle to also include newly sent data besides retransmissions: as long as the delivery is making good progress, both retransmissions and new data should be accounted for to make PRR more cautious in slow starting.

Suggested-by: Matt Mathis <mattmathis@google.com>
Suggested-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20201031013412.1973112-1-ycheng@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-01  netfilter: nft_reject_inet: allow to use reject from inet ingress  (Pablo Neira Ayuso)

Enhance validation to support reject from inet ingress chains.

Note that reject from inet ingress and netdev ingress differ. Reject packets from inet ingress are sent through ip_local_out() since inet reject emulates the IP layer receive path, so the reject packet follows the classic IP output and postrouting paths. The reject action from netdev ingress assumes the packet has not yet entered the IP layer, so the reject packet is sent through dev_queue_xmit(); therefore, reject packets from netdev ingress do not follow the classic IP output and postrouting paths.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>

2020-10-31  Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf  (Jakub Kicinski)

Pablo Neira Ayuso says:

====================
Netfilter fixes for net

The following patchset contains Netfilter fixes for net:

1) Incorrect netlink report logic in flowtable and genID.
2) Add a selftest to check that wireguard passes the right sk to ip_route_me_harder, from Jason A. Donenfeld.
3) Pass the actual sk to ip_route_me_harder(), also from Jason.
4) Missing expression validation of updates via nft --check.
5) Update byte and packet counters regardless of whether they match, from Stefano Brivio.
====================

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  ip_tunnel: fix over-mtu packet send fail without TUNNEL_DONT_FRAGMENT flags  (wenxu)

Tunnel devices such as vxlan, bareudp and geneve in lwt mode set the outer DF bit based only on TUNNEL_DONT_FRAGMENT. This was also the behavior for the gre device before it switched to ip_md_tunnel_xmit in commit 962924fa2b7a ("ip_gre: Refactor collect metatdata mode tunnel xmit to ip_md_tunnel_xmit"). Transmitting ip_gre in lwt mode through ip_md_tunnel_xmit changed this rule and created a discrepancy in DF handling between the different tunnels. ip_md_tunnel_xmit should follow the same rule as the other tunnels.

Fixes: cfc7381b3002 ("ip_tunnel: add collect_md mode to IPIP tunnel")
Signed-off-by: wenxu <wenxu@ucloud.cn>
Link: https://lore.kernel.org/r/1604028728-31100-1-git-send-email-wenxu@ucloud.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

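A sketch of the rule the other lwt tunnels apply when building the outer IPv4 header, which is what ip_md_tunnel_xmit() is expected to follow; this is a paraphrase under that assumption, not a verbatim excerpt of the patch:

	struct ip_tunnel_key *key = &tun_info->key;
	__be16 df = 0;

	/* Only set the outer Don't Fragment bit when the tunnel metadata
	 * explicitly requests it. */
	if (key->tun_flags & TUNNEL_DONT_FRAGMENT)
		df = htons(IP_DF);
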
2020-10-31  netfilter: nf_reject: add reject skbuff creation helpers  (Jose M. Guisado Gomez)

Add reject skbuff creation helper functions to the ipv4/6 nf_reject infrastructure, and use them for the reject verdict in the bridge family. They can be reused by all the families that support reject and do not inject the reject packet through ip local out.

Signed-off-by: Jose M. Guisado Gomez <guigom@riseup.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>

2020-10-30  udp: support sctp over udp in skb_udp_tunnel_segment  (Xin Long)

For the GSO of SCTP-over-UDP packets, sctp_gso_segment() will be called in skb_udp_tunnel_segment(), so we need to set transport_header to the SCTP header. As none of the current HW can handle both the CRC checksum and the UDP checksum at the same time, the CRC checksum has to be done in sctp_gso_segment() by removing the NETIF_F_SCTP_CRC flag from the features. Meanwhile, if the HW can't do the UDP checksum, csum and csum_start have to be set correctly, and the UDP checksum will be done in __skb_udp_tunnel_segment() by calling gso_make_checksum().

Thanks to Paolo, Marcelo and Guillaume for helping with this one.

v1->v2:
- no change.
v2->v3:
- remove the NETIF_F_SCTP_CRC flag from the features.
- set csum and csum_start in sctp_gso_make_checksum().

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  udp: check udp sock encap_type in __udp_lib_err  (Xin Long)

There is a chance that __udp4/6_lib_lookup() returns a UDP encap sock in __udp_lib_err(); for example, the UDP encap listening sock may use the same port as the remote encap port. In that case it should go to __udp4/6_lib_err_encap() for more validation before processing the ICMP packet. This patch checks encap_type in __udp_lib_err() to trigger the further validation for an encap sock.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: ipv4: Fix some kerneldoc warnings in TCP Low Priority  (Andrew Lunn)

net//ipv4/tcp_lp.c:120: warning: Function parameter or member 'sk' not described in 'tcp_lp_cong_avoid'
net//ipv4/tcp_lp.c:135: warning: Function parameter or member 'sk' not described in 'tcp_lp_remote_hz_estimator'
net//ipv4/tcp_lp.c:188: warning: Function parameter or member 'sk' not described in 'tcp_lp_owd_calculator'
net//ipv4/tcp_lp.c:222: warning: Function parameter or member 'rtt' not described in 'tcp_lp_rtt_sample'
net//ipv4/tcp_lp.c:222: warning: Function parameter or member 'sk' not described in 'tcp_lp_rtt_sample'
net//ipv4/tcp_lp.c:265: warning: Function parameter or member 'sk' not described in 'tcp_lp_pkts_acked'
net//ipv4/tcp_lp.c:97: warning: Function parameter or member 'sk' not described in 'tcp_lp_init'

There are still a few kerneldoc warnings after this fix.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201028012703.931632-1-andrew@lunn.ch
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

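Warnings of this kind are silenced by adding the missing @param lines to the kerneldoc comments; a generic example of the expected form (the description text here is illustrative, not the exact wording added to tcp_lp.c):

/**
 * tcp_lp_init - initialize the TCP-LP private state for a connection
 * @sk: socket whose congestion control state is being initialized
 *
 * One-line summary of what the function does goes here.
 */
static void tcp_lp_init(struct sock *sk)
{
	/* ... */
}
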
2020-10-30  netfilter: use actual socket sk rather than skb sk when routing harder  (Jason A. Donenfeld)

If netfilter changes the packet mark when mangling, the packet is rerouted using the route_me_harder set of functions. Prior to this commit, there's one big difference between route_me_harder and the ordinary initial routing functions, described in the comment above __ip_queue_xmit():

  /* Note: skb->sk can be different from sk, in case of tunnels */
  int __ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl,

That function goes on to correctly make use of sk->sk_bound_dev_if, rather than skb->sk->sk_bound_dev_if. And indeed the comment is true: a tunnel will receive a packet in ndo_start_xmit with an initial skb->sk. It will make some transformations to that packet, and then it will send the encapsulated packet out of a *new* socket. That new socket will basically always have a different sk_bound_dev_if (otherwise there'd be a routing loop). So for the purposes of routing the encapsulated packet, the routing information as it pertains to the socket should come from that socket's sk, rather than the packet's original skb->sk. For that reason __ip_queue_xmit() and related functions all do the right thing.

One might argue that all tunnels should just call skb_orphan(skb) before transmitting the encapsulated packet into the new socket. But tunnels do *not* do this -- and this is wisely avoided in skb_scrub_packet() too -- because features like TSQ rely on skb->destructor() being called when that buffer space is truly available again. Calling skb_orphan(skb) too early would result in buffers filling up unnecessarily and accounting info being all wrong. Instead, additional routing must take into account the new sk, just as __ip_queue_xmit() notes.

So, this commit addresses the problem by fishing the correct sk out of state->sk -- it's already set properly in the call to nf_hook() in __ip_local_out(), which receives the sk as part of its normal functionality. So we make sure to plumb state->sk through the various route_me_harder functions, and then make correct use of it following the example of __ip_queue_xmit().

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>

2020-10-23  tcp: Prevent low rmem stalls with SO_RCVLOWAT.  (Arjun Roy)

With SO_RCVLOWAT, under memory pressure, it is possible to enter a state where:

1. We have not received enough bytes to satisfy SO_RCVLOWAT.
2. We have not entered buffer pressure (see tcp_rmem_pressure()).
3. But, we do not have enough buffer space to accept more packets.

In this case, we advertise 0 rwnd (due to #3) but the application does not drain the receive queue (no wakeup because of #1 and #2) so the flow stalls.

Modify the heuristic for SO_RCVLOWAT so that, if we are advertising rwnd<=rcv_mss, force a wakeup to prevent a stall. Without this patch, setting tcp_rmem to 6143 and disabling TCP autotune causes a stalled flow. With this patch, no stall occurs. This is with RPC-style traffic with large messages.

Fixes: 03f45c883c6f ("tcp: avoid extra wakeups for SO_RCVLOWAT users")
Signed-off-by: Arjun Roy <arjunroy@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20201023184709.217614-1-arjunroy.kdev@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

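One way to express the heuristic described above, sketched around tcp_data_ready(); the exact condition in the merged patch may be arranged differently, so treat this as an illustration of the idea rather than the final code:

void tcp_data_ready(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);
	int avail = tp->rcv_nxt - tp->copied_seq;

	/* Normally suppress the wakeup until SO_RCVLOWAT is satisfied,
	 * but only while we can still advertise a useful window; once the
	 * advertised window shrinks to rcv_mss or less, wake the reader
	 * so it drains the queue and the window can reopen. */
	if (avail < sk->sk_rcvlowat && !tcp_rmem_pressure(sk) &&
	    !sock_flag(sk, SOCK_DONE) &&
	    tcp_receive_window(tp) > inet_csk(sk)->icsk_ack.rcv_mss)
		return;

	sk->sk_data_ready(sk);
}
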
2020-10-22  tcp: fix to update snd_wl1 in bulk receiver fast path  (Neal Cardwell)

In the header prediction fast path for a bulk data receiver, if no data is newly acknowledged then we do not call tcp_ack() and do not call tcp_ack_update_window(). This means that a bulk receiver that receives large amounts of data can have the incoming sequence numbers wrap, so that the check in tcp_may_update_window fails:

  after(ack_seq, tp->snd_wl1)

If the incoming receive windows are zero in this state, and then the connection that was a bulk data receiver later wants to send data, that connection can find itself persistently rejecting the window updates in incoming ACKs. This means the connection can persistently fail to discover that the receive window has opened, which in turn means that the connection is unable to send anything, and the connection's sending process can get permanently "stuck".

The fix is to update snd_wl1 in the header prediction fast path for a bulk data receiver, so that it keeps up and does not see wrapping problems.

This fix is based on a very nice and thorough analysis and diagnosis by Apollon Oikonomopoulos (see link below).

This is a stable candidate but there is no Fixes tag here since the bug predates current git history. Just for fun: looks like the bug dates back to when header prediction was added in Linux v2.1.8 in Nov 1996. In that version tcp_rcv_established() was added, and the code only updates snd_wl1 in tcp_ack(), and in the new "Bulk data transfer: receiver" code path it does not call tcp_ack(). This fix seems to apply cleanly at least as far back as v3.2.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Reported-by: Apollon Oikonomopoulos <apoikos@dmesg.gr>
Tested-by: Apollon Oikonomopoulos <apoikos@dmesg.gr>
Link: https://www.spinics.net/lists/netdev/msg692430.html
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20201022143331.1887495-1-ncardwell.kernel@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

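The fix amounts to keeping snd_wl1 current in the "Bulk data transfer: receiver" branch of tcp_rcv_established(); a sketch of the relevant one-liner, with the surrounding context paraphrased rather than quoted as a diff:

	/* Bulk data transfer: receiver (header prediction fast path) */
	tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);

	/* Keep snd_wl1 advancing so that tcp_may_update_window() does not
	 * later see a wrapped comparison and reject window updates. */
	tcp_update_wl(tp, TCP_SKB_CB(skb)->seq);
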
2020-10-19  nexthop: Fix performance regression in nexthop deletion  (Ido Schimmel)

While insertion of 16k nexthops all using the same netdev ('dummy10') takes less than a second, deletion takes about 130 seconds:

  # time -p ip -b nexthop.batch
  real 0.29
  user 0.01
  sys 0.15

  # time -p ip link set dev dummy10 down
  real 131.03
  user 0.06
  sys 0.52

This is because of repeated calls to synchronize_rcu() whenever a nexthop is removed from a nexthop group:

  # /usr/share/bcc/tools/offcputime -p `pgrep -nx ip` -K
  ...
  b'finish_task_switch'
  b'schedule'
  b'schedule_timeout'
  b'wait_for_completion'
  b'__wait_rcu_gp'
  b'synchronize_rcu.part.0'
  b'synchronize_rcu'
  b'__remove_nexthop'
  b'remove_nexthop'
  b'nexthop_flush_dev'
  b'nh_netdev_event'
  b'raw_notifier_call_chain'
  b'call_netdevice_notifiers_info'
  b'__dev_notify_flags'
  b'dev_change_flags'
  b'do_setlink'
  b'__rtnl_newlink'
  b'rtnl_newlink'
  b'rtnetlink_rcv_msg'
  b'netlink_rcv_skb'
  b'rtnetlink_rcv'
  b'netlink_unicast'
  b'netlink_sendmsg'
  b'____sys_sendmsg'
  b'___sys_sendmsg'
  b'__sys_sendmsg'
  b'__x64_sys_sendmsg'
  b'do_syscall_64'
  b'entry_SYSCALL_64_after_hwframe'
  -                ip (277)
      126554955

Since nexthops are always deleted under RTNL, synchronize_net() can be used instead. It will call synchronize_rcu_expedited(), which only blocks for several microseconds as opposed to multiple milliseconds like synchronize_rcu().

With this patch, deletion of 16k nexthops takes less than a second:

  # time -p ip link set dev dummy10 down
  real 0.12
  user 0.00
  sys 0.04

Tested with fib_nexthops.sh, which includes torture tests that prompted the initial change:

  # ./fib_nexthops.sh
  ...
  Tests passed: 134
  Tests failed: 0

Fixes: 90f33bffa382 ("nexthops: don't modify published nexthop groups")
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com>
Link: https://lore.kernel.org/r/20201016172914.643282-1-idosch@idosch.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

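The change itself is essentially a one-line substitution in the nexthop removal path; a paraphrased before/after sketch (not a verbatim diff of the affected functions):

	/* Before: waits for a full RCU grace period for every nexthop
	 * removed from a group, which dominates bulk deletion time. */
	synchronize_rcu();

	/* After: nexthops are always deleted under RTNL, so the expedited
	 * grace period used by synchronize_net() is safe and far cheaper. */
	synchronize_net();
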
2020-10-16  icmp: randomize the global rate limiter  (Eric Dumazet)

Keyu Man reported that the ICMP rate limiter could be used by attackers to get useful signal. Details will be provided in an upcoming academic publication. Our solution is to add some noise, so that the attackers no longer can get help from the predictable token bucket limiter.

Fixes: 4cdf507d5452 ("icmp: add a global rate limitation")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Keyu Man <kman001@ucr.edu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

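A sketch of the kind of noise injection described, in the spirit of the global token bucket in icmp_global_allow(); the placement and the exact amount of randomness in the merged patch may differ, so this is an illustration only:

	/* Refill the global token bucket as before ... */
	credit = min_t(u32, icmp_global.credit + incr,
		       sysctl_icmp_msgs_burst);

	if (credit) {
		/* Consume a small random number of tokens instead of a
		 * fixed one, so an attacker cannot infer the bucket state
		 * from precisely timed probes. */
		credit = max_t(int, credit - prandom_u32_max(3), 0);
		rc = true;
	}
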