2018-10-15  nfp: flower: fix multiple keys per pedit action  (Pieter Jansen van Vuuren)
Previously we only allowed a single header key per pedit action to change the header. This used to result in the last header key in the pedit action overwriting the previous ones. We now keep track of them and allow multiple header keys per pedit action. Fixes: c0b1bd9a8b8a ("nfp: add set ipv4 header action flower offload") Fixes: 354b82bb320e ("nfp: add set ipv6 source and destination address") Fixes: f8b7b0a6b113 ("nfp: add set tcp and udp header action flower offload") Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
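The accumulation described above boils down to merging each key's value/mask pair into a running value and mask instead of letting the last key win. A minimal userspace C sketch of that merge logic; the function name and the 32-bit word granularity are assumptions for illustration, not the driver's actual structures:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Merge one pedit key into the accumulated value/mask for a header word.
 * Bits written by earlier keys are kept unless the new key's mask covers
 * them, so several keys can each rewrite a different part of the same
 * 32-bit field without clobbering one another. */
static void pedit_merge_key(uint32_t *acc_val, uint32_t *acc_mask,
                            uint32_t key_val, uint32_t key_mask)
{
    *acc_val = (*acc_val & ~key_mask) | (key_val & key_mask);
    *acc_mask |= key_mask;
}

int main(void)
{
    uint32_t val = 0, mask = 0;

    pedit_merge_key(&val, &mask, 0x0000c0a8, 0x0000ffff); /* low 16 bits */
    pedit_merge_key(&val, &mask, 0x01020000, 0xffff0000); /* high 16 bits */
    /* prints value=0x0102c0a8 mask=0xffffffff */
    printf("value=0x%08" PRIx32 " mask=0x%08" PRIx32 "\n", val, mask);
    return 0;
}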
2018-10-15  nfp: flower: fix pedit set actions for multiple partial masks  (Pieter Jansen van Vuuren)
Previously we did not correctly change headers when using multiple pedit actions with partial masks. We now take this into account and no longer just commit the last pedit action. Fixes: c0b1bd9a8b8a ("nfp: add set ipv4 header action flower offload") Signed-off-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  rxrpc: Fix a missing rxrpc_put_peer() in the error_report handler  (David Howells)
Fix a missing call to rxrpc_put_peer() on the main path through the rxrpc_error_report() function. This manifests itself as a ref leak whenever an ICMP packet or other error comes in. In commit f334430316e7, the hand-off of the ref to a work item was removed and was not replaced with a put. Fixes: f334430316e7 ("rxrpc: Fix error distribution") Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  net: phy: merge phy_start_aneg and phy_start_aneg_priv  (Heiner Kallweit)
After commit 9f2959b6b52d ("net: phy: improve handling delayed work") the sync parameter isn't needed any longer in phy_start_aneg_priv(). This allows us to merge phy_start_aneg() and phy_start_aneg_priv(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  hv_netvsc: fix vf serial matching with pci slot info  (Haiyang Zhang)
The VF device's serial number is saved as a string in the PCI slot's kobj name, not in slot->number. This patch corrects the netvsc driver, so the VF device can be successfully paired with the synthetic NIC. Fixes: 00d7ddba1143 ("hv_netvsc: pair VF based on serial number") Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
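Because the serial lives in the slot's kobj name as a decimal string, matching means parsing that string rather than comparing slot->number. A userspace sketch of the comparison (strtoul stands in for the kernel's string parsing; all names here are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Return 1 if the slot name (a decimal string such as "2") encodes the
 * same serial number that the synthetic NIC advertised for its VF. */
static int slot_matches_serial(const char *slot_name, unsigned long vf_serial)
{
    char *end;
    unsigned long slot_serial = strtoul(slot_name, &end, 10);

    if (end == slot_name || *end != '\0')
        return 0;               /* not a plain number: no match */
    return slot_serial == vf_serial;
}

int main(void)
{
    printf("%d\n", slot_matches_serial("2", 2));   /* 1 */
    printf("%d\n", slot_matches_serial("10", 2));  /* 0 */
    return 0;
}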
2018-10-15  Merge branch 'tcp-second-round-for-EDT-conversion'  (David S. Miller)
Eric Dumazet says: ==================== tcp: second round for EDT conversion The first round of EDT patches left the TCP stack in a non optimal state:
- High speed flows suffered from loss of performance, addressed by the first patch of this series.
- Second patch brings pacing to the current state of networking, since we now reach ~100 Gbit on a single TCP flow.
- Third patch implements a mitigation for scheduling delays, like the one we did in sch_fq in the past.
- Fourth patch removes one special case in sch_fq for ACK packets.
- Fifth patch removes a serious performance cost for TCP internal pacing. We should setup the high resolution timer only if really needed.
- Sixth patch fixes a typo in BBR.
- Last patch is one minor change in cdg congestion control.
Neal Cardwell also has a patch series fixing BBR after EDT adoption. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  tcp: cdg: use tcp high resolution clock cache  (Eric Dumazet)
We store in tcp socket a cache of most recent high resolution clock, there is no need to call local_clock() again, since this cache is good enough. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  tcp_bbr: fix typo in bbr_pacing_margin_percent  (Neal Cardwell)
There was a typo in this parameter name. Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  tcp: optimize tcp internal pacing  (Eric Dumazet)
When TCP implements its own pacing (when no fq packet scheduler is used), it arms a high resolution timer after a packet is sent. But in many cases (like TCP_RR kind of workloads), this high resolution timer expires before the application attempts to write the following packet. This overhead also happens when the flow is ACK clocked and cwnd limited instead of being limited by the pacing rate. This leads to extra overhead (a high number of IRQs). Now that tcp_wstamp_ns is reserved for the pacing timer only (after commit "tcp: do not change tcp_wstamp_ns in tcp_mstamp_refresh"), we can set up the timer only when a packet is about to be sent, and only if tcp_wstamp_ns is in the future. This leads to a ~10% performance increase in TCP_RR workloads. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
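The decision can be reduced to a single comparison at transmit time: arm the high resolution timer only if the pacing deadline (tcp_wstamp_ns) is still ahead of the clock. A minimal standalone sketch of that check, with hypothetical names rather than the kernel's:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Decide whether a pacing timer is needed for the packet about to be sent.
 * If the pacing deadline already lies in the past, send immediately and
 * skip the hrtimer entirely (the common case for TCP_RR style workloads). */
static bool pacing_timer_needed(uint64_t wstamp_ns, uint64_t clock_ns,
                                uint64_t *delay_ns)
{
    if (wstamp_ns <= clock_ns)
        return false;
    *delay_ns = wstamp_ns - clock_ns;
    return true;
}

int main(void)
{
    uint64_t delay;

    if (pacing_timer_needed(1000500, 1000000, &delay))
        printf("arm timer for %llu ns\n", (unsigned long long)delay);
    if (!pacing_timer_needed(1000000, 1000500, &delay))
        printf("deadline passed, send now\n");
    return 0;
}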
2018-10-15  net_sched: sch_fq: no longer use skb_is_tcp_pure_ack()  (Eric Dumazet)
With the new EDT model, sch_fq no longer has to special case TCP pure acks, since their skb->tstamp will allow them being sent without pacing delay. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  tcp: mitigate scheduling jitter in EDT pacing model  (Eric Dumazet)
In commit fefa569a9d4b ("net_sched: sch_fq: account for schedule/timers drifts") we added a mitigation for scheduling jitter in the fq packet scheduler. This patch does the same in the TCP stack, now that it is using the EDT model. Note that this mitigation is valid for both external (fq packet scheduler) and internal TCP pacing. This uses the same strategy as the above commit, allowing a time credit of half the packet currently sent. Consider the following case: an skb is sent after an idle period of 300 usec, and its air-time (skb->len/pacing_rate) is 500 usec. Instead of setting the pacing timer to now+500 usec, it will use now+min(500/2, 300) -> now+250 usec. This is like having a token bucket with a depth of half an skb. Tested:
tc qdisc replace dev eth0 root pfifo_fast
Before: netperf -P0 -H remote -- -q 1000000000 # 8000Mbit
540000 262144 262144 10.00 7710.43
After: netperf -P0 -H remote -- -q 1000000000 # 8000 Mbit
540000 262144 262144 10.00 7999.75 # Much closer to 8000Mbit target
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
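The timer arithmetic above can be written out directly: the pacing deadline advances by the packet's air time minus a credit of min(air_time/2, idle_gap). A small standalone sketch using the numbers from the example; the names are illustrative, not the kernel's:

#include <stdint.h>
#include <stdio.h>

static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }

/* Next pacing deadline with the jitter credit applied: instead of
 * now + air_time, allow a credit of min(air_time / 2, idle_gap). */
static uint64_t next_deadline_ns(uint64_t now_ns, uint64_t air_time_ns,
                                 uint64_t idle_gap_ns)
{
    uint64_t credit = min_u64(air_time_ns / 2, idle_gap_ns);

    return now_ns + air_time_ns - credit;
}

int main(void)
{
    /* 500 usec air time after a 300 usec idle period:
     * credit = min(250, 300) = 250, so the timer fires at now + 250 usec. */
    printf("%llu ns\n",
           (unsigned long long)next_deadline_ns(0, 500000, 300000));
    return 0;
}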
2018-10-15  net: extend sk_pacing_rate to unsigned long  (Eric Dumazet)
sk_pacing_rate was introduced as a u32 field in 2013, effectively limiting per flow pacing to 34Gbit. We believe it is time to allow TCP to pace high speed flows on 64bit hosts, as we now can reach 100Gbit on one TCP flow. This patch adds no cost for 32bit kernels. The tcpi_pacing_rate and tcpi_max_pacing_rate fields were already exported as 64bit, so the iproute2/ss command requires no changes. Unfortunately the SO_MAX_PACING_RATE socket option will stay 32bit and we will need to add a new option to let applications control high pacing rates.
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 1787144 10.246.9.76:49992 10.246.9.77:36741 timer:(on,003ms,0) ino:91863 sk:2 <->
skmem:(r0,rb540000,t66440,tb2363904,f605944,w1822984,o0,bl0,d0) ts sack bbr wscale:8,8 rto:201 rtt:0.057/0.006 mss:1448 rcvmss:536 advmss:1448 cwnd:138 ssthresh:178 bytes_acked:256699822585 segs_out:177279177 segs_in:3916318 data_segs_out:177279175 bbr:(bw:31276.8Mbps,mrtt:0,pacing_gain:1.25,cwnd_gain:2) send 28045.5Mbps lastrcv:73333 pacing_rate 38705.0Mbps delivery_rate 22997.6Mbps busy:73333ms unacked:135 retrans:0/157 rcv_space:14480 notsent:2085120 minrtt:0.013
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
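The 34Gbit figure quoted above follows directly from the width of the old field: a u32 rate in bytes per second saturates at about 4.29 GB/s. A quick standalone arithmetic check:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* sk_pacing_rate is in bytes per second; as a u32 it saturates at
     * UINT32_MAX bytes/s, i.e. roughly 34.4 Gbit/s per flow. */
    double max_bits_per_sec = (double)UINT32_MAX * 8.0;

    printf("u32 pacing cap: %.1f Gbit/s\n", max_bits_per_sec / 1e9);

    /* As an unsigned long on a 64-bit host the same field can express
     * rates far beyond 100 Gbit/s. */
    double rate_100g_bytes = 100e9 / 8.0;
    printf("100 Gbit/s needs %.2e bytes/s\n", rate_100g_bytes);
    return 0;
}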
2018-10-15  tcp: do not change tcp_wstamp_ns in tcp_mstamp_refresh  (Eric Dumazet)
In EDT design, I made the mistake of using tcp_wstamp_ns to store the last tcp_clock_ns() sample and to store the pacing virtual timer. This causes major regressions at high speed flows. Introduce tcp_clock_cache to store last tcp_clock_ns(). This is needed because some arches have slow high-resolution kernel time service. tcp_wstamp_ns is only updated when a packet is sent. Note that we can remove tcp_mstamp in the future since tcp_mstamp is essentially tcp_clock_cache/1000, so the apparent socket size increase is temporary. Fixes: 9799ccb0e984 ("tcp: add tcp_wstamp_ns socket field") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  sctp: use the pmtu from the icmp packet to update transport pathmtu  (Xin Long)
Other than syncing the asoc pmtu from all transports, sctp_assoc_sync_pmtu is also processing a transport's pmtu_pending set by icmp packets. But it's meaningless to use sctp_dst_mtu(t->dst) as the new pmtu for a transport. The right pmtu value should come from the icmp packet; this patch saves it into transport->mtu_info and uses it later when the pmtu sync happens in sctp_sendmsg_to_asoc or sctp_packet_config. Besides, without this patch, as the pmtu can only be updated correctly when receiving an icmp packet and no place is holding the sock lock, it can take a long time if the sock is busy with sending packets. Note that it doesn't process transport->mtu_info in .release_cb(), as there is not enough information there for the pmtu update, such as which asoc or transport it is for, and it is not worth traversing all asocs to check pmtu_pending. So unlike tcp, sctp does this in the tx path, for which mtu_info needs to be atomic_t. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
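The handoff this describes, where the icmp path records the pmtu and the tx path later consumes it under the sock lock, can be sketched with a C11 atomic standing in for transport->mtu_info. All names and structures here are illustrative, not the sctp code:

#include <stdatomic.h>
#include <stdio.h>

struct transport {
    atomic_uint mtu_info;       /* pmtu reported by the last icmp, 0 if none */
    unsigned int pathmtu;
};

/* Called from icmp error handling: just record the value, no sock lock. */
static void icmp_report_pmtu(struct transport *t, unsigned int pmtu)
{
    atomic_store(&t->mtu_info, pmtu);
}

/* Called from the tx path with the sock lock held: consume the pending
 * pmtu, if any, and update the transport's pathmtu from it. */
static void tx_sync_pmtu(struct transport *t)
{
    unsigned int pmtu = atomic_exchange(&t->mtu_info, 0);

    if (pmtu)
        t->pathmtu = pmtu;
}

int main(void)
{
    struct transport t = { .pathmtu = 1500 };

    icmp_report_pmtu(&t, 1400);
    tx_sync_pmtu(&t);
    printf("pathmtu=%u\n", t.pathmtu);  /* 1400 */
    return 0;
}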
2018-10-15  net: bridge: fix a possible memory leak in __vlan_add  (Li RongQing)
After the addition of per-port vlan stats, the vlan stats should be released when we fail to add the vlan. Fixes: 9163a0fc1f0c0 ("net: bridge: add support for per-port vlan stats") CC: bridge@lists.linux-foundation.org cc: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> CC: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: Zhang Yu <zhangyu31@baidu.com> Signed-off-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
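The fix is the usual error-path cleanup: whatever __vlan_add allocated earlier has to be freed again if a later step fails. A generic userspace sketch of that shape, not the bridge code itself:

#include <stdio.h>
#include <stdlib.h>

struct vlan_entry {
    unsigned long long *stats;   /* per-port stats, allocated per vlan */
};

/* Sketch of an add path: allocate the stats first, and make sure they are
 * freed again if a later step of the add fails. */
static int vlan_add(struct vlan_entry *v, int later_step_fails)
{
    v->stats = calloc(64, sizeof(*v->stats));
    if (!v->stats)
        return -1;

    if (later_step_fails) {
        free(v->stats);          /* the cleanup the fix adds */
        v->stats = NULL;
        return -1;
    }
    return 0;
}

int main(void)
{
    struct vlan_entry v = { 0 };

    printf("add ok: %d\n", vlan_add(&v, 0));
    free(v.stats);
    printf("failing add: %d (stats=%p)\n", vlan_add(&v, 1), (void *)v.stats);
    return 0;
}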
2018-10-15  rxrpc: Add /proc/net/rxrpc/peers to display peer list  (David Howells)
Add /proc/net/rxrpc/peers to display the list of peers currently active. Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  net: fec: don't dump RX FIFO register when not available  (Fugang Duan)
Commit db65f35f50e0 ("net: fec: add support of ethtool get_regs") introduced the ethtool "--register-dump" interface to dump all FEC registers. But not all silicon implementations of the Freescale FEC hardware module have the FRBR (FIFO Receive Bound Register) and FRSR (FIFO Receive Start Register) registers, so we should not be trying to dump them on those that don't. To fix it we create a quirk flag, FEC_QUIRK_HAS_RFREG, and check it before dumping those RX FIFO registers. Signed-off-by: Fugang Duan <fugang.duan@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
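The shape of the change is a capability flag checked before touching the optional registers. A small standalone sketch of that gating; the quirk name is taken from the text above, everything else (the structure and offsets) is made up for illustration:

#include <stdint.h>
#include <stdio.h>

#define FEC_QUIRK_HAS_RFREG  (1u << 0)   /* FRBR/FRSR registers exist */

struct fec_dev {
    uint32_t quirks;
    uint32_t regs[64];                   /* pretend register file */
};

/* Dump a register set, skipping the RX FIFO registers on silicon that
 * does not implement them. Offsets here are placeholders. */
static void dump_regs(const struct fec_dev *dev)
{
    printf("reg[0x00]=0x%08x\n", dev->regs[0]);
    if (dev->quirks & FEC_QUIRK_HAS_RFREG) {
        printf("FRBR=0x%08x\n", dev->regs[1]);
        printf("FRSR=0x%08x\n", dev->regs[2]);
    }
}

int main(void)
{
    struct fec_dev with = { .quirks = FEC_QUIRK_HAS_RFREG };
    struct fec_dev without = { .quirks = 0 };

    dump_regs(&with);
    dump_regs(&without);
    return 0;
}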
2018-10-15  fore200e: fix missing unlock on error in bsq_audit()  (Wei Yongjun)
Add the missing unlock before return from function bsq_audit() in the error handling case. Fixes: 1d9d8be91788 ("fore200e: check for dma mapping failures") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  Merge branch 'bnxt_en-Add-support-for-new-57500-chips'  (David S. Miller)
Michael Chan says: ==================== bnxt_en: Add support for new 57500 chips. This patch-set is larger than normal because I wanted a complete series to add basic support for the new 57500 chips. The new chips have the following main differences compared to legacy chips:
1. Requires the PF driver to allocate DMA context memory as a backing store.
2. New NQ (notification queue) for interrupt events.
3. One or more CP rings can be associated with an NQ.
4. 64-bit doorbells.
Most other structures and firmware APIs are compatible with legacy devices with some exceptions. For example, ring groups are no longer used and RSS table format has changed. The patch-set includes the usual firmware spec. update, some refactoring and restructuring, and adding the new code to add basic support for the new class of devices. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add PCI ID for BCM57508 device.  (Michael Chan)
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add new NAPI poll function for 57500 chips.  (Michael Chan)
Add a new poll function that polls for NQ events. If the NQ event is a CQ notification, we locate the CP ring from the cq_handle and call __bnxt_poll_work() to handle RX/TX events on the CP ring. Add a new has_more_work field in struct bnxt_cp_ring_info to indicate that the budget has been reached. __bnxt_poll_cqs_done() is called to update or ARM the CP rings depending on whether the budget has been reached. If the budget has been reached, the next bnxt_poll_p5() call will continue to poll from the CQ rings directly. Otherwise, the NQ will be ARMed for the next IRQ. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Refactor bnxt_poll_work().  (Michael Chan)
Separate the CP ring polling logic in bnxt_poll_work() into 2 separate functions __bnxt_poll_work() and __bnxt_poll_work_done(). Since the logic is separated, we need to add tx_pkts and events fields to struct bnxt_napi to keep track of the events to handle between the 2 functions. We also add had_work_done field to struct bnxt_cp_ring_info to indicate whether some work was performed on the CP ring. This is needed to better support the 57500 chips. We need to poll up to 2 separate CP rings before we update or ARM the CP rings on the 57500 chips. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add coalescing setup for 57500 chips.  (Michael Chan)
On legacy chips, the CP ring may be shared between RX and TX and so only setup the RX coalescing parameters in such a case. On 57500 chips, we always have a dedicated CP ring for TX so we can always set up the TX coalescing parameters in bnxt_hwrm_set_coal(). Also, the min_timer coalescing parameter applies to the NQ on the new chips and a separate firmware call needs to be made to set it up. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Use bnxt_cp_ring_info struct pointer as parameter for RX path.  (Michael Chan)
In the RX code path, we currently use the bnxt_napi struct pointer to identify the associated RX/CP rings. Change it to use the struct bnxt_cp_ring_info pointer instead, since there are now up to 2 CP rings per MSIX. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add RSS support for 57500 chips.  (Michael Chan)
RSS context allocation and RSS indirection table setup are very different on the new chip. Refactor bnxt_setup_vnic() to call 2 different functions to set up RSS for the vnic based on chip type. On the new chip, the number of RSS contexts and the indirection table size depends on the number of RX rings. Each indirection table entry is also different on the new chip since ring groups are no longer used. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Increase RSS context array count and skip ring groups on 57500 chips.  (Michael Chan)
On the new 57500 chips, we need to allocate one RSS context for every 64 RX rings. In previous chips, only one RSS context per vnic is required regardless of the number of RX rings. So increase the max RSS context array count to 8. Hardware ring groups are not used on the new chips. Note that the software ring group structure is still maintained in the driver to keep track of the rings associated with the vnic. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
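The sizing rule above, one RSS context for every 64 RX rings, is a simple round-up. A quick standalone check of the arithmetic (the helper mirrors the kernel's DIV_ROUND_UP idiom; the names are illustrative):

#include <stdio.h>

#define RSS_RINGS_PER_CTX 64

/* Number of RSS contexts needed to cover rx_rings RX rings, at one
 * context per 64 rings, rounded up. */
static int rss_contexts_needed(int rx_rings)
{
    return (rx_rings + RSS_RINGS_PER_CTX - 1) / RSS_RINGS_PER_CTX;
}

int main(void)
{
    printf("%d rings -> %d contexts\n", 60, rss_contexts_needed(60));    /* 1 */
    printf("%d rings -> %d contexts\n", 65, rss_contexts_needed(65));    /* 2 */
    printf("%d rings -> %d contexts\n", 512, rss_contexts_needed(512));  /* 8 */
    return 0;
}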
2018-10-15  bnxt_en: Allocate/Free CP rings for 57500 series chips.  (Michael Chan)
On the new 57500 chips, we allocate/free one CP ring for each RX ring or TX ring separately. Using separate CP rings for RX/TX is an improvement as TX events will no longer be stuck behind RX events. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Modify bnxt_ring_alloc_send_msg() to support 57500 chips.  (Michael Chan)
Firmware ring allocation semantics are slightly different for most ring types on 57500 chips. Allocation/deallocation for NQ rings are also added for the new chips. A CP ring handle is also added so that from the NQ interrupt event, we can locate the CP ring. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add helper functions to get firmware CP ring ID.  (Michael Chan)
On the new 57500 chips, getting the CP ring ID associated with an RX ring or TX ring is different than before. On the legacy chips, we find the associated ring group and look up the CP ring ID. On the 57500 chips, each RX ring and TX ring has a dedicated CP ring even if they share the MSIX. Use these helper functions at appropriate places to get the CP ring ID. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Allocate completion ring structures for 57500 series chips.  (Michael Chan)
On 57500 chips, the original bnxt_cp_ring_info struct now refers to the NQ. bp->cp_nr_rings refer to the number of NQs on 57500 chips. There are now 2 pointers for the CP rings associated with RX and TX rings. Modify bnxt_alloc_cp_rings() and bnxt_free_cp_rings() accordingly. With multiple CP rings per NAPI, we need to add a pointer in bnxt_cp_ring_info struct to point back to the bnxt_napi struct. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Modify the ring reservation functions for 57500 series chips.  (Michael Chan)
The ring reservation functions have to be modified for P5 chips in the following ways:
- bnxt_cp_ring_info structs map to internal NQs as well as CP rings.
- Ring groups are not used.
- 1 CP ring must be available for each RX or TX ring.
- One RSS context must be reserved for every 64 RX rings.
- RFS is currently not supported.
Also, RX AGG rings are only used for jumbo frames, so we need to unconditionally call bnxt_reserve_rings() in __bnxt_open_nic() to see if we need to reserve AGG rings in case the MTU has changed. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Adjust MSIX and ring groups for 57500 series chips.  (Michael Chan)
Store the maximum MSIX capability in PCIe config. space earlier. When we call firmware to query capability, we need to compare the PCIe MSIX max count with the firmware count and use the smaller one as the MSIX count for 57500 (P5) chips. The new chips don't use ring groups. But previous chips do and the existing logic limits the available rings based on resource calculations including ring groups. Setting the max ring groups to the max rx rings will work on the new chips without changing the existing logic. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Re-structure doorbells.  (Michael Chan)
The 57500 series chips have a new 64-bit doorbell format. Use a new bnxt_db_info structure to unify the new and the old 32-bit doorbells. Add a new bnxt_set_db() function to set up the doorbell addresses and doorbell keys ahead of time. Modify and introduce new doorbell helpers to help abstract and unify the old and new doorbells. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add 57500 new chip ID and basic structures.  (Michael Chan)
57500 series is a new chip class (P5) that requires some driver changes in the next several patches. This adds basic chip ID, doorbells, and the notification queue (NQ) structures. Each MSIX is associated with an NQ instead of a CP ring in legacy chips. Each NQ has up to 2 associated CP rings for RX and TX. The same bnxt_cp_ring_info struct will be used for the NQ. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Configure context memory on new devices.  (Michael Chan)
Call firmware to configure the DMA addresses of all context memory pages on new devices requiring context memory. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Check context memory requirements from firmware.  (Michael Chan)
The new devices require host context memory as a backing store. Call firmware to check for context memory requirements and store the parameters. Allocate host pages accordingly. We also need to move the call to bnxt_hwrm_queue_qportcfg() earlier so that all the supported hardware queues and their IDs are known before checking and allocating context memory. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add new flags to setup new page table PTE bits on newer devices.  (Michael Chan)
Newer chips require the PTU_PTE_VALID bit to be set for every page table entry for context memory and rings. Additional bits are also required for page table entries for all rings. Add a flags field to the bnxt_ring_mem_info struct to specify these additional bits to be used when setting up the page tables as needed. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Refactor bnxt_ring_struct.  (Michael Chan)
Move the DMA page table and vmem fields in bnxt_ring_struct to a new bnxt_ring_mem_info struct. This will allow context memory management for a new device to re-use some of the existing infrastructure. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Update interrupt coalescing logic.  (Michael Chan)
The new firmware spec. allows interrupt coalescing parameters, such as maximums, timer units, and supported features, to be queried. Update the driver to make use of the new call to query these parameters and provide the legacy defaults if the call is not available. Replace the hard-coded values with these parameters. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Add maximum extended request length fw message support.  (Michael Chan)
Support the max_ext_req_len field from the HWRM_VER_GET_RESPONSE. If this field is valid and greater than the mailbox size, use the short command format to send firmware messages greater than the mailbox size. Newer devices use this method to send larger messages to the firmware. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
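The selection logic described above reduces to a size check: messages larger than the mailbox go out in the short command format, provided the firmware advertised a usable max_ext_req_len. A hedged sketch of that decision; the parameter names approximate the description above, not the driver's code:

#include <stdbool.h>
#include <stdio.h>

/* Decide whether a firmware message of req_len bytes should be sent using
 * the short command format: only needed once the request no longer fits in
 * the mailbox, and only if the firmware reported extended length support. */
static bool use_short_cmd(unsigned int req_len, unsigned int mailbox_size,
                          unsigned int max_ext_req_len)
{
    bool ext_supported = max_ext_req_len > mailbox_size;

    return ext_supported && req_len > mailbox_size;
}

int main(void)
{
    printf("%d\n", use_short_cmd(128, 256, 4096)); /* 0: fits in the mailbox */
    printf("%d\n", use_short_cmd(512, 256, 4096)); /* 1: needs the short format */
    return 0;
}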
2018-10-15  bnxt_en: Add additional extended port statistics.  (Michael Chan)
Latest firmware spec. has some additional rx extended port stats and new tx extended port stats added. We now need to check the size of the returned rx and tx extended stats and determine how many counters are valid. New counters added include CoS byte and packet counts for rx and tx. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  bnxt_en: Update firmware interface spec. to 1.10.0.3.  (Michael Chan)
Among the new changes are trusted VF support, 200Gbps support, and new API to dump ring information on the new chips. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  qed: fix spelling mistake "Ireelevant" -> "Irrelevant"  (Colin Ian King)
Trivial fix to spelling mistake in DP_INFO message Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  ipv6: mcast: fix a use-after-free in inet6_mc_check  (Eric Dumazet)
syzbot found a use-after-free in inet6_mc_check [1] The problem here is that inet6_mc_check() uses rcu and read_lock(&iml->sflock) So the fact that ip6_mc_leave_src() is called under RTNL and the socket lock does not help us, we need to acquire iml->sflock in write mode. In the future, we should convert all this stuff to RCU. [1] BUG: KASAN: use-after-free in ipv6_addr_equal include/net/ipv6.h:521 [inline] BUG: KASAN: use-after-free in inet6_mc_check+0xae7/0xb40 net/ipv6/mcast.c:649 Read of size 8 at addr ffff8801ce7f2510 by task syz-executor0/22432 CPU: 1 PID: 22432 Comm: syz-executor0 Not tainted 4.19.0-rc7+ #280 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0x1c4/0x2b4 lib/dump_stack.c:113 print_address_description.cold.8+0x9/0x1ff mm/kasan/report.c:256 kasan_report_error mm/kasan/report.c:354 [inline] kasan_report.cold.9+0x242/0x309 mm/kasan/report.c:412 __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:433 ipv6_addr_equal include/net/ipv6.h:521 [inline] inet6_mc_check+0xae7/0xb40 net/ipv6/mcast.c:649 __raw_v6_lookup+0x320/0x3f0 net/ipv6/raw.c:98 ipv6_raw_deliver net/ipv6/raw.c:183 [inline] raw6_local_deliver+0x3d3/0xcb0 net/ipv6/raw.c:240 ip6_input_finish+0x467/0x1aa0 net/ipv6/ip6_input.c:345 NF_HOOK include/linux/netfilter.h:289 [inline] ip6_input+0xe9/0x600 net/ipv6/ip6_input.c:426 ip6_mc_input+0x48a/0xd20 net/ipv6/ip6_input.c:503 dst_input include/net/dst.h:450 [inline] ip6_rcv_finish+0x17a/0x330 net/ipv6/ip6_input.c:76 NF_HOOK include/linux/netfilter.h:289 [inline] ipv6_rcv+0x120/0x640 net/ipv6/ip6_input.c:271 __netif_receive_skb_one_core+0x14d/0x200 net/core/dev.c:4913 __netif_receive_skb+0x2c/0x1e0 net/core/dev.c:5023 netif_receive_skb_internal+0x12c/0x620 net/core/dev.c:5126 napi_frags_finish net/core/dev.c:5664 [inline] napi_gro_frags+0x75a/0xc90 net/core/dev.c:5737 tun_get_user+0x3189/0x4250 drivers/net/tun.c:1923 tun_chr_write_iter+0xb9/0x154 drivers/net/tun.c:1968 call_write_iter include/linux/fs.h:1808 [inline] do_iter_readv_writev+0x8b0/0xa80 fs/read_write.c:680 do_iter_write+0x185/0x5f0 fs/read_write.c:959 vfs_writev+0x1f1/0x360 fs/read_write.c:1004 do_writev+0x11a/0x310 fs/read_write.c:1039 __do_sys_writev fs/read_write.c:1112 [inline] __se_sys_writev fs/read_write.c:1109 [inline] __x64_sys_writev+0x75/0xb0 fs/read_write.c:1109 do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x49/0xbe RIP: 0033:0x457421 Code: 75 14 b8 14 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 34 b5 fb ff c3 48 83 ec 08 e8 1a 2d 00 00 48 89 04 24 b8 14 00 00 00 0f 05 <48> 8b 3c 24 48 89 c2 e8 63 2d 00 00 48 89 d0 48 83 c4 08 48 3d 01 RSP: 002b:00007f2d30ecaba0 EFLAGS: 00000293 ORIG_RAX: 0000000000000014 RAX: ffffffffffffffda RBX: 000000000000003e RCX: 0000000000457421 RDX: 0000000000000001 RSI: 00007f2d30ecabf0 RDI: 00000000000000f0 RBP: 0000000020000500 R08: 00000000000000f0 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000293 R12: 00007f2d30ecb6d4 R13: 00000000004c4890 R14: 00000000004d7b90 R15: 00000000ffffffff Allocated by task 22437: save_stack+0x43/0xd0 mm/kasan/kasan.c:448 set_track mm/kasan/kasan.c:460 [inline] kasan_kmalloc+0xc7/0xe0 mm/kasan/kasan.c:553 __do_kmalloc mm/slab.c:3718 [inline] __kmalloc+0x14e/0x760 mm/slab.c:3727 kmalloc include/linux/slab.h:518 [inline] sock_kmalloc+0x15a/0x1f0 net/core/sock.c:1983 ip6_mc_source+0x14dd/0x1960 net/ipv6/mcast.c:427 do_ipv6_setsockopt.isra.9+0x3afb/0x45d0 net/ipv6/ipv6_sockglue.c:743 
ipv6_setsockopt+0xbd/0x170 net/ipv6/ipv6_sockglue.c:933 rawv6_setsockopt+0x59/0x140 net/ipv6/raw.c:1069 sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3038 __sys_setsockopt+0x1ba/0x3c0 net/socket.c:1902 __do_sys_setsockopt net/socket.c:1913 [inline] __se_sys_setsockopt net/socket.c:1910 [inline] __x64_sys_setsockopt+0xbe/0x150 net/socket.c:1910 do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x49/0xbe Freed by task 22430: save_stack+0x43/0xd0 mm/kasan/kasan.c:448 set_track mm/kasan/kasan.c:460 [inline] __kasan_slab_free+0x102/0x150 mm/kasan/kasan.c:521 kasan_slab_free+0xe/0x10 mm/kasan/kasan.c:528 __cache_free mm/slab.c:3498 [inline] kfree+0xcf/0x230 mm/slab.c:3813 __sock_kfree_s net/core/sock.c:2004 [inline] sock_kfree_s+0x29/0x60 net/core/sock.c:2010 ip6_mc_leave_src+0x11a/0x1d0 net/ipv6/mcast.c:2448 __ipv6_sock_mc_close+0x20b/0x4e0 net/ipv6/mcast.c:310 ipv6_sock_mc_close+0x158/0x1d0 net/ipv6/mcast.c:328 inet6_release+0x40/0x70 net/ipv6/af_inet6.c:452 __sock_release+0xd7/0x250 net/socket.c:579 sock_close+0x19/0x20 net/socket.c:1141 __fput+0x385/0xa30 fs/file_table.c:278 ____fput+0x15/0x20 fs/file_table.c:309 task_work_run+0x1e8/0x2a0 kernel/task_work.c:113 tracehook_notify_resume include/linux/tracehook.h:193 [inline] exit_to_usermode_loop+0x318/0x380 arch/x86/entry/common.c:166 prepare_exit_to_usermode arch/x86/entry/common.c:197 [inline] syscall_return_slowpath arch/x86/entry/common.c:268 [inline] do_syscall_64+0x6be/0x820 arch/x86/entry/common.c:293 entry_SYSCALL_64_after_hwframe+0x49/0xbe The buggy address belongs to the object at ffff8801ce7f2500 which belongs to the cache kmalloc-192 of size 192 The buggy address is located 16 bytes inside of 192-byte region [ffff8801ce7f2500, ffff8801ce7f25c0) The buggy address belongs to the page: page:ffffea000739fc80 count:1 mapcount:0 mapping:ffff8801da800040 index:0x0 flags: 0x2fffc0000000100(slab) raw: 02fffc0000000100 ffffea0006f6e548 ffffea000737b948 ffff8801da800040 raw: 0000000000000000 ffff8801ce7f2000 0000000100000010 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff8801ce7f2400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8801ce7f2480: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc >ffff8801ce7f2500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff8801ce7f2580: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc ffff8801ce7f2600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net>
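The fix implied by the report above is to take iml->sflock in write mode on the leave path before freeing the source filter list, so the read_lock() side in inet6_mc_check() cannot observe freed memory. A userspace analogue of that locking rule with a pthread rwlock, purely illustrative rather than the ipv6 code (build with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_rwlock_t sflock = PTHREAD_RWLOCK_INITIALIZER;
static int *sf_list;                 /* stands in for the source filter list */

/* Reader (the lookup path): takes the lock in read mode while walking. */
static int reader_check(void)
{
    int hit = 0;

    pthread_rwlock_rdlock(&sflock);
    if (sf_list)
        hit = *sf_list;
    pthread_rwlock_unlock(&sflock);
    return hit;
}

/* Writer (the leave path): must take the lock in write mode before freeing,
 * otherwise a concurrent reader can dereference freed memory. */
static void leave_free(void)
{
    pthread_rwlock_wrlock(&sflock);
    free(sf_list);
    sf_list = NULL;
    pthread_rwlock_unlock(&sflock);
}

int main(void)
{
    sf_list = malloc(sizeof(*sf_list));
    *sf_list = 42;
    printf("reader sees %d\n", reader_check());
    leave_free();
    printf("reader sees %d\n", reader_check());
    return 0;
}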
2018-10-15  Merge branch 'selftests-pmtu-Add-test-choice-and-captures'  (David S. Miller)
Stefano Brivio says: ==================== selftests: pmtu: Add test choice and captures This series adds a couple of features useful for debugging: 1/2 allows selecting single tests and 2/2 adds optional traffic captures. Semantics for current invocation of test script are preserved. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  selftests: pmtu: Add optional traffic captures for single tests  (Stefano Brivio)
If --trace is passed as an option and tcpdump is available, capture traffic for all relevant interfaces to per-test pcap files named <test>_<interface>.pcap. Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Reviewed-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  selftests: pmtu: Allow selection of single tests  (Stefano Brivio)
As number of tests is growing, it's quite convenient to allow single tests to be run. Display usage when the script is run with any invalid argument, keep existing semantics when no arguments are passed so that automated runs won't break. Instead of just looping on the list of requested tests, if any, check first that they exist, and go through them in a nested loop to keep the existing way to display test descriptions. Signed-off-by: Stefano Brivio <sbrivio@redhat.com> Reviewed-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  r8169: remove unneeded call to netif_stop_queue in rtl8169_net_suspend  (Heiner Kallweit)
netif_device_detach() stops all tx queues already, so we don't need this call. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  r8169: simplify rtl8169_set_magic_reg  (Heiner Kallweit)
Simplify this function, no functional change intended. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-15  tipc: fix unsafe rcu locking when accessing publication list  (Tung Nguyen)
The binding table's 'cluster_scope' list is rcu protected to handle races between threads changing the list and those traversing the list at the same moment. We have now found that the function named_distribute() uses the regular list_for_each() macro to traverse the said list. Likewise, the function tipc_named_withdraw() is removing items from the same list using the regular list_del() call. When these two functions execute in parallel we see occasional crashes. This commit fixes this by adding the missing _rcu() suffixes. Signed-off-by: Tung Nguyen <tung.q.nguyen@dektech.com.au> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>