path: root/drivers
2022-02-09  net: usb: smsc95xx: add generic selftest support (Oleksij Rempel)
Provide generic selftest support. Tested with LAN9500 and LAN9512. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  net: ethernet: cavium: use div64_u64() instead of do_div() (Wang Qing)
do_div() does a 64-by-32 division. When the divisor is u64, do_div() truncates it to 32 bits, which means it can test as non-zero yet be truncated to zero for the division. This fixes the do_div.cocci warning: do_div() does a 64-by-32 division, please consider using div64_u64 instead. Signed-off-by: Wang Qing <wangqing@vivo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
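For reference, a minimal sketch of the difference using the linux/math64.h helpers (the function and variable names below are illustrative, not the cavium code):

    #include <linux/math64.h>

    static u64 avg_cycles_per_pkt(u64 total_cycles, u64 total_pkts)
    {
            if (!total_pkts)
                    return 0;
            /*
             * do_div(total_cycles, total_pkts) would truncate the u64 divisor
             * to 32 bits: a divisor with only high bits set tests as non-zero
             * here but becomes 0 in the division. div64_u64() keeps the full
             * 64-bit divisor.
             */
            return div64_u64(total_cycles, total_pkts);
    }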
2022-02-09  net: enetc: enetc qos using the CBDR dma alloc function (Po Liu)
Now we can use enetc_cbd_alloc_data_mem() to replace the complicated DMA data allocation method and the basic CBDR memory setup. Signed-off-by: Po Liu <po.liu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  net: enetc: command BD ring data memory alloc as one function alone (Po Liu)
Separate the CBDR data memory allocation into a standalone function. This makes it convenient for other parts of the driver to use, for example the ENETC QoS code. Reported-and-suggested-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Po Liu <po.liu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  net: enetc: allocate CBD ring data memory using DMA coherent methods (Po Liu)
Replace the dma_map_single() streaming DMA mapping with the simpler DMA coherent method dma_alloc_coherent(). The dma_map_single() usage was found to be improper by Tim Gardner; Claudiu Manoil and Jakub Kicinski suggested using dma_alloc_coherent(). Discussion at: https://lore.kernel.org/netdev/AM9PR04MB8397F300DECD3C44D2EBD07796BD9@AM9PR04MB8397.eurprd04.prod.outlook.com/t/ Fixes: 888ae5a3952ba ("net: enetc: add tc flower psfp offload driver") cc: Claudiu Manoil <claudiu.manoil@nxp.com> Reported-by: Tim Gardner <tim.gardner@canonical.com> Signed-off-by: Po Liu <po.liu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
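For context, a minimal sketch of the coherent allocation pattern (the wrapper and its parameters are illustrative, not the enetc code):

    #include <linux/dma-mapping.h>

    static int alloc_cbd_data(struct device *dev, size_t data_size,
                              void **cpu_addr, dma_addr_t *dma_addr)
    {
            /* One call returns both a CPU pointer and a device address;
             * no per-access dma_sync_*()/dma_unmap_*() bookkeeping is needed.
             * Free later with dma_free_coherent(dev, data_size, *cpu_addr, *dma_addr).
             */
            *cpu_addr = dma_alloc_coherent(dev, data_size, dma_addr, GFP_KERNEL);
            return *cpu_addr ? 0 : -ENOMEM;
    }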
2022-02-09  soc: fsl: dpio: read the consumer index from the cache inhibited area (Ioana Ciornei)
Once we added support for driver level software TSO in dpaa2-eth, we observed the following situation: if the EQCR CI (consumer index) is read from the cache-enabled area, we sometimes end up with a computed value of available enqueue entries bigger than the size of the ring. This eventually leads to multiple enqueues of the same FD, which causes the same FD to end up on the Tx confirmation path and the same skb to be freed twice. Just read the consumer index from the cache-inhibited area so that we avoid this situation. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  dpaa2-eth: add support for software TSO (Ioana Ciornei)
This patch adds support for driver level TSO in the dpaa2-eth driver using the TSO API. There is not much to say about this specific implementation. We use the usual tso_build_hdr() and tso_build_data() to create each data segment, and we create an array of S/G FDs where the first S/G entry references the header data and the remaining ones the data portion. For the S/G table buffer we use the same cache of buffers used in the other non-GSO cases - dpaa2_eth_sgt_get() and dpaa2_eth_sgt_recycle(). We cannot keep a DMA coherent buffer for all the TSO headers because the DPAA2 architecture does not work in a ring based fashion, so we just allocate a buffer each time. Even with these limitations we get the following improvement in TCP termination on the LX2160A SoC, on a single A72 core running at 2.2GHz. before: 6.38Gbit/s after: 8.48Gbit/s Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
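For reference, the core TSO API mentioned above is usually driven by a loop of this shape; this is a generic, hedged sketch of the API usage, not the dpaa2-eth implementation:

    #include <linux/skbuff.h>
    #include <net/tso.h>

    static void tso_segment_sketch(struct sk_buff *skb)
    {
            int mss = skb_shinfo(skb)->gso_size;
            int hdr_len, total_len, data_left, size;
            char hdr[256];          /* real drivers use per-segment DMA-able buffers */
            struct tso_t tso;

            hdr_len = tso_start(skb, &tso);
            total_len = skb->len - hdr_len;
            while (total_len > 0) {
                    size = min(total_len, mss);
                    total_len -= size;

                    /* build this segment's Ethernet/IP/TCP header */
                    tso_build_hdr(skb, hdr, &tso, size, total_len == 0);

                    /* reference 'size' bytes of payload, possibly split across
                     * several S/G entries of up to tso.size bytes each */
                    data_left = size;
                    while (data_left > 0) {
                            int sz = min_t(int, tso.size, data_left);

                            /* map tso.data / sz as the next S/G entry here */
                            data_left -= sz;
                            tso_build_data(skb, &tso, sz);
                    }
            }
    }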
2022-02-09  dpaa2-eth: work with an array of FDs (Ioana Ciornei)
Up until now, the __dpaa2_eth_tx function used a single FD on the stack to construct the structure to be enqueued. Since we are now preparing the groundwork for adding support for TSO done in software at the driver level, the same function needs to work with an array of FDs and enqueue as many as the build_*_fd functions create. Make the necessary adjustments in order to do this. These include: keeping an array of FDs in a percpu structure, cleaning up the necessary FDs before populating it, and then retrying the enqueue process until all the generated FDs are enqueued or the maximum number of retries is reached. This patch does not change the fact that only a single FD results from a __dpaa2_eth_tx call; it just makes the changes needed for the next patch. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  dpaa2-eth: use the S/G table cache also for the normal S/G path (Ioana Ciornei)
Instead of allocating memory for an S/G table each time a nonlinear skb is processed, and then freeing it on the Tx confirmation path, use the S/G table cache in order to reuse the memory. For this to work we have to change the size of the cached buffers so that they can hold the maximum number of scatterlist entries. Other than that, each allocate/free call is replaced by a call to the dpaa2_eth_sgt_get/dpaa2_eth_sgt_recycle functions introduced in the previous patch. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  dpaa2-eth: extract the S/G table buffer cache interaction into functions (Ioana Ciornei)
In certain circumstances the dpaa2-eth driver uses a buffer cache for the S/G tables needed in case of an S/G FD. At the moment, the interaction with the cache is open-coded and cannot be reused easily. Add two new functions - dpaa2_eth_sgt_get and dpaa2_eth_sgt_recycle - which help with code reusability. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  dpaa2-eth: allocate a fragment already aligned (Ioana Ciornei)
Instead of allocating memory and then manually aligning it to the desired value, use napi_alloc_frag_align() directly to streamline the process. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
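A small before/after sketch (generic, not the dpaa2-eth code; the alignment must be a power of two):

    #include <linux/skbuff.h>

    static void *get_frag_old(unsigned int size, unsigned int align)
    {
            /* over-allocate, then align the pointer by hand */
            void *buf = napi_alloc_frag(size + align);

            return buf ? PTR_ALIGN(buf, align) : NULL;
    }

    static void *get_frag_new(unsigned int size, unsigned int align)
    {
            /* the helper hands back an already aligned fragment */
            return napi_alloc_frag_align(size, align);
    }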
2022-02-09  dpaa2-eth: rearrange variable declaration in __dpaa2_eth_tx (Ioana Ciornei)
In the next patches we'll be moving things around in the mentioned function and also adding some new variable declarations. Before all this, clean up the variable declaration order. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  octeontx2-pf: PFC config support with DCBx (Hariprasad Kelam)
Data Center Bridging is designed to eliminate packet loss due to queue overflow by adding enhancements to Ethernet, such as priority flow control. This patch adds support for managing Priority Flow Control (PFC) on Octeontx2 and CN10K interfaces.
To enable PFC for all priorities:
    dcb pfc set dev eth0 prio-pfc all:on/off
To enable PFC on selected priorities:
    dcb pfc set dev eth0 prio-pfc 0:on/off 1:on/off ... 7:on/off
With the ntuple commands the user can map a priority to receive queues. On queue overflow the NIX will assert backpressure such that PFC pause frames are generated with the mapped priority. To map priority 7 to queue 1:
    ethtool -U eth0 flow-type ether dst xx:xx:xx:xx:xx:xx vlan 0xe00a m 0x1fff queue 1
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  octeontx2-af: Flow control resource management (Hariprasad Kelam)
The CN10K MAC block (RPM) and the Octeontx2 MAC block (CGX) both support PFC flow control and 802.3x flow control pause frames. Each MAC block supports a maximum of 4 LMACs, and the AF driver assigns the same (MAC, LMAC) pair to a PF and its VFs. As a PF and its VFs share the same (MAC, LMAC) pair, we need resource management to address the scenarios below:
1. Keep PFC and 802.3x pause frames mutually exclusive.
2. Reject a disable flow control request if another PF or VF has enabled it.
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  octeontx2-af: Priority flow control configuration support (Sunil Kumar Kori)
The priority-based flow control (802.1Qbb) mechanism is similar to Ethernet pause frames (802.3x), except that instead of pausing all traffic on a link, PFC allows the user to selectively pause traffic according to its class. The Octeontx2 MAC block (CGX) and the CN10K MAC block (RPM) both support PFC. As the upper layer mbox handler is the same for both MACs, this patch configures PFC by calling the appropriate callbacks. Signed-off-by: Sunil Kumar Kori <skori@marvell.com> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  octeontx2-af: Don't enable Pause frames by default (Hariprasad Kelam)
The current implementation enables 802.3x pause frames by default. As the CGX and RPM blocks also support PFC (priority flow control), instead of the driver enabling one of the two, enable either only upon request from a PF or its VFs. Also add support for disabling pause frames on driver unbind. Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue (David S. Miller)
Tony Nguyen says: ==================== 40GbE Intel Wired LAN Driver Updates 2022-02-08 Joe Damato says: This patch set makes several updates to the i40e driver stats collection and reporting code to help users of i40e get a better sense of how the driver is performing and interacting with the rest of the kernel. These patches include some new stats (like waived and busy) which were inspired by other drivers that track stats using the same nomenclature. The new stats and an existing stat, rx_reuse, are now accessible with ethtool to make harvesting this data more convenient for users. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  Netvsc: Call hv_unmap_memory() in the netvsc_device_remove() (Tianyu Lan)
netvsc_device_remove() calls vunmap(), which should not be called in interrupt context. The current code calls hv_unmap_memory() in free_netvsc_device(), which is an RCU callback and may be called in interrupt context. This triggers BUG_ON(in_interrupt()) in vunmap(). Fix it by moving hv_unmap_memory() to netvsc_device_remove(). Fixes: 846da38de0e8 ("net: netvsc: Add Isolation VM support for netvsc driver") Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-09  Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue (David S. Miller)
Tony Nguyen says: ==================== 1GbE Intel Wired LAN Driver Updates 2022-02-07 Corinna Vinschen says: Fix the kernel warning "Missing unregister, handled but fix driver" when running, e.g., $ ethtool -G eth0 rx 1024 on igc. Remove memset hack from igb and align igb code to igc. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-08  ptp_pch: Remove unused pch_pm_ops (Andy Shevchenko)
The default values for the hooks in driver.pm are NULL. Hence drop the unused pch_pm_ops. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-6-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-08  ptp_pch: Convert to use managed functions pcim_* and devm_* (Andy Shevchenko)
This makes the error handling much simpler than open-coding everything and, in addition, makes the probe function smaller and tidier. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-5-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-08  ptp_pch: Switch to use module_pci_driver() macro (Andy Shevchenko)
Eliminate some boilerplate code by using module_pci_driver() instead of init/exit, and, if needed, moving the salient bits from init into probe. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-4-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
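A generic sketch of what the macro replaces (the driver name, ID table, and callbacks here are illustrative stubs, not the ptp_pch code):

    #include <linux/module.h>
    #include <linux/pci.h>

    static const struct pci_device_id example_ids[] = {
            { PCI_DEVICE(0x1234, 0x5678) },         /* illustrative IDs */
            { }
    };
    MODULE_DEVICE_TABLE(pci, example_ids);

    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            return 0;
    }

    static void example_remove(struct pci_dev *pdev)
    {
    }

    static struct pci_driver example_driver = {
            .name     = "example",
            .id_table = example_ids,
            .probe    = example_probe,
            .remove   = example_remove,
    };

    /* Expands to module_init()/module_exit() wrappers that just call
     * pci_register_driver()/pci_unregister_driver(). */
    module_pci_driver(example_driver);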
2022-02-08  ptp_pch: Use ioread64_hi_lo() / iowrite64_hi_lo() (Andy Shevchenko)
There are already helper functions to do 64-bit I/O on 32-bit machines or buses; thus we don't need to reinvent the wheel. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-3-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
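These helpers come from the io-64-nonatomic headers and build the 64-bit access out of two 32-bit MMIO operations; a hedged sketch (the _lo_hi variants used in the next entry do the same with the low word accessed first):

    #include <linux/io-64-nonatomic-hi-lo.h>

    static u64 read_64bit_reg(void __iomem *reg)
    {
            /* reads the high 32 bits, then the low 32 bits, and combines them */
            return ioread64_hi_lo(reg);
    }

    static void write_64bit_reg(void __iomem *reg, u64 val)
    {
            /* writes the high half first, then the low half */
            iowrite64_hi_lo(val, reg);
    }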
2022-02-08  ptp_pch: Use ioread64_lo_hi() / iowrite64_lo_hi() (Andy Shevchenko)
There are already helper functions to do 64-bit I/O on 32-bit machines or buses; thus we don't need to reinvent the wheel. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20220207210730.75252-2-andriy.shevchenko@linux.intel.com Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-08  ptp_pch: use mac_pton() (Andy Shevchenko)
Use mac_pton() instead of a custom approach. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/20220207210730.75252-1-andriy.shevchenko@linux.intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
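mac_pton() parses the textual "aa:bb:cc:dd:ee:ff" form into a 6-byte binary MAC; a small sketch (the wrapper function is illustrative):

    #include <linux/kernel.h>       /* mac_pton() */
    #include <linux/if_ether.h>     /* ETH_ALEN */

    static int parse_station_addr(const char *buf, u8 mac[ETH_ALEN])
    {
            /* returns false on malformed input, replacing a hand-rolled parser */
            if (!mac_pton(buf, mac))
                    return -EINVAL;
            return 0;
    }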
2022-02-08  bonding: switch bond_net_exit() to batch mode (Eric Dumazet)
cleanup_net() is competing with other rtnl users. Batching bond_net_exit() factorizes all rtnl acquisitions into a single one, giving cleanup_net() a chance to progress much faster, at the cost of holding rtnl a bit longer. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <j.vosburgh@gmail.com> Cc: Veaceslav Falico <vfalico@gmail.com> Cc: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
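The batching relies on the pernet_operations .exit_batch hook, which receives the whole list of namespaces being torn down so the teardown can take rtnl once; a generic, hedged sketch (not the exact bonding code):

    #include <linux/rtnetlink.h>
    #include <net/net_namespace.h>

    static void __net_exit example_net_exit_batch(struct list_head *net_exit_list)
    {
            struct net *net;
            LIST_HEAD(kill_list);

            rtnl_lock();            /* one rtnl acquisition for the whole batch */
            list_for_each_entry(net, net_exit_list, exit_list) {
                    /* collect this namespace's devices onto kill_list here */
            }
            unregister_netdevice_many(&kill_list);
            rtnl_unlock();
    }

    static struct pernet_operations example_net_ops = {
            .exit_batch = example_net_exit_batch,
    };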
2022-02-08  et131x: support arbitrary MAX_SKB_FRAGS (Eric Dumazet)
This NIC does not support TSO, so it is very unlikely it would have to send packets with many fragments. Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220208004855.1887345-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-08  Merge branch 'iwl-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux (Jakub Kicinski)
Nguyen, Anthony L says: ==================== iwl-next Intel Wired LAN Driver Updates 2022-02-07 Dave adds support for ice driver to provide DSCP QoS mappings to irdma driver. [1] https://lore.kernel.org/netdev/20220202191921.1638-1-shiraz.saleem@intel.com/ * 'iwl-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux: ice: add support for DSCP QoS for IDC ==================== Link: https://lore.kernel.org/r/20220207235921.1303522-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-08  i40e: Add a stat for tracking busy rx pages (Joe Damato)
In some cases, pages cannot be reused by i40e because the page is busy. Add a counter for this event. Busy page count is accessible via ethtool. Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-08  i40e: Add a stat for tracking pages waived (Joe Damato)
In some cases, pages cannot be reused because they are not associated with the correct NUMA zone. Knowing how often pages are waived helps users understand the interaction between the driver's memory usage and their system. Pass rx_stats through to i40e_can_reuse_rx_page to allow tracking when pages are waived. The page waive count is accessible via ethtool. Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-08  i40e: Add a stat tracking new RX page allocations (Joe Damato)
Add a counter for new page allocations in the i40e RX path. This stat is accessible with ethtool. Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-08  i40e: Aggregate and export RX page reuse stat (Joe Damato)
RX page reuse was already being tracked by the i40e driver per RX ring. Aggregate the counts and make them accessible via ethtool. Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-08  i40e: Remove rx page reuse double count (Joe Damato)
Page reuse was being tracked from two locations:
- i40e_reuse_rx_page (via i40e_clean_rx_irq), and
- i40e_alloc_mapped_page
Remove the double count and only count reuse from i40e_alloc_mapped_page when the page is about to be reused. Signed-off-by: Joe Damato <jdamato@fastly.com> Tested-by: Dave Switzer <david.switzer@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-07  net: stmmac: optimize locking around PTP clock reads (Yannick Vignon)
Reading the PTP clock is a simple operation requiring only 3 register reads. Under a PREEMPT_RT kernel, protecting those reads by a spin_lock is counter-productive: if the 2nd task preempting the 1st has a higher prio but needs to read time as well, it will require 2 context switches, which will pretty much always be more costly than just disabling preemption for the duration of the reads. Moreover, with the code logic recently added to get_systime(), disabling preemption is not even required anymore: reads and writes just need to be protected from each other, to prevent a clock read while the clock is being updated. Improve the above situation by replacing the PTP spinlock by a rwlock, and using read_lock for PTP clock reads so simultaneous reads do not block each other. Signed-off-by: Yannick Vignon <yannick.vignon@nxp.com> Link: https://lore.kernel.org/r/20220204135545.2770625-1-yannick.vignon@oss.nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
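A generic sketch of the locking pattern described above (not the stmmac code):

    #include <linux/spinlock.h>     /* rwlock_t, DEFINE_RWLOCK() */

    static DEFINE_RWLOCK(ptp_lock);

    static u64 clock_read(void)
    {
            unsigned long flags;
            u64 t = 0;

            /* several readers may hold the lock at once */
            read_lock_irqsave(&ptp_lock, flags);
            /* ... read the three time registers into t ... */
            read_unlock_irqrestore(&ptp_lock, flags);
            return t;
    }

    static void clock_update(void)
    {
            unsigned long flags;

            /* a writer excludes both readers and other writers */
            write_lock_irqsave(&ptp_lock, flags);
            /* ... adjust the clock registers ... */
            write_unlock_irqrestore(&ptp_lock, flags);
    }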
2022-02-07  net: typhoon: include <net/vxlan.h> (Eric Dumazet)
We need this to get vxlan_features_check() definition. Fixes: d2692eee05b8 ("net: typhoon: implement ndo_features_check method") Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220208003502.1799728-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-02-07  igb: refactor XDP registration (Corinna Vinschen)
On changing the RX ring parameters igb uses a hack to avoid a warning when calling xdp_rxq_info_reg via igb_setup_rx_resources. It just clears the struct xdp_rxq_info content. Instead, change this to unregister if we're already registered. Align code to the igc code. Fixes: 9cbc948b5a20c ("igb: add XDP support") Signed-off-by: Corinna Vinschen <vinschen@redhat.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-07  igc: avoid kernel warning when changing RX ring parameters (Corinna Vinschen)
Calling ethtool changing the RX ring parameters like this: $ ethtool -G eth0 rx 1024 on igc triggers kernel warnings like this:
[ 225.198467] ------------[ cut here ]------------
[ 225.198473] Missing unregister, handled but fix driver
[ 225.198485] WARNING: CPU: 7 PID: 959 at net/core/xdp.c:168 xdp_rxq_info_reg+0x79/0xd0
[...]
[ 225.198601] Call Trace:
[ 225.198604] <TASK>
[ 225.198609] igc_setup_rx_resources+0x3f/0xe0 [igc]
[ 225.198617] igc_ethtool_set_ringparam+0x30e/0x450 [igc]
[ 225.198626] ethnl_set_rings+0x18a/0x250
[ 225.198631] genl_family_rcv_msg_doit+0xca/0x110
[ 225.198637] genl_rcv_msg+0xce/0x1c0
[ 225.198640] ? rings_prepare_data+0x60/0x60
[ 225.198644] ? genl_get_cmd+0xd0/0xd0
[ 225.198647] netlink_rcv_skb+0x4e/0xf0
[ 225.198652] genl_rcv+0x24/0x40
[ 225.198655] netlink_unicast+0x20e/0x330
[ 225.198659] netlink_sendmsg+0x23f/0x480
[ 225.198663] sock_sendmsg+0x5b/0x60
[ 225.198667] __sys_sendto+0xf0/0x160
[ 225.198671] ? handle_mm_fault+0xb2/0x280
[ 225.198676] ? do_user_addr_fault+0x1eb/0x690
[ 225.198680] __x64_sys_sendto+0x20/0x30
[ 225.198683] do_syscall_64+0x38/0x90
[ 225.198687] entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 225.198693] RIP: 0033:0x7f7ae38ac3aa
igc_ethtool_set_ringparam() copies the igc_ring structure but neglects to reset the xdp_rxq_info member before calling igc_setup_rx_resources(). This in turn calls xdp_rxq_info_reg() with an already registered xdp_rxq_info. Make sure to unregister the xdp_rxq_info structure first in igc_setup_rx_resources. Fixes: 73f1071c1d29 ("igc: Add support for XDP_TX action") Reported-by: Lennert Buytenhek <buytenh@arista.com> Signed-off-by: Corinna Vinschen <vinschen@redhat.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Dvora Fuxbrumer <dvorax.fuxbrumer@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
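The shape of the fix in both the igb and igc patches is to drop any stale registration before registering again; a hedged sketch:

    #include <net/xdp.h>

    static int ring_reg_xdp_rxq(struct xdp_rxq_info *xdp_rxq,
                                struct net_device *dev, u32 queue_index)
    {
            /* a copied ring structure may still carry a registered xdp_rxq_info;
             * unregister it instead of memset()-ing it away */
            if (xdp_rxq_info_is_reg(xdp_rxq))
                    xdp_rxq_info_unreg(xdp_rxq);

            /* the last argument is the napi_id (0 here for simplicity) */
            return xdp_rxq_info_reg(xdp_rxq, dev, queue_index, 0);
    }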
2022-02-07  net: dsa: mv88e6xxx: Unlock on error in mv88e6xxx_port_bridge_join() (Dan Carpenter)
Call mv88e6xxx_reg_unlock(chip) before returning on this error path. Fixes: 7af4a361a62f ("net: dsa: mv88e6xxx: Improve isolation of standalone ports") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-07  net: dsa: mv88e6xxx: Fix off-by-one in mv88e6185_phylink_get_caps() (Dan Carpenter)
The <= ARRAY_SIZE() needs to be < ARRAY_SIZE() to prevent an out of bounds error. Fixes: d4ebf12bcec4 ("net: dsa: mv88e6xxx: populate supported_interfaces and mac_capabilities") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
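The bug class in a generic example (not the mv88e6xxx code):

    #include <linux/kernel.h>       /* ARRAY_SIZE() */

    static int sum_example(void)
    {
            static const int vals[4] = { 1, 2, 3, 4 };
            int i, sum = 0;

            /* "i <= ARRAY_SIZE(vals)" would also read vals[4], one element past
             * the end; "<" stops at the last valid index, 3 */
            for (i = 0; i < ARRAY_SIZE(vals); i++)
                    sum += vals[i];
            return sum;
    }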
2022-02-07  net: hns3: add support for TX push mode (Yufeng Mo)
For the device that supports the TX push capability, the BD can be directly copied to the device memory. However, due to hardware restrictions, the push mode can be used only when there are no more than two BDs, otherwise, the doorbell mode based on device memory is used. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-07  net: asix: add proper error handling of usb read errors (Pavel Skripkin)
Syzbot once again hit an uninit value in the asix driver. The problem is still the same: asix_read_cmd() reads fewer bytes than were requested by the caller. Since all read requests are performed via asix_read_cmd(), let's catch USB-related errors there and add the __must_check annotation to be sure all callers actually check the return value. So, this patch adds a sanity check inside asix_read_cmd() that simply checks whether fewer bytes than requested were read, and adds the missing error handling of asix_read_cmd() all across the driver code. Fixes: d9fe64e51114 ("net: asix: Add in_pm parameter") Reported-and-tested-by: syzbot+6ca9f7867b77c2d316ac@syzkaller.appspotmail.com Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Tested-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
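A hedged sketch of that kind of checked read wrapper (the function name and the exact usbnet call are illustrative of the pattern, not the asix code):

    #include <linux/usb.h>
    #include <linux/usb/usbnet.h>

    static int __must_check example_read_cmd(struct usbnet *dev, u8 cmd,
                                             u16 value, u16 index,
                                             void *data, int size)
    {
            int ret;

            ret = usbnet_read_cmd(dev, cmd,
                                  USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                                  value, index, data, size);
            if (ret < 0)
                    return ret;
            if (ret != size)        /* short read: report it instead of letting
                                     * callers use uninitialized tail bytes */
                    return -ENODATA;
            return 0;
    }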
2022-02-07  r8169: factor out redundant RTL8168d PHY config functionality to rtl8168d_1_common() (Heiner Kallweit)
rtl8168d_2_hw_phy_config() shares quite some functionality with rtl8168d_1_hw_phy_config(), so let's factor out the common part into a new function, rtl8168d_1_common(). In addition, improve the code a little. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-07  mlxsw: Support FLOW_ACTION_MANGLE for SIP and DIP IPv6 addresses (Danielle Ratson)
Spectrum-2 supports an ACL action SIP_DIP, which allows IPv4 and IPv6 source and destination addresses to be changed. Offload suitable mangles to the IPv6 address change action. Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-07  mlxsw: Support FLOW_ACTION_MANGLE for SIP and DIP IPv4 addresses (Danielle Ratson)
Spectrum-2 supports an ACL action SIP_DIP, which allows IPv4 and IPv6 source and destination addresses to be changed. Offload suitable mangles to the IPv4 address change action. Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-07  mlxsw: core_acl_flex_actions: Add SIP_DIP_ACTION (Danielle Ratson)
Add fields related to SIP_DIP_ACTION, which is used for changing SIP and DIP addresses. Signed-off-by: Danielle Ratson <danieller@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-05  net: typhoon: implement ndo_features_check method (Eric Dumazet)
Instead of disabling TSO at compile time if MAX_SKB_FRAGS > 32, implement the ndo_features_check() method for this driver for more dynamic handling. If an skb has more than 32 frags and is a GSO packet, force software segmentation. Most locally generated packets will use a small number of fragments anyway. For forwarding workloads, we can limit gro_max_size at ingress; we might also implement gro_max_segs if needed. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
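A sketch of such an ndo_features_check implementation (illustrative; the 32-fragment limit mirrors the description above, not the exact typhoon code):

    #include <linux/netdevice.h>
    #include <net/vxlan.h>

    static netdev_features_t example_features_check(struct sk_buff *skb,
                                                    struct net_device *dev,
                                                    netdev_features_t features)
    {
            /* the hardware descriptor format only covers 32 fragments, so
             * force software segmentation for larger GSO packets */
            if (skb_is_gso(skb) && skb_shinfo(skb)->nr_frags > 32)
                    features &= ~NETIF_F_GSO_MASK;

            return vxlan_features_check(skb, features);
    }
    /* hooked up via: .ndo_features_check = example_features_check, */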
2022-02-05  net: sundance: Replace one-element array with non-array object (Gustavo A. R. Silva)
It seems this one-element array is not actually being used as an array of variable size, so we can just replace it with a non-array object of type struct desc_frag and refactor the rest of the code a bit. This helps with the ongoing efforts to globally enable -Warray-bounds and get us closer to being able to tighten the FORTIFY_SOURCE routines on memcpy(). This issue was found with the help of Coccinelle and audited and fixed, manually. [1] https://en.wikipedia.org/wiki/Flexible_array_member [2] https://www.kernel.org/doc/html/v5.16/process/deprecated.html#zero-length-and-one-element-arrays Link: https://github.com/KSPP/linux/issues/79 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-05  bnx2x: Replace one-element array with flexible-array member (Gustavo A. R. Silva)
There is a regular need in the kernel to provide a way to declare having a dynamically sized set of trailing elements in a structure. Kernel code should always use “flexible array members”[1] for these cases. The older style of one-element or zero-length arrays should no longer be used[2]. This helps with the ongoing efforts to globally enable -Warray-bounds and get us closer to being able to tighten the FORTIFY_SOURCE routines on memcpy(). This issue was found with the help of Coccinelle and audited and fixed, manually. [1] https://en.wikipedia.org/wiki/Flexible_array_member [2] https://www.kernel.org/doc/html/v5.16/process/deprecated.html#zero-length-and-one-element-arrays Link: https://github.com/KSPP/linux/issues/79 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
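The general shape of the conversion (generic structs, not the bnx2x ones):

    #include <linux/overflow.h>     /* struct_size() */
    #include <linux/slab.h>

    struct q_stats { u64 bytes; u64 packets; };

    struct stats_old {
            u32 num_queues;
            struct q_stats queues[1];       /* one-element array */
    };

    struct stats_new {
            u32 num_queues;
            struct q_stats queues[];        /* flexible array member */
    };

    static struct stats_new *alloc_stats(u32 n)
    {
            /* struct_size() sizes the header plus n trailing elements,
             * with overflow checking */
            struct stats_new *s = kzalloc(struct_size(s, queues, n), GFP_KERNEL);

            if (s)
                    s->num_queues = n;
            return s;
    }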
2022-02-05  net: mana: Remove unnecessary check of cqe_type in mana_process_rx_cqe() (Haiyang Zhang)
The switch statement already ensures cqe_type == CQE_RX_OKAY at that point. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-02-05  net: mana: Add handling of CQE_RX_TRUNCATED (Haiyang Zhang)
The proper way to drop this kind of CQE is advancing rxq tail without indicating the packet to the upper network layer. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: Dexuan Cui <decui@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>