path: root/drivers/net/ethernet
2023-03-30  smsc911x: remove superfluous variable init  (Wolfram Sang)

phydev is assigned a value right away, no need to initialize it.

Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20230329064414.25028-1-wsa+renesas@sang-engineering.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

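A minimal sketch of the pattern being removed (illustrative only; the
surrounding smsc911x code is elided):

    /* before: superfluous init, value assigned on the next line */
    struct phy_device *phydev = NULL;
    phydev = phy_find_first(pdata->mii_bus);

    /* after: declaration and assignment in one statement */
    struct phy_device *phydev = phy_find_first(pdata->mii_bus);
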
2023-03-29  Merge tag 'mlx5-updates-2023-03-28' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (Jakub Kicinski)

Saeed Mahameed says:

====================
mlx5-updates-2023-03-28

Dragos Tatulea says:

====================
net/mlx5e: RX, Drop page_cache and fully use page_pool

For page allocation on the rx path, the mlx5e driver has been using an
internal page cache in tandem with the page pool. The internal page
cache uses a queue for page recycling which has the issue of head of
queue blocking.

This patch series drops the internal page_cache altogether and uses the
page_pool to implement everything that was done by the page_cache
before:
* Let the page_pool handle dma mapping and unmapping.
* Use fragmented pages with fragment counter instead of tracking via
  page ref.
* Enable skb recycling.

The patch series has the following effects on the rx path:

* Improved performance for the cases when there was low page recycling
  due to head of queue blocking in the internal page_cache. The test
  for this was running a single iperf TCP stream to a rx queue which is
  bound on the same cpu as the application.

    |-------------+--------+--------+------+---------|
    | rq type     | before | after  | unit | diff    |
    |-------------+--------+--------+------+---------|
    | striding rq | 30.1   | 31.4   | Gbps | 4.14 %  |
    | legacy rq   | 30.2   | 33.0   | Gbps | 8.48 %  |
    |-------------+--------+--------+------+---------|

* Small XDP performance degradation. The test was an XDP drop program
  running on a single rx queue with small packets incoming; it looks
  like this:

    |-------------+----------+----------+------+---------|
    | rq type     | before   | after    | unit | diff    |
    |-------------+----------+----------+------+---------|
    | striding rq | 19725449 | 18544617 | pps  | -6.37 % |
    | legacy rq   | 19879931 | 18631841 | pps  | -6.70 % |
    |-------------+----------+----------+------+---------|

  This will be handled in a different patch series by adding support
  for multi-packet per page.

* For other cases the performance is roughly the same.

The above numbers were obtained on the following system:
    24 core Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
    32 GB RAM
    ConnectX-7 single port

The breakdown on the patch series is the following:
* Preparations for introducing the mlx5e_frag_page struct.
* Delete the mlx5e_page_cache struct.
* Enable dma mapping from page_pool.
* Enable skb recycling and fragment counting.
* Do deferred release of pages (just before alloc) to ensure better
  page_pool cache utilization.
====================

* tag 'mlx5-updates-2023-03-28' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5e: RX, Remove unnecessary recycle parameter and page_cache stats
  net/mlx5e: RX, Break the wqe bulk refill in smaller chunks
  net/mlx5e: RX, Increase WQE bulk size for legacy rq
  net/mlx5e: RX, Split off release path for xsk buffers for legacy rq
  net/mlx5e: RX, Defer page release in legacy rq for better recycling
  net/mlx5e: RX, Change wqe last_in_page field from bool to bit flags
  net/mlx5e: RX, Defer page release in striding rq for better recycling
  net/mlx5e: RX, Rename xdp_xmit_bitmap to a more generic name
  net/mlx5e: RX, Enable skb page recycling through the page_pool
  net/mlx5e: RX, Enable dma map and sync from page_pool allocator
  net/mlx5e: RX, Remove internal page_cache
  net/mlx5e: RX, Store SHAMPO header pages in array
  net/mlx5e: RX, Remove alloc unit layout constraint for striding rq
  net/mlx5e: RX, Remove alloc unit layout constraint for legacy rq
  net/mlx5e: RX, Remove mlx5e_alloc_unit argument in page allocation
====================

Link: https://lore.kernel.org/r/20230328205623.142075-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-29  net: ena: removed unused tx_bytes variable  (Simon Horman)

clang 16.0.0 with W=1 reports:

    drivers/net/ethernet/amazon/ena/ena_netdev.c:1901:6: error: variable 'tx_bytes' set but not used [-Werror,-Wunused-but-set-variable]
            u32 tx_bytes = 0;

The variable is not used so remove it.

Signed-off-by: Simon Horman <horms@kernel.org>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Link: https://lore.kernel.org/r/20230328151958.410687-1-horms@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-29  octeon_ep: unlock the correct lock on error path  (Dan Carpenter)

The h and the f letters are swapped so it unlocks the wrong lock.

Fixes: 577f0d1b1c5f ("octeon_ep: add separate mailbox command and response queues")
Signed-off-by: Dan Carpenter <error27@gmail.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Link: https://lore.kernel.org/r/251aa2a2-913e-4868-aac9-0a90fc3eeeda@kili.mountain
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

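The bug pattern, as an illustrative sketch (the lock names here are
hypothetical, not the driver's actual identifiers):

    mutex_lock(&mbox->h_lock);          /* the 'h' lock is taken... */
    if (ret) {
        mutex_unlock(&mbox->f_lock);    /* ...but the 'f' lock was released */
        return ret;
    }
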
2023-03-29  net: hns3: support wake on lan configuration and query  (Hao Lan)

The HNS3 driver supports Wake-on-LAN, which can wake up the server from
a powered-off state to a powered-on state by magic packet or magic
security packet.

ChangeLog:
v1->v2: Deleted the debugfs function that overlaps with the ethtool
        function, as suggested by Andrew Lunn.
v2->v3: Return the wol configuration stored in the driver, suggested by
        Alexander H Duyck.
v3->v4: Add a helper to go from netdev to the local struct, suggested
        by Simon Horman and Jakub Kicinski.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Hao Lan <lanhao@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

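A hedged sketch of the ethtool plumbing such a feature relies on; the
priv structure, its fields, and the firmware helper below are
hypothetical, while struct ethtool_wolinfo and the WAKE_* flags are the
standard kernel API:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static void hns3_get_wol(struct net_device *netdev,
                             struct ethtool_wolinfo *wol)
    {
        struct my_priv *priv = netdev_priv(netdev); /* hypothetical priv */

        wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE;
        wol->wolopts = priv->wol_mode;  /* config stored in the driver */
    }

    static int hns3_set_wol(struct net_device *netdev,
                            struct ethtool_wolinfo *wol)
    {
        struct my_priv *priv = netdev_priv(netdev);

        if (wol->wolopts & ~(WAKE_MAGIC | WAKE_MAGICSECURE))
            return -EOPNOTSUPP;

        priv->wol_mode = wol->wolopts;            /* cache for get_wol() */
        return my_fw_set_wol(priv, wol->wolopts); /* hypothetical FW call */
    }
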
2023-03-29  sfc: add offloading of 'foreign' TC (decap) rules  (Edward Cree)

A 'foreign' rule is one for which the net_dev is not the sfc netdevice
or any of its representors. The driver registers indirect flow blocks
for tunnel netdevs so that it can offload decap rules. For example:

    tc filter add dev vxlan0 parent ffff: protocol ipv4 flower \
        enc_src_ip 10.1.0.2 enc_dst_ip 10.1.0.1 \
        enc_key_id 1000 enc_dst_port 4789 \
        action tunnel_key unset \
        action mirred egress redirect dev $REPRESENTOR

When notified of a rule like this, register an encap match on the IP
and dport tuple (creating an Outer Rule table entry) and insert an MAE
action rule to perform the decapsulation and deliver to the
representee.

Moved efx_tc_delete_rule() below efx_tc_flower_release_encap_match() to
avoid the need for a forward declaration.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-29  sfc: add code to register and unregister encap matches  (Edward Cree)

Add a hashtable to detect duplicate and conflicting matches. If a match
is not a duplicate, call MAE functions to add/remove it from the OR
(Outer Rule) table. Calling code is not added yet, so mark the new
functions as unused.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-29  sfc: add functions to insert encap matches into the MAE  (Edward Cree)

An encap match corresponds to an entry in the exact-match Outer Rule
table; the lookup response includes the encap type (protocol), allowing
the hardware to continue parsing into the inner headers.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-29  sfc: handle enc keys in efx_tc_flower_parse_match()  (Edward Cree)

Translate the fields from the flow dissector into struct efx_tc_match.
In efx_tc_flower_replace(), reject filters that match on them, because
only 'foreign' filters (i.e. those for which the ingress dev is not the
sfc netdev or any of its representors, e.g. a tunnel netdev) can use
them.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

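A minimal sketch of pulling one enc key out of the flow dissector
(generic flow_offload API; the efx_tc_match value/mask fields shown are
illustrative):

    #include <net/flow_offload.h>

    if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_KEYID)) {
        struct flow_match_enc_keyid keyid;

        flow_rule_match_enc_keyid(rule, &keyid);
        match->value.enc_keyid = keyid.key->keyid;  /* illustrative fields */
        match->mask.enc_keyid  = keyid.mask->keyid;
    }
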
2023-03-29  sfc: add notion of match on enc keys to MAE machinery  (Edward Cree)

Extend the MAE caps check to validate that the hardware supports these
outer-header matches where used by the driver. Extend
efx_mae_populate_match_criteria() to fill in the outer rule ID and VNI
match fields. Nothing yet populates these match fields, nor creates
outer rules.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-29  sfc: document TC-to-EF100-MAE action translation concepts  (Edward Cree)

Includes an explanation of the lifetime of the 'cursor' action-set
`act`.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-29  ipv6: Remove in6addr_any alternatives.  (Kuniyuki Iwashima)

Some code defines the IPv6 wildcard address as a local variable and
uses it with memcmp() or ipv6_addr_equal(). Let's use in6addr_any and
ipv6_addr_any() instead.

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

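The change, as an illustrative before/after (not a verbatim hunk from
any of the converted drivers):

    #include <net/ipv6.h>

    /* before: open-coded wildcard */
    struct in6_addr zero = IN6ADDR_ANY_INIT;

    if (!memcmp(&sa->sin6_addr, &zero, sizeof(zero)))
        /* ... */;

    /* after: shared constant and helper */
    if (ipv6_addr_any(&sa->sin6_addr))
        /* ... */;
    if (ipv6_addr_equal(&sa->sin6_addr, &in6addr_any))
        /* ... */;
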
2023-03-28  Merge tag 'mlx5-updates-2023-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (Jakub Kicinski)

Saeed Mahameed says:

====================
mlx5-updates-2023-03-20

mlx5 dynamic msix

This patch series adds support for dynamic msix vectors allocation in
mlx5.

Eli Cohen says:

================
The following series of patches modifies mlx5_core to work with the
dynamic MSIX API. Currently, mlx5_core allocates all the interrupt
vectors it needs and distributes them amongst the consumers. With the
introduction of dynamic MSIX support, which allows for allocation of
interrupts more than once, we now allocate vectors as we need them.
This allows other drivers running on top of mlx5_core to allocate
interrupt vectors for their own use. An example for this is mlx5_vdpa,
which uses these vectors to propagate interrupts directly from the
hardware to the vCPU [1].

As a preparation for using this series, a use after free issue is fixed
in lib/cpu_rmap.c and the allocator for rmap entries has been modified.
A complementary API for irq_cpu_rmap_add() has also been introduced.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux.git/patch/?id=0f2bf1fcae96a83b8c5581854713c9fc3407556e
================

* tag 'mlx5-updates-2023-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: Provide external API for allocating vectors
  net/mlx5: Use one completion vector if eth is disabled
  net/mlx5: Refactor calculation of required completion vectors
  net/mlx5: Move devlink registration before mlx5_load
  net/mlx5: Use dynamic msix vectors allocation
  net/mlx5: Refactor completion irq request/release code
  net/mlx5: Improve naming of pci function vectors
  net/mlx5: Use newer affinity descriptor
  net/mlx5: Modify struct mlx5_irq to use struct msi_map
  net/mlx5: Fix wrong comment
  net/mlx5e: Coding style fix, add empty line
  lib: cpu_rmap: Add irq_cpu_rmap_remove to complement irq_cpu_rmap_add
  lib: cpu_rmap: Use allocator for rmap entries
  lib: cpu_rmap: Avoid use after free on rmap->obj array entries
====================

Link: https://lore.kernel.org/r/20230324231341.29808-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

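A hedged sketch of the dynamic MSI-X kernel API this series builds on;
the handler and context names are placeholders, the pci_msix_*
functions and struct msi_map are the kernel interfaces:

    #include <linux/pci.h>
    #include <linux/msi_api.h>
    #include <linux/interrupt.h>

    struct msi_map map;
    int err;

    /* allocate a single vector on demand, at any free table index */
    map = pci_msix_alloc_irq_at(pdev, MSI_ANY_INDEX, NULL);
    if (map.index < 0)
        return map.index;

    err = request_irq(map.virq, my_handler, 0, "my-vector", my_ctx);
    if (err) {
        pci_msix_free_irq(pdev, map);
        return err;
    }

    /* ... later, tear down in reverse order */
    free_irq(map.virq, my_ctx);
    pci_msix_free_irq(pdev, map);
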
2023-03-28  net: ethernet: 8390: axnet_cs: remove unused xfer_count variable  (Tom Rix)

clang with W=1 reports:

    drivers/net/ethernet/8390/axnet_cs.c:653:9: error: variable 'xfer_count' set but not used [-Werror,-Wunused-but-set-variable]
        int xfer_count = count;
            ^

This variable is not used so remove it.

Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230327235423.1777590-1-trix@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-28  Revert "sh_eth: remove open coded netif_running()"  (Wolfram Sang)

This reverts commit ce1fdb065695f49ef6f126d35c1abbfe645d62d5. It turned
out this actually introduces a race condition. netif_running() is not a
suitable check for get_stats.

Reported-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230327152112.15635-1-wsa+renesas@sang-engineering.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-28  net/mlx5e: Fix build break on 32bit  (Saeed Mahameed)

The cited commit caused the following build break in mlx5 due to a
change in size of MAX_SKB_FRAGS:

    error: format '%lu' expects argument of type 'long unsigned int',
    but argument 7 has type 'unsigned int' [-Werror=format=]

Fix this by explicit casting.

Fixes: 3948b05950fd ("net: introduce a config option to tweak MAX_SKB_FRAGS")
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230328200723.125122-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

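The fix pattern, illustratively (the actual format string and call site
in mlx5 are elided):

    /* MAX_SKB_FRAGS is no longer guaranteed to expand to a long-typed
     * constant on every config, so cast at the %lu call site: */
    mlx5_core_warn(mdev, "max frags: %lu\n", (unsigned long)MAX_SKB_FRAGS);
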
2023-03-28  net/mlx5e: RX, Remove unnecessary recycle parameter and page_cache stats  (Dragos Tatulea)

The recycle parameter used during page release is no longer necessary:
the page pool can detect when the page cannot be recycled to the cache
or ring without any outside hint. The page pool will also take care of
cleaning up after itself once all the in-flight pages have been
released, so there is no need to explicitly release pages to the
system.

Remove the internal page_cache stats as the mlx5e_page_cache struct no
longer exists. Delete the documentation entries along with the stats.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Break the wqe bulk refill in smaller chunks  (Dragos Tatulea)

To avoid overflowing the page pool's cache, don't release the whole
bulk, which is usually larger than the cache refill size. Instead,
group release+alloc into cache refill units that allow releasing to the
cache and then allocating from the cache.

A refill_unit variable is added as an iteration unit over the wqe_bulk
when doing release+alloc.

For a single ring, single core, default MTU (1500) TCP stream test the
number of pages allocated from the cache directly (rx_pp_recycle_cached)
increases from 0% to 52%:

    +---------------------------------------------+
    | Page Pool stats (/sec)  | Before  | After   |
    +-------------------------+---------+---------+
    |rx_pp_alloc_fast         | 2145422 | 2193802 |
    |rx_pp_alloc_slow         | 2       | 0       |
    |rx_pp_alloc_empty        | 2       | 0       |
    |rx_pp_alloc_refill       | 34059   | 16634   |
    |rx_pp_alloc_waive        | 0       | 0       |
    |rx_pp_recycle_cached     | 0       | 1145818 |
    |rx_pp_recycle_cache_full | 0       | 0       |
    |rx_pp_recycle_ring       | 2179361 | 1064616 |
    |rx_pp_recycle_ring_full  | 121     | 0       |
    +---------------------------------------------+

With this patch, the performance for legacy rq for the above test is
back to baseline.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Increase WQE bulk size for legacy rq  (Dragos Tatulea)

Deferred page release was added to legacy rq, but its desired effect
(driver releases the last fragment to the page pool cache) is not yet
visible because the WQE bulks are too small.

This patch increases the WQE bulk size to span 512 KB and clips it to
one quarter of the rx queue size.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Split off release path for xsk buffers for legacy rq  (Dragos Tatulea)

Don't mix xsk buffer releases with page releases anymore. This is
needed for handling of deferred page release.

Add a new bulk free function for xsk buffers from wqe frags.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Defer page release in legacy rq for better recycling  (Dragos Tatulea)

Currently, fragmented pages from the page pool can be released in two
ways:

1) In the mlx5e driver, when trimming off the unused fragments AND the
   associated skb fragments have been released. This path allows
   recycling of pages to the page pool cache (allow_direct == true).

2) On the skb release path (last fragment release), which will always
   release pages to the page pool ring (allow_direct == false).

Whichever is releasing the last fragment will be decisive on where the
page gets released: the cache or the ring. So we obviously want to
maximize for doing the release from 1.

This patch does that by deferring the release of page fragments right
before requesting new ones from the page pool. A flag is added to make
sure that there's no release before first alloc and that XDP_TX
fragments are not released prematurely.

This is a preparation patch that doesn't unlock the performance
improvements yet. A followup patch will do that.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

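A sketch of the deferral idea under simplified, hypothetical names (the
rq/frag structures and the flag are illustrative; the page_pool calls
are the real API):

    static int refill_frag(struct my_rq *rq, struct my_frag *frag)
    {
        /* release the previous page only now, just before the refill;
         * by this point the skb path has usually recycled it already,
         * so this release can go to the cache (allow_direct == true) */
        if (frag->flags & MY_FRAG_IN_USE)
            page_pool_put_full_page(rq->page_pool, frag->page, true);

        frag->page = page_pool_dev_alloc_pages(rq->page_pool);
        if (!frag->page)
            return -ENOMEM;

        frag->flags |= MY_FRAG_IN_USE; /* guards the first-alloc case */
        return 0;
    }
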
2023-03-28  net/mlx5e: RX, Change wqe last_in_page field from bool to bit flags  (Dragos Tatulea)

Change the bool flag to a bitfield as we'll use it in a downstream
patch in the series to add signaling about skipping a fragment release.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Defer page release in striding rq for better recycling  (Dragos Tatulea)

Currently, for striding RQ, fragmented pages from the page pool can get
released in two ways:

1) In the mlx5e driver, when trimming off the unused fragments AND the
   associated skb fragments have been released. This path allows
   recycling of pages to the page pool cache (allow_direct == true).

2) On the skb release path (last fragment release), which will always
   release pages to the page pool ring (allow_direct == false).

Whichever is releasing the last fragment will be decisive on where the
page gets released: the cache or the ring. So we obviously want to
maximize for doing the release from 1.

This patch does that by deferring the release of page fragments right
before requesting new ones from the page pool.

Extra care needs to be taken for the corner cases:

* On first call, make sure that release is not called. The
  skip_release_bitmap is used for this purpose.

* On rq shutdown, make sure that all wqes that were not in the linked
  list are released.

For a single ring, single core, default MTU (1500) TCP stream test the
number of pages allocated from the cache directly (rx_pp_recycle_cached)
increases from 31% to 98%:

    +----------------------------------------------+
    | Page Pool stats (/sec)  | Before  | After    |
    +-------------------------+---------+----------+
    |rx_pp_alloc_fast         | 2137754 | 2261033  |
    |rx_pp_alloc_slow         | 47      | 9        |
    |rx_pp_alloc_empty        | 47      | 9        |
    |rx_pp_alloc_refill       | 23230   | 819      |
    |rx_pp_alloc_waive        | 0       | 0        |
    |rx_pp_recycle_cached     | 672182  | 2209015  |
    |rx_pp_recycle_cache_full | 1789    | 0        |
    |rx_pp_recycle_ring       | 1485848 | 52259    |
    |rx_pp_recycle_ring_full  | 3003    | 584      |
    +----------------------------------------------+

With this patch, the performance in striding rq for the above test is
back to baseline.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Rename xdp_xmit_bitmap to a more generic name  (Dragos Tatulea)

The xdp_xmit_bitmap currently serves only one purpose: to avoid
releasing pages that are still in use due to XDP TX.

A following patch will use this bitmap in a slightly different context
but for the same purpose. So rename the bitmap to a more generic name
that reflects the purpose, not the context.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Enable skb page recycling through the page_pool  (Dragos Tatulea)

Start using the page_pool skb recycling api to recycle all pages back
to the page pool and stop using atomic page reference counting.

The mlx5e driver used to manage in-flight pages using page refcounting:
for each fragment there were 2 atomic write operations happening (one
for building the skb and one on skb release).

The page_pool api introduced a method to track page fragments more
optimally:
* The page's pp_fragment_count is set to a large bias on page alloc
  (1 atomic write operation).
* The driver tracks the actual page fragments in a non-atomic variable.
* When the skb is recycled, pp_fragment_count is decremented (atomic
  write operation).
* When the page is released in the driver, the unused number of
  fragments (relative to the bias) is deducted from pp_fragment_count
  (atomic write operation).
* Last page defragmentation will only be an atomic read.

So in total there are `number of fragments + 1` atomic write ops, as
opposed to the previous `2 * fragments` atomic write ops.

Pages are wrapped in a mlx5e_frag_page structure which also contains
the number of fragments. This makes it easy to count the fragments in
the driver.

This change brings performance improvements for the case when the old
rx page_cache had low recycling rates due to head of queue blocking.
For an iperf3 TCP test with a single stream, on a single core (iperf
and receive queue running on the same core), the following improvements
can be noticed:

* Striding rq:
  - before (net-next baseline): bitrate = 30.1 Gbits/sec
  - after                     : bitrate = 31.4 Gbits/sec (diff: 4.14 %)

* Legacy rq:
  - before (net-next baseline): bitrate = 30.2 Gbits/sec
  - after                     : bitrate = 33.0 Gbits/sec (diff: 8.48 %)

There are 2 temporary performance degradations introduced:

1) TCP streams that had a good recycling rate with the old page_cache
   have a degradation for both striding and linear rq. This is due to
   very low page pool cache recycling: the pages are released during
   skb recycle, which will release pages to the page pool ring for
   safety. The following patches in this series will tackle this
   problem by deferring the page release in the driver to increase the
   chance of having pages recycled to the cache.

2) XDP performance is now lower (4-5 %) due to the higher number of
   atomic operations used for fragment management. But this opens the
   door for supporting multiple packets per page in XDP, which will
   bring a big gain.

Otherwise, performance is similar to baseline.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

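The bias trick, sketched with the page_pool fragment API (the bias
value and the frag_page struct are illustrative):

    #define MY_PAGE_BIAS  (1 << 16)         /* large illustrative bias */

    /* alloc: one atomic write to set the bias */
    page = page_pool_dev_alloc_pages(pool);
    page_pool_fragment_page(page, MY_PAGE_BIAS);
    frag_page->page  = page;
    frag_page->frags = 0;                   /* non-atomic driver count */

    /* per fragment handed to the stack: non-atomic */
    frag_page->frags++;

    /* driver release: deduct the unused part of the bias in one atomic
     * op; once the count drops to zero the page can go back to the pool */
    page_pool_defrag_page(page, MY_PAGE_BIAS - frag_page->frags);
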
2023-03-28  net/mlx5e: RX, Enable dma map and sync from page_pool allocator  (Dragos Tatulea)

Remove driver dma mapping and unmapping of pages. Let the page_pool api
do it.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

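A minimal sketch of handing dma responsibility to the allocator at pool
creation time (the sizes and device fields are illustrative; the params
struct and flags are the page_pool API):

    #include <net/page_pool.h>

    struct page_pool_params pp_params = {
        .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
        .order     = 0,
        .pool_size = ring_size,        /* illustrative */
        .nid       = node,
        .dev       = dma_dev,          /* device doing the DMA */
        .dma_dir   = DMA_FROM_DEVICE,
        .max_len   = PAGE_SIZE,        /* sync window for DMA_SYNC_DEV */
        .offset    = 0,
    };
    struct page_pool *pool = page_pool_create(&pp_params);

    if (IS_ERR(pool))
        return PTR_ERR(pool);
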
2023-03-28  net/mlx5e: RX, Remove internal page_cache  (Dragos Tatulea)

This patch removes the internal rx page_cache and uses the generic
page_pool api only. It used to be that the page_pool couldn't handle
all the mlx5 driver use cases, but with the introduction of skb
recycling and page fragmentation in the page_pool, the full switch can
now be made.

Some benefits of this transition:
* Better page recycling in the cases when the page_cache was suffering
  from head of queue blocking. The page_pool doesn't have this issue.
* DMA mapping/unmapping can be managed by the page_pool.
* mlx5e_rq size reduced by more than 50% due to the page_cache array
  being deleted.

This patch only removes the page_cache. Downstream patches will enable
the required page_pool features and will add further fine-tuning.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Store SHAMPO header pages in array  (Dragos Tatulea)

Save allocated SHAMPO header pages to an array to which the
mlx5e_dma_info page will point. This change is a preparation for
introducing the mlx5e_frag_page structure in a downstream patch.
There's no new functionality introduced.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Remove alloc unit layout constraint for striding rq  (Dragos Tatulea)

This change removes the usage of the mlx5e_alloc_unit union for
striding rq. The change is more straightforward than for legacy rq, as
the alloc units union is already in place.

This patch only moves things around: instead of an array of unions,
make it a union of arrays. This has the effect that each mlx5e_mpw_info
will allocate the largest possible size of the array member. For xsk
this means that the array of xdp_buff pointers for the wqe will still
be contiguous, but there will be some extra unused bytes at the end of
the array.

A further patch in the series will add the mlx5e_frag_page struct, for
which the described size constraint will no longer hold.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

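The layout change in miniature (member names and the array bound are
illustrative, not the driver's actual definitions):

    /* before: array of unions; every element is pointer-sized */
    union mlx5e_alloc_unit {
        struct page *page;
        struct xdp_buff *xsk;
    } alloc_units[MAX_UNITS_PER_WQE];

    /* after: union of arrays; each member array stays contiguous, and
     * the largest member sets the allocation size */
    union {
        struct page *pages[MAX_UNITS_PER_WQE];
        struct xdp_buff *xsk_buffs[MAX_UNITS_PER_WQE];
    } alloc_units;
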
2023-03-28  net/mlx5e: RX, Remove alloc unit layout constraint for legacy rq  (Dragos Tatulea)

The mlx5e_alloc_unit union is conveniently used to store arrays of
pointers to struct page or struct xdp_buff (for xsk). The union is
currently expected to have the size of a pointer for xsk batch
allocations to work. This is convenient for the current state of the
code but makes it impossible to add a structure of a different size to
the alloc unit.

A further patch in the series will add the mlx5e_frag_page struct, for
which the described size constraint will no longer hold.

This change removes the usage of the mlx5e_alloc_unit union for legacy
rq:
- A union of arrays is introduced (mlx5e_alloc_units) to replace the
  array of unions, allowing structures of different sizes.
- Each fragment has a pointer to a unit in the mlx5e_alloc_units array.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net/mlx5e: RX, Remove mlx5e_alloc_unit argument in page allocation  (Dragos Tatulea)

Change the internal page cache and page pool api to use a
struct page ** instead of a mlx5e_alloc_unit *.

This is the first change in a series which is meant to remove the
mlx5e_alloc_unit altogether.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-03-28  net: ethernet: ti: am65-cpsw: enable p0 host port rx_vlan_remap  (Grygorii Strashko)

By default, tagged ingress packets entering the switch from the host
port P0 get an internal switch priority equal to the number of the DMA
CPPI channel they came from, unless CPSW_P0_CONTROL_REG.RX_REMAP_VLAN
is enabled. This causes issues with applying QoS policies and mapping
packets on external port fifos, because the default configuration is
vlan_aware and DMA CPPI channels are shared between all external ports.

Hence, enable CPSW_P0_CONTROL_REG.RX_REMAP_VLAN so that packets
preserve the internal switch priority assigned from the VLAN priority
tag, no matter through which DMA CPPI channel they enter the switch.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
Link: https://lore.kernel.org/r/20230327092103.3256118-1-s-vadapalli@ti.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2023-03-28  net: ethernet: ti: am65-cpsw: add .ndo to set dma per-queue rate  (Grygorii Strashko)

Enable rate limiting of TX DMA queues for the CPSW interface by
configuring the rate in absolute Mb/s units per TX queue.

Example:

    ethtool -L eth0 tx 4

    echo 100 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
    echo 200 > /sys/class/net/eth0/queues/tx-1/tx_maxrate
    echo 50 > /sys/class/net/eth0/queues/tx-2/tx_maxrate
    echo 30 > /sys/class/net/eth0/queues/tx-3/tx_maxrate

    # disable
    echo 0 > /sys/class/net/eth0/queues/tx-0/tx_maxrate

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
Link: https://lore.kernel.org/r/20230327085758.3237155-1-s-vadapalli@ti.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2023-03-27  net: ena: Advertise TX push support  (Shay Agroskin)

LLQ is auto enabled by the device and disabling it isn't supported on
new ENA generations, while on old ones it causes sub-optimal
performance. This patch adds advertisement of push-mode when LLQ mode
is used, but rejects an attempt to modify it.

Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-27  net: ena: Add support to changing tx_push_buf_len  (Shay Agroskin)

The ENA driver allows for two distinct values for the number of bytes
of the packet's payload that can be written directly to the device. For
a value of 224, the driver turns on Large LLQ Header mode, in which the
first 224 bytes of the packet's payload are written to the LLQ.

Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-27  net: ena: Recalculate TX state variables every device reset  (Shay Agroskin)

With the ability to modify the LLQ entry size, the size of the packet's
payload that can be written directly to the device changes. This patch
makes the driver recalculate this information on every device
negotiation (also called device reset).

Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-27  net: ena: Add an option to configure large LLQ headers  (David Arinzon)

Allow configuring the device with large LLQ headers. The Low Latency
Queue (LLQ) allows the driver to write the first N bytes of the packet,
along with the rest of the TX descriptors, directly into the device (N
can be either 96 or 224 for the large LLQ headers configuration).

Having the L4 TCP/UDP headers contained in the first 96 bytes of the
packet is required to get maximum performance from the device.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: David Arinzon <darinzon@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-27  net: ena: Make few cosmetic preparations to support large LLQ  (Shay Agroskin)

Move the ena_calc_io_queue_size() implementation closer to the file's
beginning so that it can later be called from the ena_device_init()
function without adding a function declaration. Also add an empty line
at some spots to separate logical blocks in functions.

Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-27  qed: remove unused num_ooo_add_to_peninsula variable  (Tom Rix)

clang with W=1 reports:

    drivers/net/ethernet/qlogic/qed/qed_ll2.c:649:6: error: variable 'num_ooo_add_to_peninsula' set but not used [-Werror,-Wunused-but-set-variable]
        u32 num_ooo_add_to_peninsula = 0, cid;
            ^

This variable is not used so remove it.

Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230326001733.1343274-1-trix@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2023-03-27  net: sunhme: Consolidate common probe tasks  (Sean Anderson)

Most of the second half of the PCI/SBUS probe functions are the same.
Consolidate them into a common function.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Inline error returns  (Sean Anderson)

The err_out label used to have cleanup. Now that it just returns,
inline it everywhere.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Clean up mac address init  (Sean Anderson)

Clean up some oddities suggested during review.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Consolidate mac address initialization  (Sean Anderson)

The mac address initialization is broadly the same between PCI and
SBUS, and one was clearly copied from the other. Consolidate them. We
still have to have some ifdefs because pci_(un)map_rom is only
implemented for PCI, and idprom is only implemented for SPARC.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Switch SBUS to devres  (Sean Anderson)

The PCI half of this driver was converted in commit 914d9b2711dd
("sunhme: switch to devres"). Do the same for the SBUS half.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Alphabetize includes  (Sean Anderson)

Alphabetize includes to make it clearer where to add new ones.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Unify IRQ requesting  (Sean Anderson)

Instead of registering one interrupt handler for all four SBUS Quattro
HMEs, let each HME register its own handler. To make this work, we
don't handle the IRQ if none of the status bits are set. This reduces
the complexity of the driver, and makes it easier to ensure things
happen before/after enabling IRQs.

I'm not really sure why we request IRQs in two different places (and
leave them running after removing the driver!). A lot of things in this
driver seem to just be crusty, and not necessarily intentional. I'm
assuming that's the case here as well.

This really needs to be tested by someone with an SBUS Quattro card.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

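The shape of a per-instance handler on a shared line, as a hedged
sketch (the status read and request context are simplified; the driver
register names are assumed):

    static irqreturn_t happy_meal_interrupt(int irq, void *dev_id)
    {
        struct happy_meal *hp = dev_id;
        u32 status = hme_read32(hp, hp->gregs + GREG_STAT);

        if (!status)            /* none of our status bits: not our IRQ, */
            return IRQ_NONE;    /* let another Quattro port's handler run */

        /* ... normal event handling ... */
        return IRQ_HANDLED;
    }

    /* each HME now requests its own handler on the (shared) line */
    err = devm_request_irq(dev, hp->irq, happy_meal_interrupt,
                           IRQF_SHARED, "HAPPY MEAL", hp);
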
2023-03-27  net: sunhme: Remove residual polling code  (Sean Anderson)

The sunhme driver never used the hardware MII polling feature. Even the
if-def'd out happy_meal_poll_start was removed by 2002 [1]. Remove the
various places in the driver which needlessly guard against MII
interrupts which will never be enabled.

[1] https://lwn.net/2002/0411/a/2.5.8-pre3.php3

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  net: sunhme: Just restart autonegotiation if we can't bring the link up  (Sean Anderson)

If we've tried regular autonegotiation and forcing the link mode, just
restart autonegotiation instead of reinitializing the whole NIC.

Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

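What "just restart autonegotiation" amounts to at the MII level, as a
hedged sketch (the driver's transceiver accessors are assumed; the
BMCR bits are the generic MII definitions):

    #include <linux/mii.h>

    int bmcr;

    /* re-kick autoneg instead of a full reinit of the NIC */
    bmcr = happy_meal_tcvr_read(hp, tregs, MII_BMCR);
    bmcr |= BMCR_ANENABLE | BMCR_ANRESTART;
    happy_meal_tcvr_write(hp, tregs, MII_BMCR, bmcr);
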
2023-03-27  net: sunhme: Fix uninitialized return code  (Sean Anderson)

Fix an uninitialized return code if we never found a qfe slot. It would
be a bug if we ever got into this situation, but it's good to return
something traceable.

Fixes: acb3f35f920b ("sunhme: forward the error code from pci_enable_device()")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2023-03-27  octeon_ep: add heartbeat monitor  (Veerasenareddy Burru)

Monitor periodic heartbeat messages from the device firmware. Presence
of the heartbeat indicates the device is active and running, while a
heartbeat missed for the configured interval indicates the firmware has
crashed and the device is unusable; in this case, the PF driver stops
and uninitializes the device.

Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
Signed-off-by: Abhijit Ayarekar <aayarekar@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

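One common way to structure such a monitor, as a hedged sketch (the
struct layout, counter read, and teardown hook are all illustrative;
the delayed-work API is the kernel's):

    #include <linux/workqueue.h>

    static void octep_hb_work(struct work_struct *work)
    {
        struct my_octep_dev *oct =
            container_of(work, struct my_octep_dev, hb_task.work);
        u32 beat = my_read_hb_counter(oct);   /* hypothetical FW counter */

        if (beat == oct->last_beat &&
            ++oct->missed >= oct->miss_limit) {
            dev_err(&oct->pdev->dev, "firmware heartbeat lost\n");
            my_stop_and_uninit_device(oct);   /* hypothetical teardown */
            return;
        }
        if (beat != oct->last_beat)
            oct->missed = 0;                  /* device is alive */
        oct->last_beat = beat;

        queue_delayed_work(system_wq, &oct->hb_task,
                           msecs_to_jiffies(oct->hb_interval_ms));
    }
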