path: root/drivers/net/ethernet
2022-09-28  net/mlx5e: xsk: Fix SKB headroom calculation in validation  (Maxim Mikityanskiy)
In a typical scenario, if an XSK socket is opened first, then an XDP program is attached, mlx5e_validate_xsk_param will be called twice: first on XSK bind, second on the channel restart caused by enabling XDP. The validation includes a call to mlx5e_rx_is_linear_skb, which checks the presence of the XDP program.

The above means that mlx5e_rx_is_linear_skb might return true the first time, but false the second time, as mlx5e_rx_get_linear_sz_skb's return value will increase, because of the different headroom used with XDP.

As XSK RQs never exist without XDP, it would make sense to trick mlx5e_rx_get_linear_sz_skb into thinking XDP is enabled at the first check as well. This way, if the MTU is too big, it would be detected on XSK bind, without giving false hope to the userspace application. However, it turns out that this check is too restrictive in the first place: SKBs created on XDP_PASS on XSK RQs don't have any headroom. That means that big MTUs filtered out on the first and the second checks might actually work.

So, address this issue in the proper way, by taking into account the absence of the SKB headroom on XSK RQs when calculating the buffer size.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
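For illustration, the fixed size calculation can be sketched like this; mlx5e_get_linear_rq_headroom, MLX5E_SW2HW_MTU and MLX5_SKB_FRAG_SZ are existing mlx5e helpers, but the exact body below is an assumption, not the literal patch:

    /* Sketch: linear SKB buffer size, with the XSK special case.
     * XSK RQs build XDP_PASS SKBs with no headroom, so none is added. */
    static u32 mlx5e_rx_get_linear_sz_skb(struct mlx5e_params *params, bool xsk)
    {
            /* SKBs created on XDP_PASS on XSK RQs don't have any headroom */
            u32 headroom = xsk ? 0 : mlx5e_get_linear_rq_headroom(params, NULL);
            u32 hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);

            return MLX5_SKB_FRAG_SZ(headroom + hw_mtu);
    }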
2022-09-28  net/mlx5e: xsk: Remove dead code in validation  (Maxim Mikityanskiy)
One of the checks in mlx5e_rx_is_linear_skb verifies that the RX buffer fits into the XSK frame size, so remove the duplicate check from mlx5e_validate_xsk_param. This allows making mlx5e_rx_get_min_frag_sz static. Remove mlx5e_rx_is_xdp altogether, as its only usage is located in a branch where xsk == NULL.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Simplify stride size calculation for linear RQ  (Maxim Mikityanskiy)
Linear RX buffers must be big enough to fit the MTU-sized packet along with the headroom. On the other hand, they must be small enough to fit into a page (or into an XSK frame). A straightforward way to check whether the linear mode is possible is to compare the required buffer size to PAGE_SIZE or the XSK frame size.

Stride size in the linear mode is defined by the following constraints:

1. A stride is at least as big as the buffer size, and it's a power of two.

2. If non-XSK XDP is enabled, the stride size is PAGE_SIZE, because mlx5e requires each packet to be in its own page when XDP is in use. The previous constraint is automatically fulfilled, because the buffer size can't be bigger than PAGE_SIZE.

3. XSK uses a stride size equal to PAGE_SIZE, but the following commits will allow it to use roundup_pow_of_two(XSK frame size), by allowing the NIC's MMU to use page sizes not equal to the CPU page size.

This commit puts the above requirements and constraints straight into the code in an attempt to simplify it and to prepare it for the changes made in the next patches.

For reference, the old code uses an equivalent, but trickier calculation (high-level simplified pseudocode):

    if XDP or XSK:
        mlx5e_rx_get_linear_frag_sz := max(buffer size, PAGE_SIZE)
    else:
        mlx5e_rx_get_linear_frag_sz := buffer size
    mlx5e_rx_is_linear_skb := mlx5e_rx_get_linear_frag_sz <= PAGE_SIZE
    stride size := roundup_pow_of_two(mlx5e_rx_get_linear_frag_sz)

The new code effectively removes mlx5e_rx_get_linear_frag_sz, which used to return either the buffer size or the stride size depending on the situation, making it hard to work with and to change:

    if XDP or XSK:
        mlx5e_rx_get_linear_stride_sz := PAGE_SIZE
    else:
        mlx5e_rx_get_linear_stride_sz := roundup_pow_of_two(buffer size)
    mlx5e_rx_is_linear_skb := buffer size <= (PAGE_SIZE or XSK frame sz)
    stride size := mlx5e_rx_get_linear_stride_sz

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: kTLS, Check ICOSQ WQE size in advance  (Maxim Mikityanskiy)
Instead of WARNing at runtime when TLS offload WQEs posted to the ICOSQ are over the hardware limit, check their size before enabling TLS RX offload, and block the offload if the check fails. This also allows dropping a u16 field from struct mlx5e_icosq.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
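A minimal sketch of what such an up-front check can look like, assuming the TLS static/progress params WQE sizes are compile-time WQEBB constants; the names below are illustrative of the ktls code, not copied from the patch:

    /* Sketch: reject TLS RX offload at init time if the largest WQE it
     * would post to the ICOSQ exceeds the hardware limit, instead of
     * WARNing when it happens at runtime. */
    static bool mlx5e_tls_rx_wqes_fit_icosq(struct mlx5_core_dev *mdev)
    {
            u8 max_wqebbs = mlx5e_get_max_sq_aligned_wqebbs(mdev);

            return MLX5E_TLS_SET_STATIC_PARAMS_WQEBBS <= max_wqebbs &&
                   MLX5E_TLS_SET_PROGRESS_PARAMS_WQEBBS <= max_wqebbs;
    }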
2022-09-28  net/mlx5e: Use the aligned max TX MPWQE size  (Maxim Mikityanskiy)
TX MPWQE size is limited to the cacheline-aligned maximum. Use the same value for the stop room and the capability check.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Fix a typo in mlx5e_xdp_mpwqe_is_full  (Maxim Mikityanskiy)
Fix a typo in the function name: mpqwe -> mpwqe (stands for multi-packet work queue element).

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Use mlx5e_stop_room_for_max_wqe where appropriate  (Maxim Mikityanskiy)
mlx5e_alloc_xdpsq calculates sq->stop_room internally, but there is already a function for that: mlx5e_stop_room_for_max_wqe. This commit makes use of this function.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Let mlx5e_get_sw_max_sq_mpw_wqebbs accept mdev  (Maxim Mikityanskiy)
To shorten and simplify the code, let mlx5e_get_sw_max_sq_mpw_wqebbs accept mdev and derive the max SQ WQEBBs from it. Also rename the function to a more generic name, mlx5e_get_max_sq_aligned_wqebbs, because the following patches will use it in non-MPWQE contexts.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Validate striding RQ before enabling XDP  (Maxim Mikityanskiy)
Currently, the driver can silently fall back to legacy RQ after enabling XDP, even if striding RQ was active before. It happens when PAGE_SIZE is bigger than the maximum supported stride size. This commit makes the behavior more straightforward: if an operation (enabling XDP) doesn't support the current parameters (striding RQ mode), it fails.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Make mlx5e_verify_rx_mpwqe_strides static  (Maxim Mikityanskiy)
mlx5e_verify_rx_mpwqe_strides is only used in en/params.c, so it can be made static and removed from en/params.h.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Remove unused fields from datapath structs  (Maxim Mikityanskiy)
No need to keep max_sq_wqebbs in mlx5e_txqsq and mlx5e_xdpsq, as it's only used when allocating the queues. Removing an extra field reduces the struct size.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net/mlx5e: Convert mlx5e_get_max_sq_wqebbs to u8  (Maxim Mikityanskiy)
The return value of mlx5e_get_max_sq_wqebbs is clamped down to MLX5_SEND_WQE_MAX_WQEBBS = 16, which fits into u8. This commit changes the return type of this function to u8 for stricter type safety.

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux  (Jakub Kicinski)
Saeed Mahameed says:

====================
updates from mlx5-next 2022-09-24

Updates from mlx5-next, including [1]:
1) HW definitions and support for NPPS clock settings.
2) Various cleanups.
3) Enable hash mode by default for all NICs.
4) Page tracker and advanced virtualization HW definitions for vfio.

[1] https://lore.kernel.org/netdev/20220907233636.388475-1-saeed@kernel.org/

* 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
  net/mlx5: Remove from FPGA IFC file not-needed definitions
  net/mlx5: Remove unused structs
  net/mlx5: Remove unused functions
  net/mlx5: detect and enable bypass port select flow table
  net/mlx5: Lag, enable hash mode by default for all NICs
  net/mlx5: Lag, set active ports if support bypass port select flow table
  RDMA/mlx5: Don't set tx affinity when lag is in hash mode
  net/mlx5: add IFC bits for bypassing port select flow table
  net/mlx5: Add support for NPPS with real time mode
  net/mlx5: Expose NPPS related registers
  net/mlx5: Query ADV_VIRTUALIZATION capabilities
  net/mlx5: Introduce ifc bits for page tracker
  RDMA/mlx5: Move function mlx5_core_query_ib_ppcnt() to mlx5_ib
====================

Link: https://lore.kernel.org/all/20220927201906.234015-1-saeed@kernel.org/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net: sunhme: Fix undersized zeroing of quattro->happy_meals  (Sean Anderson)
Just use kzalloc instead.

Fixes: d6f1e89bdbb8 ("sunhme: Return an ERR_PTR from quattro_pci_find")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Link: https://lore.kernel.org/r/20220928004157.279731-1-seanga2@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (Jakub Kicinski)
Tony Nguyen says:

====================
ice: xsk: ZC changes

Maciej Fijalkowski says:

This set consists of two fixes to issues that were either pointed out indirectly on the mailing list (John was reviewing AF_XDP selftests that were testing ice's ZC support) or were directly reported by customers.

The first patch allows user space to see a done descriptor in the CQ even after only a single frame has been transmitted, and the second patch removes the need for having HW rings sized to a power of 2 number of descriptors when used with AF_XDP.

I also forgot to mention that, due to the current Tx cleaning algorithm, the 4k HW ring was broken, and these two patches bring it back to life, so we kill two birds with one stone.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
  ice: xsk: drop power of 2 ring size restriction for AF_XDP
  ice: xsk: change batched Tx descriptor cleaning
====================

Link: https://lore.kernel.org/r/20220927164112.4011983-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net: ethernet: mtk_eth_soc: fix mask of RX_DMA_GET_SPORT{,_V2}  (Daniel Golle)
The bitmasks applied in the RX_DMA_GET_SPORT and RX_DMA_GET_SPORT_V2 macros were swapped. Fix that.

Reported-by: Chen Minqiang <ptpt52@gmail.com>
Fixes: 160d3a9b192985 ("net: ethernet: mtk_eth_soc: introduce MTK_NETSYS_V2 support")
Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Golle <daniel@makrotopia.org>
Link: https://lore.kernel.org/r/YzMW+mg9UsaCdKRQ@makrotopia.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-28  net: mscc: ocelot: fix tagged VLAN refusal while under a VLAN-unaware bridge  (Vladimir Oltean)
Currently the following set of commands fails:

$ ip link add br0 type bridge # vlan_filtering 0
$ ip link set swp0 master br0
$ bridge vlan
port    vlan-id
swp0    1 PVID Egress Untagged
$ bridge vlan add dev swp0 vid 10
Error: mscc_ocelot_switch_lib: Port with more than one egress-untagged VLAN cannot have egress-tagged VLANs.

Dumping ocelot->vlans, one can see that the 2 egress-untagged VLANs on swp0 are vid 1 (the bridge PVID) and vid 4094, a PVID used privately by the driver for VLAN-unaware bridging. So this is why bridge vid 10 is refused, despite 'bridge vlan' showing a single egress-untagged VLAN.

As mentioned in the comment added, having this private VLAN does not impose restrictions on the hardware configuration, yet it is a bookkeeping problem.

There are 2 possible solutions.

One is to make the functions that operate on VLAN-unaware pvids:
- ocelot_add_vlan_unaware_pvid()
- ocelot_del_vlan_unaware_pvid()
- ocelot_port_setup_dsa_8021q_cpu()
- ocelot_port_teardown_dsa_8021q_cpu()
call something other than ocelot_vlan_member_(add|del)(), the latter being the real problem, because it allocates a struct ocelot_bridge_vlan *vlan which it adds to ocelot->vlans. We don't really *need* the private VLANs in ocelot->vlans, it's just that we have the extra convenience of having the vlan->portmask cached in software (whereas without these structures, we'd have to create a raw ocelot_vlant_rmw_mask() procedure which reads back the current port mask from hardware).

The other solution is to filter out the private VLANs from ocelot_port_num_untagged_vlans(), since they aren't what callers care about. We only need to do this to the mentioned function and not to ocelot_port_num_tagged_vlans(), because private VLANs are never egress-tagged.

Nothing else seems to be broken in either solution, but the first one requires more rework which would conflict with the net-next change 36a0bf443585 ("net: mscc: ocelot: set up tag_8021q CPU ports independent of user port affinity"), and I'd like to avoid that. So go with the other one.

Fixes: 54c319846086 ("net: mscc: ocelot: enforce FDB isolation when VLAN-unaware")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://lore.kernel.org/r/20220927122042.1100231-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
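A sketch of the chosen solution, assuming a hypothetical ocelot_vlan_is_private() predicate for recognising the driver-private VIDs (the actual patch may structure this differently):

    /* Sketch: count only bridge-visible egress-untagged VLANs. */
    static int ocelot_port_num_untagged_vlans(struct ocelot *ocelot, int port)
    {
            struct ocelot_bridge_vlan *vlan;
            int num_untagged = 0;

            list_for_each_entry(vlan, &ocelot->vlans, list) {
                    if (!(vlan->portmask & BIT(port)))
                            continue;

                    /* Skip driver-private VLANs (e.g. vid 4094 for
                     * VLAN-unaware bridging): callers care about the
                     * bridge VLAN database, and private VLANs are never
                     * egress-tagged anyway. */
                    if (ocelot_vlan_is_private(ocelot, vlan->vid)) /* hypothetical */
                            continue;

                    if (vlan->untagged & BIT(port))
                            num_untagged++;
            }

            return num_untagged;
    }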
2022-09-28  net: drop the weight argument from netif_napi_add  (Jakub Kicinski)
We tell driver developers to always pass NAPI_POLL_WEIGHT as the weight to netif_napi_add(). This may be confusing to newcomers, so drop the weight argument; those who really need to tweak the weight can use netif_napi_add_weight().

Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> # for CAN
Link: https://lore.kernel.org/r/20220927132753.750069-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
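The conversion in a typical driver looks like this (priv and my_poll are placeholders):

    /* before: every driver spelled out the weight */
    netif_napi_add(dev, &priv->napi, my_poll, NAPI_POLL_WEIGHT);

    /* after: the common case drops the argument */
    netif_napi_add(dev, &priv->napi, my_poll);

    /* drivers that genuinely need a non-default weight are explicit */
    netif_napi_add_weight(dev, &priv->napi, my_poll, 16);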
2022-09-28  ice: Add support for VLAN priority filters in switchdev  (Martyna Szapar-Mudlaw)
Enable support for adding TC rules that filter on the VLAN priority in switchdev mode.

The VLAN priority occupies the upper 3 bits of the 16-bit switch field vector word, which also contains the VLAN ID in its lower 12 bits. When taking the VLAN priority value from the TC match key, it has to be shifted into position first (by VLAN_PRIO_SHIFT) before it can be added to the joint 'vlan' field in ice_vlan_hdr in the lookup element. The mask of the lookup changes accordingly:

0x0FFF - when only the VLAN ID is added to the filter
0xE000 - when only the VLAN priority is added to the filter
0xEFFF - when both values are specified

Signed-off-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com>
Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
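A sketch of composing the joint word and its mask from the TC match; flow_rule_match_vlan() and VLAN_PRIO_SHIFT (13) are the standard kernel definitions, the local variable names are illustrative:

    struct flow_match_vlan match;
    u16 vlan_word = 0, vlan_mask = 0;

    flow_rule_match_vlan(rule, &match);

    if (match.mask->vlan_id) {
            vlan_word |= match.key->vlan_id;        /* low 12 bits */
            vlan_mask |= 0x0FFF;
    }
    if (match.mask->vlan_priority) {
            vlan_word |= match.key->vlan_priority << VLAN_PRIO_SHIFT;
            vlan_mask |= 0xE000;                    /* top 3 bits */
    }
    /* both present => mask 0xEFFF, matching the table above */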
2022-09-28  ice: support features on new E810T variants  (Arkadiusz Kubalewski)
Add new sub-device IDs required for proper initialization of features on E810T devices supported by the ice driver.

Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-09-28  ice: Merge pin initialization of E810 and E810T adapters  (Arkadiusz Kubalewski)
Remove the separate function that initializes pins for E810T-based adapters, and initialize pins based on feature bits instead.

Signed-off-by: Maciej Machnikowski <maciej.machnikowski@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Tested-by: Gurucharan <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-09-28  sfc: bare bones TC offload on EF100  (Edward Cree)
This is the absolute minimum viable TC implementation to get traffic to VFs and allow them to be tested; it supports no match fields besides ingress port, no actions besides mirred and drop, and no stats.

Example usage:

tc filter add dev $PF parent ffff: flower skip_sw \
    action mirred egress mirror dev $VFREP
tc filter add dev $VFREP parent ffff: flower skip_sw \
    action mirred egress redirect dev $PF

gives a VF unfiltered access to the network out of the physical port ($PF acts here as a physical port representor). More matches, actions, and counters will be added in subsequent patches.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-28  sfc: interrogate MAE capabilities at probe time  (Edward Cree)
Different versions of EF100 firmware and FPGA bitstreams support different matching capabilities in the Match-Action Engine. Probe for these at start of day; subsequent patches will validate TC offload requests against the reported capabilities.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-28  sfc: add a hashtable for offloaded TC rules  (Edward Cree)
Nothing inserts into this table yet, but we have code to remove rules on FLOW_CLS_DESTROY or at driver teardown time, in both cases also attempting to remove the corresponding hardware rules.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-28  sfc: optional logging of TC offload errors  (Edward Cree)
TC offload support will involve complex limitations on what matches and actions a rule can do, in some cases potentially depending on rules already offloaded. So add an ethtool private flag "log-tc-errors" which controls reporting the reasons for un-offloadable TC rules at NETIF_INFO.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
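Toggling the flag follows the usual ethtool private-flags flow, e.g. (assuming $PF is the EF100 PF netdev):

    ethtool --set-priv-flags $PF log-tc-errors on
    ethtool --show-priv-flags $PF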
2022-09-28  sfc: bind indirect blocks for TC offload on EF100  (Edward Cree)
Bind indirect blocks for recognised tunnel netdevices. Currently these connect to a stub efx_tc_flower() that only returns -EOPNOTSUPP; subsequent patches will implement flower offloads to the Match-Action Engine.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-28  sfc: bind blocks for TC offload on EF100  (Edward Cree)
Bind direct blocks for the MAE-admin PF and each VF representor. Currently these connect to a stub efx_tc_flower() that only returns -EOPNOTSUPP; subsequent patches will implement flower offloads to the Match-Action Engine.

Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-28  net: ethernet: rmnet: Replace zero-length array with DECLARE_FLEX_ARRAY() helper  (Gustavo A. R. Silva)
Zero-length arrays are deprecated and we are moving towards adopting C99 flexible-array members instead. So, replace the zero-length array declarations in an anonymous union with the new DECLARE_FLEX_ARRAY() helper macro. This helper allows for flexible-array members in unions.

Link: https://github.com/KSPP/linux/issues/193
Link: https://github.com/KSPP/linux/issues/221
Link: https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
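A before/after sketch of the pattern; the struct layout is a stand-in, not copied from the rmnet driver:

    /* before: a zero-length array is the only way to get a "flexible"
     * member inside a union, but it is deprecated */
    union {
            struct example_hdr hdr;         /* assumed header type */
            u8 data[0];
    };

    /* after: DECLARE_FLEX_ARRAY() (from <linux/stddef.h>) wraps the
     * member so a C99 flexible array is legal inside the union */
    union {
            struct example_hdr hdr;
            DECLARE_FLEX_ARRAY(u8, data);
    };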
2022-09-28  net: lan966x: Add offload support for ets  (Horatiu Vultur)
Add offload support for the ets qdisc, which allows mixing strict-priority bands with bandwidth-sharing bands. The ets qdisc needs to be attached as the root qdisc.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
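A possible root-qdisc configuration, with illustrative values (swp0 stands in for a lan966x port; 4 strict bands above 4 bandwidth-sharing bands whose quanta set the sharing ratio):

    tc qdisc replace dev swp0 root handle 1: ets bands 8 strict 4 \
        quanta 4000 3000 2000 1000 priomap 7 6 5 4 3 2 1 0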
2022-09-28  net: lan966x: Add offload support for cbs  (Horatiu Vultur)
The lan966x switch supports a credit-based shaper in hardware, according to IEEE Std 802.1Q-2018 Section 8.6.8.2. Add support for cbs configuration on egress ports of the lan966x switch.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-28  net: lan966x: Add offload support for tbf  (Horatiu Vultur)
The tbf qdisc allows attaching a shaper to traffic egressing a port or a queue. On ports, it is attached directly to the root; on queues, it is attached to one of the classes of the parent qdisc.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-27  mlxsw: core_acl_flex_actions: Split memcpy() of struct flow_action_cookie flexible array  (Kees Cook)
To work around the compiler's limited ability to see into composite flexible-array structs (as detailed in the coming memcpy() hardening series [1]), split the memcpy() of the header and the payload so no false-positive run-time overflow warning will be generated.

[1] https://lore.kernel.org/linux-hardening/20220901065914.1417829-2-keescook@chromium.org

Cc: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Petr Machata <petrm@nvidia.com>
Link: https://lore.kernel.org/r/20220927004033.1942992-1-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
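A sketch of the split-copy pattern for struct flow_action_cookie { u32 cookie_len; u8 cookie[]; }; the function name is illustrative:

    static struct flow_action_cookie *
    cookie_clone(const struct flow_action_cookie *src)
    {
            struct flow_action_cookie *dst;

            dst = kmalloc(struct_size(dst, cookie, src->cookie_len),
                          GFP_KERNEL);
            if (!dst)
                    return NULL;

            /* copy the header and the flexible-array payload separately,
             * so the fortified memcpy() never sees a single copy spanning
             * the composite struct */
            dst->cookie_len = src->cookie_len;
            memcpy(dst->cookie, src->cookie, src->cookie_len);

            return dst;
    }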
2022-09-27  net: stmmac: Minor spell fix related to 'stmmac_clk_csr_set()'  (Bhupesh Sharma)
Fix a minor spelling error in a comment referring to 'stmmac_clk_csr_set()' inside the 'stmmac_probe_config_dt()' function.

Cc: Biao Huang <biao.huang@mediatek.com>
Signed-off-by: Bhupesh Sharma <bhupesh.sharma@linaro.org>
Link: https://lore.kernel.org/r/20220924104514.1666947-1-bhupesh.sharma@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27  net/mlx5: Remove unused structs  (Gal Pressman)
Remove structs which are no longer used in the driver:
  mlx5dr_cmd_qp_create_attr
  mlx5_fs_dr_ns
  mlx5_pas

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  net/mlx5: Remove unused functions  (Gal Pressman)
Remove functions which are no longer used in the driver:
  mlx5e_ipsec_is_tx_flow
  mlx5_health_flush
  get_cqe_enhanced_num_mini_cqes
  get_cqe_l3_hdr_type
  mlx5_fs_is_ipsec_flow
  _mlx5_fs_is_outer_ipproto_flow
  mlx5_fs_is_outer_tcp_flow
  mlx5_fs_is_outer_udp_flow
  mlx5_fs_is_vxlan_flow
  mlx5_fs_is_outer_ipsec_flow

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  net/mlx5: detect and enable bypass port select flow table  (Liu, Changcheng)
Use the port_select_flow_table_bypass bit of the port selection capability to detect and enable explicit port affinity, even when in lag hash mode.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  net/mlx5: Lag, enable hash mode by default for all NICs  (Liu, Changcheng)
The firmware supports adding a steering rule to catch egress traffic of the QPs/TISes which have port affinity set explicitly in hash mode. Enable that mode for NICs with 2 ports as well.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  net/mlx5: Lag, set active ports if support bypass port select flow table  (Liu, Changcheng)
The active_port bit mask indicates the currently active ports: a set bit means the port is active. Update the active-ports info in FW to redirect the QP/TIS from inactive ports to other ports.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  RDMA/mlx5: Don't set tx affinity when lag is in hash mode  (Liu, Changcheng)
In hash mode, without setting tx affinity explicitly, the port select flow table decides which port is used for the traffic. If the port_select_flow_table_bypass capability is supported and tx affinity is set explicitly for a QP/TIS, it will be added into the explicit affinity table in FW to check which port is used for the traffic.

1. The overloaded explicit affinity table may affect performance. To avoid this, do not set tx affinity explicitly by default.

2. The packets of the same flow need to be transmitted on the same port. Because the packets of the same flow use different QPs in the slow & fast path, tx affinity shouldn't be set explicitly for these QPs.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  net/mlx5: Add support for NPPS with real time mode  (Aya Levin)
Add support for setting NPPS. NPPS is currently available in REAL_TIME_CLOCK mode only. In addition, allow the user to set the pulse duration. When the NPPS pulse duration is not set explicitly by the user, the driver sets it to 50% of the NPPS period.

Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-09-27  Merge branch 'master' into i2c/for-mergewindow  (Wolfram Sang)
2022-09-27  ice: xsk: drop power of 2 ring size restriction for AF_XDP  (Maciej Fijalkowski)
Multiple customers have reported in the past months that commit 296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2") makes them unable to use a ring size of 8160 in conjunction with AF_XDP. Remove this restriction.

Fixes: 296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2")
CC: Alasdair McWilliam <alasdair.mcwilliam@outlook.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
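With the restriction gone, a non-power-of-2 size such as the one reported can be configured again before binding the XSK socket, e.g. (eth0 stands in for an ice interface):

    ethtool -G eth0 rx 8160 tx 8160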
2022-09-27  ice: xsk: change batched Tx descriptor cleaning  (Maciej Fijalkowski)
AF_XDP Tx descriptor cleaning in the ice driver currently works in a "lazy" way: descriptors are not cleaned immediately after send. Instead, we hold off cleaning until we see that free space in the ring drops below a particular threshold. This was supposed to reduce the amount of unnecessary work related to cleaning, and instead of keeping the ring empty, the ring was kept saturated.

In the AF_XDP realm, cleaning Tx descriptors implies producing them to the CQ. This is a way of letting user space know that a particular descriptor has been sent, as John points out in [0].

We tried to implement serial descriptor cleaning which would be used in conjunction with batched cleaning, but it made the code base more convoluted and probably harder to maintain in the future. Therefore we step away from batched cleaning in its current form in favor of an approach where we set the RS bit on every last descriptor from a batch and always clean at the beginning of ice_xmit_zc().

This means that we give up a bit of Tx performance, but it doesn't hurt the l2fwd scenario, which is way more meaningful than txonly, as the latter can be treated as an AF_XDP based packet generator. l2fwd is not hurt due to the fact that the Tx side is much faster than Rx, and Rx is the one that has to catch up with Tx. FWIW, Tx descriptors are still produced in a batched way.

[0]: https://lore.kernel.org/bpf/62b0a20232920_3573208ab@john.notmuch/

Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-09-27  ionic: change order of devlink port register and netdev register  (Jiri Pirko)
Make sure that the devlink port is registered first and register the netdev after. Unregister the netdev before the devlink port unregister.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Acked-by: Shannon Nelson <snelson@pensando.io>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
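The required ordering, sketched with the core devlink/netdev APIs (error handling trimmed to the essentials):

    err = devlink_port_register(devlink, devlink_port, port_index);
    if (err)
            return err;

    err = register_netdev(netdev);
    if (err) {
            devlink_port_unregister(devlink_port);
            return err;
    }

    /* ... and teardown strictly mirrors it: */
    unregister_netdev(netdev);
    devlink_port_unregister(devlink_port);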
2022-09-27  ice: reorder PF/representor devlink port register/unregister flows  (Jiri Pirko)
Make sure that the netdevice is registered/unregistered while the devlink port is registered.

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27  funeth: unregister devlink port after netdevice unregister  (Jiri Pirko)
Fix the order of the destroy_netdev() flow and unregister the devlink port after calling unregister_netdev().

Signed-off-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27  net: ethernet: mtk_eth_soc: fix usage of foe_entry_size  (Daniel Golle)
As sizeof(hwe->data) can no longer be used since the actual size depends on foe_entry_size, commit 9d8cb4c096ab02 ("net: ethernet: mtk_eth_soc: add foe_entry_size to mtk_eth_soc") replaced the use of sizeof(hwe->data). However, replacing it with ppe->eth->soc->foe_entry_size is wrong, as foe_entry_size represents the size of the whole descriptor and not just the 'data' field. Fix this by subtracting the size of the struct's only other field, 'ib1', so we actually end up with the correct size to be copied to the data field.

Reported-by: Chen Minqiang <ptpt52@gmail.com>
Fixes: 9d8cb4c096ab02 ("net: ethernet: mtk_eth_soc: add foe_entry_size to mtk_eth_soc")
Signed-off-by: Daniel Golle <daniel@makrotopia.org>
Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/YzBqPIgQR2gLrPoK@makrotopia.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
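The corrected copy then looks roughly like this (surrounding context assumed from the commit text; foe_entry_size covers ib1 + data, so the data-only copy subtracts ib1):

    memcpy(&hwe->data, &entry->data,
           ppe->eth->soc->foe_entry_size - sizeof(hwe->ib1));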
2022-09-27  net: ethernet: mtk_eth_soc: fix wrong use of new helper function  (Daniel Golle)
In the function mtk_foe_entry_set_vlan(), the call to the field accessor macro

    FIELD_GET(MTK_FOE_IB1_BIND_VLAN_LAYER, entry->ib1)

has been wrongly replaced by

    mtk_prep_ib1_vlan_layer(eth, entry->ib1)

Use the correct helper function mtk_get_ib1_vlan_layer instead.

Reported-by: Chen Minqiang <ptpt52@gmail.com>
Fixes: 03a3180e5c09e1 ("net: ethernet: mtk_eth_soc: introduce flow offloading support for mt7986")
Signed-off-by: Daniel Golle <daniel@makrotopia.org>
Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/YzBp+Kk04CFDys4L@makrotopia.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-27  Merge branch 'mlx5-vfio' into mlx5-next  (Leon Romanovsky)
Merge net/mlx5 dependencies for device DMA logging.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2022-09-27  net: stmmac: power up/down serdes in stmmac_open/release  (Junxiao Chang)
This commit fixes a DMA engine reset timeout issue in suspend/resume with the ADLink I-Pi SMARC Plus board, where dmesg shows:

...
[ 54.678271] PM: suspend exit
[ 54.754066] intel-eth-pci 0000:00:1d.2 enp0s29f2: PHY [stmmac-3:01] driver [Maxlinear Ethernet GPY215B] (irq=POLL)
[ 54.755808] intel-eth-pci 0000:00:1d.2 enp0s29f2: Register MEM_TYPE_PAGE_POOL RxQ-0
...
[ 54.780482] intel-eth-pci 0000:00:1d.2 enp0s29f2: Register MEM_TYPE_PAGE_POOL RxQ-7
[ 55.784098] intel-eth-pci 0000:00:1d.2: Failed to reset the dma
[ 55.784111] intel-eth-pci 0000:00:1d.2 enp0s29f2: stmmac_hw_setup: DMA engine initialization failed
[ 55.784115] intel-eth-pci 0000:00:1d.2 enp0s29f2: stmmac_open: Hw setup failed
...

The issue is related to the serdes, which impacts the clock. There is a serdes in the ADLink I-Pi SMARC board's ethernet controller; please refer to commit b9663b7ca6ff78 ("net: stmmac: Enable SERDES power up/down sequence") for detail. When the issue is reproduced, the DMA engine clock is not ready because the serdes is not powered up.

To reproduce the DMA engine reset timeout issue with hardware that has a serdes in the GBE controller, install Ubuntu. In the Ubuntu GUI, click the "Power Off/Log Out" -> "Suspend" menu; this disables the network interface, then goes to sleep mode. When the system wakes up, it enables the network interface again. The stmmac driver is called in this way:

1. stmmac_release: Stop the network interface. In this function, it disables the DMA engine and the network interface;
2. stmmac_suspend: It is called in the kernel suspend flow. But because the network interface has been disabled (netif_running(ndev) is false), it does nothing and returns directly;
3. The system goes into S3 or S0ix state. Some time later, the system is woken up by keyboard or mouse;
4. stmmac_resume: It does nothing because the network interface has been disabled;
5. stmmac_open: It is called to enable the network interface again. The DMA engine is initialized in this API, but the serdes is not powered on, so there will be a DMA engine reset timeout issue.

Similarly, serdes powerdown should be added in stmmac_release. The network interface might be disabled by the command "ifconfig eth0 down"; the DMA engine, phy and mac have been disabled in the ndo_stop callback, and the serdes should be powered down as well. It doesn't make sense for the serdes to stay on while the other components have been turned off.

If the ethernet interface is in the enabled state (netif_running(ndev) is true) before suspend/resume, the issue can't be reproduced, because the serdes can be powered up in stmmac_resume. Because serdes_powerup is added in stmmac_open, it no longer needs to be called in the probe function.

Fixes: b9663b7ca6ff78 ("net: stmmac: Enable SERDES power up/down sequence")
Signed-off-by: Junxiao Chang <junxiao.chang@intel.com>
Reviewed-by: Voon Weifeng <weifeng.voon@intel.com>
Tested-by: Jimmy JS Chen <jimmyjs.chen@adlinktech.com>
Tested-by: Looi, Hong Aun <hong.aun.looi@intel.com>
Link: https://lore.kernel.org/r/20220923050448.1220250-1-junxiao.chang@intel.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
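A sketch of where the serdes hooks now run; serdes_powerup/serdes_powerdown are the real plat_stmmacenet_data callbacks, but the surrounding functions are heavily simplified:

    static int stmmac_open(struct net_device *dev)
    {
            struct stmmac_priv *priv = netdev_priv(dev);
            int ret;

            if (priv->plat->serdes_powerup) {
                    ret = priv->plat->serdes_powerup(dev, priv->plat->bsp_priv);
                    if (ret < 0)
                            return ret;
            }

            /* ... DMA engine init below now sees a running serdes clock ... */
            return 0;
    }

    static int stmmac_release(struct net_device *dev)
    {
            struct stmmac_priv *priv = netdev_priv(dev);

            /* ... stop DMA, PHY and MAC first ... */

            if (priv->plat->serdes_powerdown)
                    priv->plat->serdes_powerdown(dev, priv->plat->bsp_priv);

            return 0;
    }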