path: root/drivers/net
2022-01-06Revert "net/mlx5: Add retry mechanism to the command entry index allocation"Moshe Shemesh
This reverts commit 410bd754cd73c4a2ac3856d9a03d7b08f9c906bf. The reverted commit had added a retry mechanism to the command entry index allocation. The previous patch ensures that there is a free command entry index once the command work handler holds the command semaphore. Thus the retry mechanism is not needed. Fixes: 410bd754cd73 ("net/mlx5: Add retry mechanism to the command entry index allocation") Signed-off-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06  net/mlx5: Set command entry semaphore up once got index free  (Moshe Shemesh)

Avoid a race where the command work handler may fail to allocate a command entry index, by holding the command semaphore down until the command entry index is freed.

Fixes: 410bd754cd73 ("net/mlx5: Add retry mechanism to the command entry index allocation")
Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Eran Ben Elisha <eranbe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

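A minimal sketch of the ordering these two patches establish (simplified; the helper names are approximations, the real code lives in drivers/net/ethernet/mellanox/mlx5/core/cmd.c):

    /* Hold the semaphore before allocating: a free index is then
     * guaranteed, so no retry loop is needed.
     */
    static int cmd_take_entry(struct mlx5_cmd *cmd)
    {
            down(&cmd->sem);
            return cmd_alloc_index(cmd);    /* cannot fail under the sem */
    }

    static void cmd_put_entry(struct mlx5_cmd *cmd, int idx)
    {
            /* Mirror order on release: free the index, then up(). */
            cmd_free_index(cmd, idx);
            up(&cmd->sem);
    }
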
2022-01-06  net/mlx5e: Sync VXLAN udp ports during uplink representor profile change  (Maor Dickman)

Currently, during NIC profile disablement, all VXLAN udp ports offloaded to the HW are flushed, and during its enablement the driver sends a notification to inform the stack that the entire UDP tunnel port state has been lost. The uplink representor doesn't have the same behavior, which can leave VXLAN udp port offload in a bad state when moving between modes while a VXLAN interface exists. Fix this by aligning the uplink representor profile behavior with the NIC behavior.

Fixes: 84db66124714 ("net/mlx5e: Move set vxlan nic info to profile init")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5: Fix access to sf_dev_table on allocation failure  (Shay Drory)

Even when SF devices are supported, the SF device table allocation can still fail. In such a case mlx5_sf_dev_supported() still reports true, but the SF device table is invalid, which can result in a NULL table access. Hence, fix it by adding a NULL table check.

Fixes: 1958fc2f0712 ("net/mlx5: SF, Add auxiliary device driver")
Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Fix matching on modified inner ip_ecn bits  (Paul Blakey)

Tunnel devices follow RFC 6040, so during decapsulation the inner ip_ecn might change depending on the inner and outer ip_ecn as follows:

    +---------+----------------------------------------+
    |Arriving |          Arriving Outer Header         |
    |  Inner  +---------+---------+---------+----------+
    |  Header | Not-ECT | ECT(0)  | ECT(1)  |    CE    |
    +---------+---------+---------+---------+----------+
    | Not-ECT | Not-ECT | Not-ECT | Not-ECT |  <drop>  |
    |  ECT(0) |  ECT(0) | ECT(0)  | ECT(1)  |   CE*    |
    |  ECT(1) |  ECT(1) | ECT(1)  | ECT(1)* |   CE*    |
    |    CE   |   CE    |  CE     |  CE     |   CE     |
    +---------+---------+---------+---------+----------+

Cells marked above are changed from the original inner packet's ip_ecn value. TC then matches on the modified inner ip_ecn, but HW offload, which matches the inner ip_ecn value before decap, will fail. Fix that by mapping all the cases of outer and inner ip_ecn matching, and only supporting cases where we know the inner value wouldn't be changed by decap, or, in the outer ip_ecn=CE case, where the inner ip_ecn doesn't matter.

Fixes: bcef735c59f2 ("net/mlx5e: Offload TC matching on tos/ttl for ip tunnels")
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

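The table corresponds to the following mapping (a self-contained sketch using the INET_ECN_* constants from include/net/inet_ecn.h; the helper name is illustrative, not driver code):

    #include <net/inet_ecn.h>

    /* Inner ip_ecn as seen after RFC 6040 decap, per the table above. */
    static u8 decap_inner_ecn(u8 outer, u8 inner)
    {
            if (outer == INET_ECN_CE)
                    /* Not-ECT inner is dropped; ECT(0)/ECT(1) become CE. */
                    return inner == INET_ECN_NOT_ECT ? inner : INET_ECN_CE;
            if (outer == INET_ECN_ECT_1 && inner == INET_ECN_ECT_0)
                    return INET_ECN_ECT_1;
            return inner;   /* all remaining cells keep the inner value */
    }
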
2022-01-06Revert "net/mlx5e: Block offload of outer header csum for GRE tunnel"Aya Levin
This reverts commit 54e1217b90486c94b26f24dcee1ee5ef5372f832. Although the NIC doesn't support offload of outer header CSUM, using gso_partial_features allows offloading the tunnel's segmentation. The driver relies on the stack CSUM calculation of the outer header. For this, NETIF_F_GSO_GRE_CSUM must be a member of the device's features. Fixes: 54e1217b9048 ("net/mlx5e: Block offload of outer header csum for GRE tunnel") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-06Revert "net/mlx5e: Block offload of outer header csum for UDP tunnels"Aya Levin
This reverts commit 6d6727dddc7f93fcc155cb8d0c49c29ae0e71122. Although the NIC doesn't support offload of outer header CSUM, using gso_partial_features allows offloading the tunnel's segmentation. The driver relies on the stack CSUM calculation of the outer header. For this, NETIF_F_GSO_UDP_TUNNEL_CSUM must be a member of the device's features. Fixes: 6d6727dddc7f ("net/mlx5e: Block offload of outer header csum for UDP tunnels") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Gal Pressman <gal@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
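Both reverts restore the usual GSO-partial pattern, roughly like this (a sketch of generic netdev feature setup, not the literal mlx5e code):

    /* Segmentation is offloaded; the outer checksum is left to the
     * stack via GSO partial.
     */
    netdev->hw_features          |= NETIF_F_GSO_GRE |
                                    NETIF_F_GSO_UDP_TUNNEL;
    netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM |
                                    NETIF_F_GSO_UDP_TUNNEL_CSUM;
    netdev->features             |= NETIF_F_GSO_PARTIAL;
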
2022-01-06  net/mlx5e: Don't block routes with nexthop objects in SW  (Maor Dickman)

Routes with nexthop objects are currently not supported by multipath offload and any attempt to use them is blocked; however, this also blocks adding SW routes with a nexthop. Resolve this by returning NOTIFY_DONE instead of an error, which allows such a route to be created in SW but not offloaded. This fix also solves an issue that blocked adding such routes on different devices, due to a missing check of whether the route's FIB device is one of the multipath devices.

Fixes: 6a87afc072c3 ("mlx5: Fail attempts to use routes with nexthop objects")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

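The shape of the fix, roughly (a sketch with approximated names; the real code is the mlx5 multipath FIB event handler):

    static int mlx5_lag_fib_event(struct notifier_block *nb,
                                  unsigned long event, void *ptr)
    {
            struct fib_entry_notifier_info *fen_info =
                    container_of(ptr, struct fib_entry_notifier_info, info);

            /* Routes using nexthop objects are not offloaded: let the
             * route be created in SW instead of failing the notifier.
             */
            if (fen_info->fi->nh)
                    return NOTIFY_DONE;

            /* ... multipath offload handling ... */
            return NOTIFY_DONE;
    }
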
2022-01-06  net/mlx5e: Fix wrong usage of fib_info_nh when routes with nexthop objects are used  (Maor Dickman)

Creating routes with nexthop objects while in switchdev mode leads to access to unallocated memory and triggers the call trace below, due to hitting a WARN_ON. This is caused by illegal usage of fib_info_nh in the TC tunnel FIB event handling to resolve the FIB device while the fib_info is built with a nexthop. Fixed by ignoring attempts to use nexthop objects with routes until support can be properly added.

    WARNING: CPU: 1 PID: 1724 at include/net/nexthop.h:468 mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
    CPU: 1 PID: 1724 Comm: ip Not tainted 5.15.0_for_upstream_min_debug_2021_11_09_02_04 #1
    RIP: 0010:mlx5e_tc_tun_fib_event+0x448/0x570 [mlx5_core]
    RSP: 0018:ffff8881349f7910 EFLAGS: 00010202
    RAX: ffff8881492f1980 RBX: ffff8881349f79e8 RCX: 0000000000000000
    RDX: ffff8881349f79e8 RSI: 0000000000000000 RDI: 0000000000000000
    RBP: ffff8881349f7950 R08: 00000000000000fe R09: 0000000000000001
    R10: 0000000000000000 R11: 0000000000000000 R12: ffff88811e9d0000
    R13: ffff88810eb62000 R14: ffff888106710268 R15: 0000000000000018
    FS:  00007f1d5ca6e800(0000) GS:ffff88852c880000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007ffedba44ff8 CR3: 0000000129808004 CR4: 0000000000370ea0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Call Trace:
     <TASK>
     atomic_notifier_call_chain+0x42/0x60
     call_fib_notifiers+0x21/0x40
     fib_table_insert+0x479/0x6d0
     ? try_charge_memcg+0x480/0x6d0
     inet_rtm_newroute+0x65/0xb0
     rtnetlink_rcv_msg+0x2af/0x360
     ? page_add_file_rmap+0x13/0x130
     ? do_set_pte+0xcd/0x120
     ? rtnl_calcit.isra.0+0x120/0x120
     netlink_rcv_skb+0x4e/0xf0
     netlink_unicast+0x1ee/0x2b0
     netlink_sendmsg+0x22e/0x460
     sock_sendmsg+0x33/0x40
     ____sys_sendmsg+0x1d1/0x1f0
     ___sys_sendmsg+0xab/0xf0
     ? __mod_memcg_lruvec_state+0x40/0x60
     ? __mod_lruvec_page_state+0x95/0xd0
     ? page_add_new_anon_rmap+0x4e/0xf0
     ? __handle_mm_fault+0xec6/0x1470
     __sys_sendmsg+0x51/0x90
     ? internal_get_user_pages_fast+0x480/0xa10
     do_syscall_64+0x3d/0x90
     entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 8914add2c9e5 ("net/mlx5e: Handle FIB events to update tunnel endpoint device")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Fix nullptr on deleting mirroring rule  (Dima Chumak)

Deleting a TC rule with multiple outputs, one of which is an internal port, like this one:

    tc filter del dev enp8s0f0_0 ingress protocol ip pref 5 flower \
        dst_mac 0c:42:a1:d1:d0:88 \
        src_mac e4:ea:09:08:00:02 \
        action tunnel_key set \
            src_ip 0.0.0.0 \
            dst_ip 7.7.7.8 \
            id 8 \
            dst_port 4789 \
        action mirred egress mirror dev vxlan_sys_4789 pipe \
        action mirred egress redirect dev enp8s0f0_1

triggers a call trace:

    BUG: kernel NULL pointer dereference, address: 0000000000000230
    RIP: 0010:del_sw_hw_rule+0x2b/0x1f0 [mlx5_core]
    Call Trace:
     tree_remove_node+0x16/0x30 [mlx5_core]
     mlx5_del_flow_rules+0x51/0x160 [mlx5_core]
     __mlx5_eswitch_del_rule+0x4b/0x170 [mlx5_core]
     mlx5e_tc_del_fdb_flow+0x295/0x550 [mlx5_core]
     mlx5e_flow_put+0x1f/0x70 [mlx5_core]
     mlx5e_delete_flower+0x286/0x390 [mlx5_core]
     tc_setup_cb_destroy+0xac/0x170
     fl_hw_destroy_filter+0x94/0xc0 [cls_flower]
     __fl_delete+0x15e/0x170 [cls_flower]
     fl_delete+0x36/0x80 [cls_flower]
     tc_del_tfilter+0x3a6/0x6e0
     rtnetlink_rcv_msg+0xe5/0x360
     ? rtnl_calcit.isra.0+0x110/0x110
     netlink_rcv_skb+0x46/0x110
     netlink_unicast+0x16b/0x200
     netlink_sendmsg+0x202/0x3d0
     sock_sendmsg+0x33/0x40
     ____sys_sendmsg+0x1c3/0x200
     ? copy_msghdr_from_user+0xd6/0x150
     ___sys_sendmsg+0x88/0xd0
     ? ___sys_recvmsg+0x88/0xc0
     ? do_futex+0x10c/0x460
     __sys_sendmsg+0x59/0xa0
     do_syscall_64+0x48/0x140
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fix by disabling offloading for flows matching esw_is_chain_src_port_rewrite() which have more than one output.

Fixes: 10742efc20a4 ("net/mlx5e: VF tunnel TX traffic offloading")
Signed-off-by: Dima Chumak <dchumak@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Fix page DMA map/unmap attributes  (Aya Levin)

The driver initiates the DMA sync itself, hence it may skip the CPU sync. Add DMA_ATTR_SKIP_CPU_SYNC as an input attribute to both dma_map_page and dma_unmap_page to avoid a redundant sync with the CPU. When forcing the device to work with SWIOTLB, the extra sync might cause data corruption: the driver unmaps the whole page while the hardware used just a part of the bounce buffer, so the sync overrides the entire page with a bounce buffer that only partially contains real data.

Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
Fixes: db05815b36cb ("net/mlx5e: Add XSK zero-copy support")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

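The attribute-taking map/unmap variants are the standard API from include/linux/dma-mapping.h; the pattern is roughly (simplified, with mlx5e-style field names assumed):

    dma_addr_t addr;

    addr = dma_map_page_attrs(rq->pdev, page, 0, PAGE_SIZE,
                              rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
    if (dma_mapping_error(rq->pdev, addr))
            return -ENOMEM;

    /* ... RX processing; the driver dma_sync_*()s only the used part ... */

    dma_unmap_page_attrs(rq->pdev, addr, PAGE_SIZE, rq->buff.map_dir,
                         DMA_ATTR_SKIP_CPU_SYNC);
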
2022-01-06  net/mlx5e: Add recovery flow in case of error CQE  (Gal Pressman)

The rep legacy RQ completion handling was missing the appropriate handling of error CQEs (dump the CQE and queue a recovery work); fix it by calling trigger_report() when needed. Since all CQE handling flows do the exact same error CQE handling, extract it to a common helper function.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Aya Levin <ayal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

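A sketch of such a common helper and its call site (names approximated; get_cqe_opcode() and the MLX5_CQE_* opcodes come from include/linux/mlx5/device.h):

    static void mlx5e_handle_rx_err_cqe(struct mlx5e_rq *rq,
                                        struct mlx5_cqe64 *cqe)
    {
            /* Dump the offending CQE and kick the recovery work. */
            trigger_report(rq, cqe);
            rq->stats->wqe_err++;
    }

    /* In each RX completion handler: */
    if (unlikely(get_cqe_opcode(cqe) != MLX5_CQE_RESP_SEND)) {
            mlx5e_handle_rx_err_cqe(rq, cqe);
            goto wq_cyc_pop;        /* assumed label in the poll loop */
    }
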
2022-01-06  net/mlx5e: TC, Remove redundant error logging  (Roi Dayan)

Remove redundant and trivial error logging when trying to offload a mirred action with unsupported devices. Using OVS could hit those paths a lot, and the errors are still logged in extack.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Refactor set_pflag_cqe_based_moder  (Saeed Mahameed)

Rearrange the code and use the cqe_mode_to_period_mode() helper.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Move HW-GRO and CQE compression check to fix features flow  (Gal Pressman)

Feature dependencies should be resolved in the fix features flow rather than in set features. Move the check that disables HW-GRO in case CQE compression is enabled from set_feature_hw_gro() to mlx5e_fix_features().

Signed-off-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Fix feature check per profile  (Aya Levin)

Remove a redundant space when constructing the feature's enum. Validate against the intended enum value.

Fixes: 6c72cb05d4b8 ("net/mlx5e: Use bitmap field for profile features")
Signed-off-by: Aya Levin <ayal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5e: Unblock setting vid 0 for VF in case PF isn't eswitch manager  (Maor Dickman)

When using libvirt to pass a VF through to a VM, libvirt always sets the VF vlan to 0, even if the user didn't request it; this causes libvirt to fail to boot the VM in case the PF isn't the eswitch owner. An example of such a case is the DPU host PF, which isn't the eswitch manager, so any attempt to pass one of its VFs through using libvirt will fail. Fix it by not returning an error when set VF vlan is called with vid 0.

Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

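The shape of such a fix, as a hedged sketch (function and helper names approximated from the mlx5 eswitch code):

    int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw, int vport,
                                    u16 vlan, u8 qos)
    {
            if (!mlx5_esw_allowed(esw))
                    /* Tolerate libvirt's unconditional "vlan 0". */
                    return vlan ? -EPERM : 0;

            /* ... program the vport vlan ... */
            return 0;
    }
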
2022-01-06  net/mlx5e: Expose FEC counters via ethtool  (Lama Kayal)

Add FEC counters' statistics of corrected_blocks and uncorrectable_blocks, along with their lanes, via ethtool. HW supports corrected_blocks and uncorrectable_blocks counters both for RS-FEC mode and FC-FEC mode. In FC mode these counters are accumulated per lane, while in RS mode the correction method crosses lanes, thus only total corrected_blocks and uncorrectable_blocks are reported in this mode.

Signed-off-by: Lama Kayal <lkayal@nvidia.com>
Reviewed-by: Gal Pressman <gal@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5: Update log_max_qp value to FW max capability  (Maher Sanalla)

log_max_qp in the driver's default profile #2 was set to 18, but FW actually supports 17 at the most - a situation that led to a concerning print when the driver is loaded:

    "log_max_qp value in current profile is 18, changing to HCA capability limit (17)"

The expected behavior of mlx5_profile #2 is to match the maximum FW capability with regard to log_max_qp. Thus, log_max_qp in profile #2 is initialized to a defined static value (0xff), which basically means that when loading this profile, the log_max_qp value will be whatever the currently installed FW supports at most.

Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5: SF, Use all available cpu for setting cpu affinity  (Shay Drory)

Currently all SFs are using the same CPUs. Spread the SFs over the available CPUs in a round-robin manner to achieve a better distribution.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

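Round-robin CPU selection typically looks like this (a self-contained sketch using the standard cpumask API, not the literal driver code):

    #include <linux/cpumask.h>

    /* Pick the next online CPU after last_cpu, wrapping around. */
    static int next_rr_cpu(int last_cpu)
    {
            int cpu = cpumask_next(last_cpu, cpu_online_mask);

            if (cpu >= nr_cpu_ids)
                    cpu = cpumask_first(cpu_online_mask);
            return cpu;
    }
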
2022-01-06  net/mlx5: Introduce API for bulk request and release of IRQs  (Shay Drory)

Currently IRQs are requested one by one. Balancing the spread of IRQs among CPUs under such a scheme requires remembering the cpu mask of the CPUs used for a given device, which complicates the IRQ allocation scheme in a subsequent patch. Hence, prepare the code for bulk IRQ allocation; this enables spreading IRQs among CPUs in a subsequent patch.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5: Split irq_pool_affinity logic to new file  (Shay Drory)

The downstream patches add more functionality to irq_pool_affinity. Move the irq_pool_affinity logic to a new file to ease its coding and maintenance.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5: Move affinity assignment into irq_request  (Shay Drory)

Move the affinity binding of the IRQ into the irq_request function, so the IRQ is bound before it is inserted into the xarray. After this change, the IRQ is ready for use when inserted into the xarray.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

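The resulting order, sketched (helper and field names are approximations; irq_set_affinity_hint() and xa_store() are the standard kernel APIs; error handling elided):

    static struct mlx5_irq *irq_request(struct mlx5_irq_pool *pool,
                                        int index,
                                        const struct cpumask *affinity)
    {
            struct mlx5_irq *irq = irq_alloc_and_request(pool, index);

            /* Bind affinity first: the IRQ must be fully ready for use
             * by the time it becomes visible in the xarray.
             */
            irq_set_affinity_hint(irq->irqn, affinity);
            xa_store(&pool->irqs, index, irq, GFP_KERNEL);
            return irq;
    }
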
2022-01-06  net/mlx5: Introduce control IRQ request API  (Shay Drory)

Currently, the IRQ layer has separate flows for ctrl and comp IRQs, and the distinction between them is made inside the IRQ layer. In order to ease the coding and maintenance of the IRQ layer, introduce a new API for requesting control IRQs - mlx5_ctrl_irq_request(struct mlx5_core_dev *dev).

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-06  net/mlx5: mlx5e_hv_vhca_stats_create return type to void  (Saeed Mahameed)

Callers of this function ignore its return value, and, as reported by Wang Qing, in one of the return paths it returns positive values. Since the return value is ignored anyway, void out the return type of the function.

Reported-by: Wang Qing <wangqing@vivo.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-01-07  lib/crypto: blake2s: include as built-in  (Jason A. Donenfeld)

In preparation for using blake2s in the RNG, we change the way that it is wired in to the build system. Instead of using ifdefs to select the right symbol, we use weak symbols. And because ARM doesn't need the generic implementation, we make the generic one the default only if an arch library doesn't need it already, and then have arch libraries that do need it opt in. So that the arch libraries can remain tristate rather than bool, we then split the shash part from the glue code.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: linux-kbuild@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

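The weak-symbol wiring works roughly like this (an illustrative sketch, not the literal lib/crypto code):

    /* Generic C fallback: this weak definition is used unless an arch
     * library provides a strong blake2s_compress() of its own, in which
     * case the linker picks the arch version - no ifdefs required.
     */
    void __weak blake2s_compress(struct blake2s_state *state,
                                 const u8 *block, size_t nblocks,
                                 const u32 inc)
    {
            blake2s_compress_generic(state, block, nblocks, inc);
    }
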
2022-01-06  ice: Use bitmap_free() to free bitmap  (Christophe JAILLET)

kfree() and bitmap_free() are the same, but using the latter is more consistent when freeing memory allocated with bitmap_zalloc().

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

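The paired pattern (standard bitmap API from include/linux/bitmap.h; illustrative):

    unsigned long *bm = bitmap_zalloc(nbits, GFP_KERNEL);

    if (!bm)
            return -ENOMEM;
    /* ... use the bitmap ... */
    bitmap_free(bm);        /* matches bitmap_zalloc(), not kfree() */
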
2022-01-06  ice: Optimize a few bitmap operations  (Christophe JAILLET)

When a bitmap is local to a function, it is safe to use the non-atomic __[set|clear]_bit(): no concurrent accesses can occur.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

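For example (a generic sketch; the bitmap size is arbitrary):

    DECLARE_BITMAP(mask, 64);       /* on the stack: visible to one task */

    bitmap_zero(mask, 64);
    __set_bit(5, mask);     /* non-atomic: no lock prefix needed */
    __clear_bit(5, mask);
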
2022-01-06  ice: Slightly simply ice_find_free_recp_res_idx  (Christophe JAILLET)

The 'possible_idx' bitmap is set just after it is zeroed, so we can save the first step. The 'free_idx' bitmap is used only at the end of the function, as the result of a bitmap xor operation, so there is no need to explicitly zero it beforehand. So, slightly simplify the code and remove two useless bitmap_zero() calls.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2022-01-06  ice: improve switchdev's slow-path  (Wojciech Drewek)

In the current switchdev implementation, every VF PR is assigned to an individual ring on the switchdev ctrl VSI. For slow-path traffic, the VF->ring mapping is done in software, based on the src_vsi value (by calling the ice_eswitch_get_target_netdev function). With this change, a more efficient HW solution is introduced: for each VF, a src MAC (VF's MAC) filter is created, which forwards packets to the corresponding switchdev ctrl VSI queue based on the src MAC address.

This filter has to be removed and then replayed in case of resetting one VF. Keep information about this rule in repr->mac_rule, so that we know which rule has to be removed and replayed for a given VF. In case of a CORE/GLOBAL reset all rules are removed automatically, and we have to take care of re-adding them; this is done by ice_replay_vsi_adv_rule. When the driver leaves switchdev mode, remove all advanced rules from the switchdev ctrl VSI; this is done by ice_rem_adv_rule_for_vsi. The repr->rule_added flag is needed because in some cases a reset might be triggered before the VF sends a request to add a MAC.

Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2022-01-06  ice: replay advanced rules after reset  (Victor Raj)

ice_replay_vsi_adv_rule replays advanced rules for a given VSI. Exit this function when the list of rules for the given recipe is empty. Do not add a rule when the given vsi_handle does not match the vsi_handle from the rule info. Use ICE_MAX_NUM_RECIPES instead of ICE_SW_LKUP_LAST in order to find advanced rules as well.

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

2022-01-06  fsl/fman: Check for null pointer after calling devm_ioremap  (Jiasheng Jiang)

Since the allocation can fail, devm_ioremap() may return a NULL pointer. Take tgec_initialization() as an example: if the allocation fails, params->base_addr will be a NULL pointer and will be assigned to tgec->regs in tgec_config(). This then causes a NULL pointer dereference in set_mac_address(), which is called by tgec_init(). Therefore, add a sanity check after the call to devm_ioremap().

Fixes: 3933961682a3 ("fsl/fman: Add FMan MAC driver")
Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>

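The check in question, sketched (variable names follow the commit description; the surrounding code is approximated):

    params->base_addr = devm_ioremap(dev, res->start, resource_size(res));
    if (!params->base_addr)
            return -ENOMEM; /* avoid a later NULL deref in set_mac_address() */
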
2022-01-06  veth: Do not record rx queue hint in veth_xmit  (Daniel Borkmann)

Laurent reported that they have seen a significant amount of TCP retransmissions at high throughput from applications residing in network namespaces talking to the outside world via veths. The drops were seen on the qdisc layer (fq_codel, as per systemd default) of the phys device such as ena or virtio_net, due to all traffic hitting a _single_ TX queue _despite_ a multi-queue device. (Note that the setup was _not_ using XDP on veths, as the issue is generic.)

More specifically, after edbea9220251 ("veth: Store queue_mapping independently of XDP prog presence"), which made it all the way back to v4.19.184+, skb_record_rx_queue() would set skb->queue_mapping to 1 (given 1 RX and 1 TX queue by default for veths) instead of leaving it at 0. This is eventually retained, and callbacks like ena_select_queue() will also pick the single queue via netdev_core_pick_tx()'s ndo_select_queue() once all the traffic is forwarded to that device via the upper stack or other means. Similarly, for others not implementing ndo_select_queue(), if XPS is disabled, netdev_pick_tx() might call into skb_tx_hash() and check for a prior skb_rx_queue_recorded() as well.

In general, it is a _bad_ idea for virtual devices like veth to mess around with queue selection [by default]. Given dev->real_num_tx_queues is by default 1, the skb->queue_mapping was left untouched, and so prior to edbea9220251 the netdev_core_pick_tx() could do its job upon __dev_queue_xmit() on the phys device.

Unbreak this and restore the prior behavior by removing the skb_record_rx_queue() from veth_xmit() altogether. If the veth peer has an XDP program attached, then it would return the first RX queue index in xdp_md->rx_queue_index (unless configured in a non-default manner). However, this is still better than breaking the generic case.

Fixes: edbea9220251 ("veth: Store queue_mapping independently of XDP prog presence")
Fixes: 638264dc9022 ("veth: Support per queue XDP ring")
Reported-by: Laurent Bernaille <laurent.bernaille@datadoghq.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: Toshiaki Makita <toshiaki.makita1@gmail.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2022-01-06  ethernet: ibmveth: use default_groups in kobj_type  (Greg Kroah-Hartman)

There are currently 2 ways to create a set of sysfs files for a kobj_type: through the default_attrs field, and through the default_groups field. Move the ibmveth sysfs code to use the default_groups field, which has been the preferred way since aa30f47cf666 ("kobject: Add support for default attribute groups to kobj_type"), so that we can soon get rid of the obsolete default_attrs field.

Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Cristobal Forno <cforno12@linux.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: netdev@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

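The conversion pattern, sketched against ibmveth's pool kobject (ATTRIBUTE_GROUPS() generates veth_pool_groups from veth_pool_attrs; attribute names are assumptions here):

    static struct attribute *veth_pool_attrs[] = {
            &veth_active_attr.attr,
            &veth_num_attr.attr,
            &veth_size_attr.attr,
            NULL,
    };
    ATTRIBUTE_GROUPS(veth_pool);

    static struct kobj_type ktype_veth_pool = {
            .release        = veth_pool_release,
            .sysfs_ops      = &veth_pool_ops,
            .default_groups = veth_pool_groups,     /* was .default_attrs */
    };
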
2022-01-06  rocker: fix a sleeping in atomic bug  (Dan Carpenter)

This code is holding the &ofdpa->flow_tbl_lock spinlock, so it is not allowed to sleep. That means we have to pass the OFDPA_OP_FLAG_NOWAIT flag to ofdpa_flow_tbl_del().

Fixes: 936bd486564a ("rocker: use FIB notifications instead of switchdev calls")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2022-01-06  sfc: Use swap() instead of open coding it  (Jiapeng Chong)

Clean up the following coccicheck warnings:

    ./drivers/net/ethernet/sfc/efx_channels.c:870:36-37: WARNING opportunity for swap().
    ./drivers/net/ethernet/sfc/efx_channels.c:824:36-37: WARNING opportunity for swap().

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

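The transformation coccinelle suggests, in generic form (swap() lives in include/linux/minmax.h):

    /* Open-coded: */
    tmp = a;
    a = b;
    b = tmp;

    /* With the kernel helper: */
    swap(a, b);
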
2022-01-06  net: macb: use .mac_select_pcs() interface  (Russell King (Oracle))

Convert the PCS selection to use mac_select_pcs, which allows the PCS to perform any validation it needs. We must use separate phylink_pcs instances for the USX and SGMII PCS, rather than just changing the "ops" pointer before re-setting it to phylink, as this interface queries the PCS rather than requesting it to be changed.

Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>

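A sketch of the op (the .mac_select_pcs signature is phylink's; the macb field names are approximations):

    static struct phylink_pcs *macb_mac_select_pcs(struct phylink_config *config,
                                                   phy_interface_t interface)
    {
            struct macb *bp = container_of(config, struct macb,
                                           phylink_config);

            if (interface == PHY_INTERFACE_MODE_10GBASER)
                    return &bp->phylink_usx_pcs;
            if (interface == PHY_INTERFACE_MODE_SGMII)
                    return &bp->phylink_sgmii_pcs;
            return NULL;    /* no PCS needed for this interface mode */
    }
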
2022-01-06  ppp: ensure minimum packet size in ppp_write()  (Eric Dumazet)

It seems pretty clear the ppp layer assumed user space would always be kind enough to provide sufficient data in its write() to a ppp device. This patch makes sure the user provides at least 2 bytes. It adds a PPP_PROTO_LEN macro that could replace, in net-next, many occurrences of the hard-coded value 2. Only one occurrence is replaced here, to ease backports to stable kernels.

The bug manifests in the following report:

    BUG: KMSAN: uninit-value in ppp_send_frame+0x28d/0x27c0 drivers/net/ppp/ppp_generic.c:1740
     ppp_send_frame+0x28d/0x27c0 drivers/net/ppp/ppp_generic.c:1740
     __ppp_xmit_process+0x23e/0x4b0 drivers/net/ppp/ppp_generic.c:1640
     ppp_xmit_process+0x1fe/0x480 drivers/net/ppp/ppp_generic.c:1661
     ppp_write+0x5cb/0x5e0 drivers/net/ppp/ppp_generic.c:513
     do_iter_write+0xb0c/0x1500 fs/read_write.c:853
     vfs_writev fs/read_write.c:924 [inline]
     do_writev+0x645/0xe00 fs/read_write.c:967
     __do_sys_writev fs/read_write.c:1040 [inline]
     __se_sys_writev fs/read_write.c:1037 [inline]
     __x64_sys_writev+0xe5/0x120 fs/read_write.c:1037
     do_syscall_x64 arch/x86/entry/common.c:51 [inline]
     do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
     entry_SYSCALL_64_after_hwframe+0x44/0xae

    Uninit was created at:
     slab_post_alloc_hook mm/slab.h:524 [inline]
     slab_alloc_node mm/slub.c:3251 [inline]
     __kmalloc_node_track_caller+0xe0c/0x1510 mm/slub.c:4974
     kmalloc_reserve net/core/skbuff.c:354 [inline]
     __alloc_skb+0x545/0xf90 net/core/skbuff.c:426
     alloc_skb include/linux/skbuff.h:1126 [inline]
     ppp_write+0x11d/0x5e0 drivers/net/ppp/ppp_generic.c:501
     do_iter_write+0xb0c/0x1500 fs/read_write.c:853
     vfs_writev fs/read_write.c:924 [inline]
     do_writev+0x645/0xe00 fs/read_write.c:967
     __do_sys_writev fs/read_write.c:1040 [inline]
     __se_sys_writev fs/read_write.c:1037 [inline]
     __x64_sys_writev+0xe5/0x120 fs/read_write.c:1037
     do_syscall_x64 arch/x86/entry/common.c:51 [inline]
     do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
     entry_SYSCALL_64_after_hwframe+0x44/0xae

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linux-ppp@vger.kernel.org
Reported-by: syzbot <syzkaller@googlegroups.com>
Acked-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

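The added guard, per the description above (a sketch of the relevant ppp_write() lines; surrounding code elided):

    #define PPP_PROTO_LEN   2

    /* In ppp_write(): a frame must carry at least the 2-byte protocol
     * field, otherwise ppp_send_frame() reads uninitialized bytes.
     */
    if (count < PPP_PROTO_LEN)
            return -EINVAL;
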
2022-01-05  net: lantiq_xrx200: convert to build_skb  (Aleksander Jan Bajkowski)

We can increase the efficiency of the rx path by using raw buffers to receive packets, then building SKBs around them just before passing them into the network stack. In contrast, preallocating SKBs too early reduces CPU cache efficiency.

NAT performance results on BT Home Hub 5A (kernel 5.10.89, mtu 1500):

              Down        Up
    Before    577 Mbps    648 Mbps
    After     624 Mbps    695 Mbps

Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

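The build_skb() receive pattern, roughly (a generic sketch with assumed sizes and names, not the literal xrx200 code):

    /* Hand a raw page fragment to the DMA ring instead of an SKB. */
    void *buf = napi_alloc_frag(rx_buf_size);

    /* ... later, once DMA has filled the buffer with a packet ... */
    struct sk_buff *skb = build_skb(buf, rx_buf_size);
    if (!skb) {
            skb_free_frag(buf);
            return -ENOMEM;
    }
    skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
    skb_put(skb, len);      /* len: bytes written by the hardware */
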
2022-01-05  net: lantiq_xrx200: increase napi poll weigth  (Aleksander Jan Bajkowski)

NAT performance results on BT Home Hub 5A (kernel 5.10.89, mtu 1500):

              Down        Up
    Before    545 Mbps    625 Mbps
    After     577 Mbps    648 Mbps

Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-01-05  Merge tag 'linux-can-fixes-for-5.16-20220105' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can  (Jakub Kicinski)

Marc Kleine-Budde says:

====================
pull-request: can 2022-01-05

It consists of 2 patches, both by me. The first one fixes the use of an uninitialized variable in the gs_usb driver; the other one fixes a skb_over_panic in the ISOTP stack in case of reception of too-large ISOTP messages.

* tag 'linux-can-fixes-for-5.16-20220105' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can:
  can: isotp: convert struct tpcon::{idx,len} to unsigned int
  can: gs_usb: fix use of uninitialized variable, detach device on reception of invalid USB data
====================

Link: https://lore.kernel.org/r/20220105205443.1274709-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-01-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-01-05  can: gs_usb: fix use of uninitialized variable, detach device on reception of invalid USB data  (Marc Kleine-Budde)

The received data contains the channel the data is associated with. If the channel number is bigger than the actual number of channels, assume a broken or malicious USB device and shut it down. This fixes the error found by clang:

    | drivers/net/can/usb/gs_usb.c:386:6: error: variable 'dev' is used
    |   uninitialized whenever 'if' condition is true
    |         if (hf->channel >= GS_MAX_INTF)
    |             ^~~~~~~~~~~~~~~~~~~~~~~~~~
    | drivers/net/can/usb/gs_usb.c:474:10: note: uninitialized use occurs here
    |                 hf, dev->gs_hf_size, gs_usb_receive_bulk_callback,
    |                     ^~~

Link: https://lore.kernel.org/all/20211210091158.408326-1-mkl@pengutronix.de
Fixes: d08e973a77d1 ("can: gs_usb: Added support for the GS_USB CAN devices")
Cc: stable@vger.kernel.org
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>

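The resulting guard in the bulk-in completion handler, sketched (the clang report above shows the actual condition; the label and array names are approximations):

    /* In gs_usb_receive_bulk_callback(): */
    if (hf->channel >= GS_MAX_INTF)
            goto device_detach;     /* broken or malicious device */

    dev = usbcan->canch[hf->channel];
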
2022-01-05  net: gemini: allow any RGMII interface mode  (Russell King (Oracle))

The four RGMII interface modes take care of the required RGMII delay configuration at the PHY and should not be limited by the network MAC driver. Sadly, gemini was only permitting RGMII mode with no delays, which would require the delay to be inserted via PCB tracking or by the MAC. However, there are designs that require the PHY to add the delay, which is impossible without gemini permitting the other three PHY interface modes. Fix the driver to allow these.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Link: https://lore.kernel.org/r/E1n4mpT-002PLd-Ha@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

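Accepting all four variants usually reduces to the phy_interface_mode_is_rgmii() helper from include/linux/phy.h (an illustrative sketch; the MAC setup call is hypothetical):

    /* True for RGMII, RGMII_ID, RGMII_RXID and RGMII_TXID alike: the
     * PHY (or board) decides where the delays live, not the MAC.
     */
    if (phy_interface_mode_is_rgmii(phydev->interface))
            gemini_setup_rgmii(port);
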
2022-01-05  net: phy: marvell: configure RGMII delays for 88E1118  (Russell King (Oracle))

Corentin Labbe reports that the SSI 1328 does not work when allowing the PHY to operate at gigabit speeds, but does work with the generic PHY driver. This appears to be because m88e1118_config_init() writes a fixed value to the MSCR register, claiming that this is to enable 1G speeds. However, this always sets bits 4 and 5, enabling RGMII transmit and receive delays. The suspicion is that the original board this was added for required the delays to make 1G speeds work.

Add the necessary configuration for RGMII delays for the 88E1118 to bring this into line with the requirements for RGMII support, and thus make the SSI 1328 work. Corentin Labbe has tested this on gemini-ssi1328 and gemini-ns2502.

Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-01-05  net: phy: marvell: use phy_write_paged() to set MSCR  (Russell King (Oracle))

Use phy_write_paged() in m88e1118_config_init() to set the MSCR value. We leave the other paged write for the LEDs in place, in case the DT register parsing is relying on this page.

Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

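phy_write_paged() selects the page, writes, and restores the previous page under the MDIO bus lock; the call looks roughly like this (the MSCR page/register constants are assumptions borrowed from the marvell driver):

    err = phy_write_paged(phydev, MII_MARVELL_MSCR_PAGE,
                          MII_88E1121_PHY_MSCR_REG, mscr);
    if (err < 0)
            return err;
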
2022-01-05Revert "net: usb: r8152: Add MAC passthrough support for more Lenovo Docks"Aaron Ma
This reverts commit f77b83b5bbab53d2be339184838b19ed2c62c0a5. This change breaks multiple usb to ethernet dongles attached on Lenovo USB hub. Fixes: f77b83b5bbab ("net: usb: r8152: Add MAC passthrough support for more Lenovo Docks") Signed-off-by: Aaron Ma <aaron.ma@canonical.com> Link: https://lore.kernel.org/r/20220105155102.8557-1-aaron.ma@canonical.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-05  Merge tag 'ieee802154-for-net-2022-01-05' of git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan  (Jakub Kicinski)

Stefan Schmidt says:

====================
pull-request: ieee802154 for net 2022-01-05

Below I have a last minute fix for the atusb driver. Pavel fixes a KASAN uninit report for the driver. This version is the minimal impact fix to ease backporting. A bigger rework of the driver to avoid potential similar problems is ongoing and will come through net-next when ready.

* tag 'ieee802154-for-net-2022-01-05' of git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan:
  ieee802154: atusb: fix uninit value in atusb_set_extended_addr
====================

Link: https://lore.kernel.org/r/20220105153914.512305-1-stefan@datenfreihafen.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-01-05  mlxsw: pci: Avoid flow control for EMAD packets  (Danielle Ratson)

Locally generated packets ingress the device through its CPU port. When the CPU port is congested and there are not enough credits in its headroom buffer, packets can be dropped. While this might be acceptable for data packets that traverse the network, configuration packets exchanged between the host and the device (EMADs) should not be subjected to this flow control.

The "sdq_lp" bit in the SDQ (Send Descriptor Queue) context allows the host to instruct the device to treat packets sent on this queue as "local processing" and always process them, regardless of the state of the CPU port's headroom. Add the definition of this bit and set it for the dedicated SDQ reserved for the transmission of EMAD packets. This makes the "local processing" bit in the WQE (Work Queue Element) redundant, so clear it.

Signed-off-by: Danielle Ratson <danieller@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

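In mlxsw terms this would look roughly as follows (a hedged sketch: the bit is defined with the driver's MLXSW_ITEM32() accessor generator, but the field offsets and queue-selection condition here are assumptions, not the real positions):

    /* cmd_mbox_sw2hw_dq_sdq_lp
     * Local-processing: packets on this SDQ bypass CPU-port flow control.
     */
    MLXSW_ITEM32(cmd_mbox, sw2hw_dq, sdq_lp, 0x00, 23, 1);

    /* During SDQ init, set the bit only for the EMAD queue: */
    mlxsw_cmd_mbox_sw2hw_dq_sdq_lp_set(mbox, q->num == emad_sdq_num);
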
2022-01-05  Merge tag 'linux-can-next-for-5.17-20220105' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next  (David S. Miller)

Marc Kleine-Budde says:

====================
pull-request: can-next 2022-01-05

this is a pull request of 15 patches for net-next/master.

The first patch is by me and removes an unused variable from the usb_8dev driver.

Andy Shevchenko contributes a patch for the mcp251x driver, which removes an unneeded assignment.

Jimmy Assarsson's patch for the kvaser_usb driver makes use of units.h in the assignment of frequencies.

Lad Prabhakar provides 2 patches, converting the ti_hecc and the sja1000 drivers to make use of platform_get_irq().

The 10 remaining patches are by Vincent Mailhol. First the etas_es58x driver populates net_device::dev_port. The next 5 patches clean up the handling of CAN error and CAN RTR messages in all drivers. The remaining 4 patches enhance the CAN controller mode flag handling and export it via netlink to user space.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>