path: root/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
2023-10-14  net/mlx5: Replace global mlx5_intf_lock with HCA devcom component lock  (Shay Drory)

mlx5_intf_lock is used to sync between LAG changes and changes to the aux devices of its slave mlx5 core devs. This means that every time mlx5 adds or removes core dev aux devices, it takes this global lock, even if LAG functionality isn't supported on the core dev. This causes a bottleneck when probing VFs/SFs in parallel. Hence, replace mlx5_intf_lock with the HCA devcom component lock, or with no lock at all if LAG functionality isn't supported.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

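A minimal sketch of the locking change, assuming hypothetical helper and API names (mlx5_lag_is_supported(), mlx5_devcom_comp_lock()/unlock(), priv.hca_devcom_comp) rather than the actual driver structures:

    /* Sketch only -- names are assumptions, not the actual mlx5 code. */
    static void mlx5_attach_aux_devices(struct mlx5_core_dev *dev)
    {
        bool lag = mlx5_lag_is_supported(dev);      /* assumed helper */

        if (lag)
            mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);   /* assumed API */
        /* ... add/remove auxiliary devices for this core dev ... */
        if (lag)
            mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
    }

The point is scope: devices that cannot form a LAG never contend on any lock, and devices that can only contend with the other functions of the same HCA, not with every probing VF/SF in the system.
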
2023-10-14  net/mlx5: Refactor LAG peer device lookout bus logic to mlx5 devcom  (Shay Drory)

LAG peer device lookout bus logic required the use of a global lock, mlx5_intf_mutex. As part of the effort to remove this global lock, refactor LAG peer device lookout to use the mlx5 devcom layer.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-07-27  net/mlx5: Devcom, Infrastructure changes  (Roi Dayan)

Update the devcom infrastructure to be more generic, without depending on a max supported ports definition or a device guid, and also more encapsulated, so callers don't need to pass the registered devcom component id per event call.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-07-27  net/mlx5: Use shared code for checking lag is supported  (Roi Dayan)

Move the shared function that checks whether lag is supported into the lag header file.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-06-23  net/mlx5: Lag, Remove duplicate code checking lag is supported  (Roi Dayan)

Remove duplicate function for checking if device has lag support.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-06-07  net/mlx5: Enable 4 ports VF LAG  (Shay Drory)

Now that all preparations are done, enable 4-port VF LAG.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-06-07  net/mlx5: LAG, change mlx5_shared_fdb_supported() to static  (Shay Drory)

mlx5_shared_fdb_supported() is used only in a single file. Change the function to be static.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-06-07  net/mlx5: LAG, generalize handling of shared FDB  (Shay Drory)

Shared FDB handling assumes that a shared FDB can only be created from two devices. In order to support a shared FDB of more than two devices, iterate over all LAG ports instead of hard-coding only the first two LAG ports whenever handling shared FDB, as in the sketch below.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

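The generalization amounts to visiting every LAG port rather than naming the first two. A minimal sketch, assuming the ldev->ports and ldev->pf[i] fields referenced elsewhere in this log:

    /* Sketch: iterate all LAG ports instead of hard-coding ports 0 and 1. */
    int i;

    for (i = 0; i < ldev->ports; i++) {
        struct mlx5_core_dev *dev = ldev->pf[i].dev;

        /* ... pair/unpair the eswitch behind each port's device ... */
    }
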
2023-06-07  net/mlx5: LAG, check if all eswitches are paired for shared FDB  (Shay Drory)

Shared FDB LAG can only work if all eswitches are paired. Also, whenever two eswitches are paired, devcom is marked as ready. Therefore, for a device with two eswitches, checking devcom was sufficient. However, this is not correct for a device with more than two eswitches, which will be introduced in a downstream patch. Hence, check explicitly that all eswitches are paired.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-06-07  {net/RDMA}/mlx5: introduce lag_for_each_peer  (Shay Drory)

Introduce a generic API to iterate over all the devices which are part of the LAG. This API replaces mlx5_lag_get_peer_mdev(), which retrieved only a single peer device from the lag.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

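A usage sketch of the iterator; mlx5_lag_for_each_peer_mdev() is the assumed name of the new API:

    /* Sketch: visit every peer device in the LAG in one pass,
     * replacing a single mlx5_lag_get_peer_mdev() lookup. */
    struct mlx5_core_dev *peer;
    int i;

    mlx5_lag_for_each_peer_mdev(dev, peer, i) {
        /* ... act on each peer device ... */
    }
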
2023-06-02  net/mlx5: Devcom, Rename paired to ready  (Shay Drory)

In a downstream patch, devcom will provide support for more than two devices. The term 'paired' is renamed to 'ready' to convey a more accurate meaning.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-06-02  net/mlx5: E-switch, generalize shared FDB creation  (Shay Drory)

Shared FDB creation is hard-coded for only two eswitches. Generalize shared FDB creation so that any number of eswitches can create a shared FDB.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-02-14  net/mlx5: Lag, Add single RDMA device in multiport mode  (Mark Bloch)

In MultiPort E-Switch mode a single RDMA device is created. This device has multiple RDMA ports that represent the uplink ports connected to the E-Switch. Account for this when creating the RDMA device so it has an additional port for the non-native uplink. As a side effect of this patch, a shared FDB is used in multiport eswitch mode.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-02-14  net/mlx5: Lag, Control MultiPort E-Switch single FDB mode  (Roi Dayan)

MultiPort E-Switch builds on newer hardware's capabilities and introduces a mode where a single E-Switch is used and all the vports and physical ports on the NIC are connected to it. The new mode will allow, in the future, a decrease in the memory used by the driver and advanced features that aren't possible today.

This represents a big change in the current E-Switch implementation in mlx5. Currently, by default, each E-Switch manager manages its own E-Switch, and steering rules in each E-Switch can only forward traffic to the native physical port associated with that E-Switch. There are ways to target non-native physical ports, for example using a bond or via special TC rules, but none of them allows a user to configure the driver to operate by default in such a mode, nor can the driver decide to move to this mode by default, as it's user configuration-driven right now.

While MultiPort E-Switch single FDB mode is the preferred mode, older generations of ConnectX hardware couldn't support it, so it was never implemented. Now that capable hardware is present, start the transition to having this mode by default.

Introduce a devlink parameter to control MultiPort E-Switch single FDB mode. This will allow users to select this mode on their systems right now, and in the future will allow the driver to move to this mode by default.

Example:

    $ devlink dev param set pci/0000:00:0b.0 name esw_multiport value 1 \
      cmode runtime

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

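A possible follow-up to verify the setting (standard devlink parameter syntax; shown here as an assumed usage example, not taken from the commit):

    $ devlink dev param show pci/0000:00:0b.0 name esw_multiport
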
2023-02-04  net/mlx5: Lag, Use flag to check for shared FDB mode  (Mark Bloch)

It's redundant and incorrect to also check that the lag is in sriov mode; check the shared FDB flag on its own.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2023-02-04  net/mlx5: Lag, Use mlx5_lag_dev() instead of derefering pointers  (Roi Dayan)

Use the existing wrapper mlx5_lag_dev() to access the lag object from dev for better maintainability and consistent code.

Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-12-28  net/mlx5: Lag, fix failure to cancel delayed bond work  (Eli Cohen)

Commit 0d4e8ed139d8 ("net/mlx5: Lag, avoid lockdep warnings") accidentally removed a call to cancel delayed bond work, so a queued delayed work item may expire and land on an already destroyed workqueue. Fix by restoring the call to cancel_delayed_work_sync() before destroying the workqueue.

This prevents a call trace such as this:

[ 329.230417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 329.231444] #PF: supervisor write access in kernel mode
[ 329.232233] #PF: error_code(0x0002) - not-present page
[ 329.233007] PGD 0 P4D 0
[ 329.233476] Oops: 0002 [#1] SMP
[ 329.234012] CPU: 5 PID: 145 Comm: kworker/u20:4 Tainted: G OE 6.0.0-rc5_mlnx #1
[ 329.235282] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[ 329.236868] Workqueue: mlx5_cmd_0000:08:00.1 cmd_work_handler [mlx5_core]
[ 329.237886] RIP: 0010:_raw_spin_lock+0xc/0x20
[ 329.238585] Code: f0 0f b1 17 75 02 f3 c3 89 c6 e9 6f 3c 5f ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 0f 1f 44 00 00 31 c0 ba 01 00 00 00 <f0> 0f b1 17 75 02 f3 c3 89 c6 e9 45 3c 5f ff 0f 1f 44 00 00 0f 1f
[ 329.241156] RSP: 0018:ffffc900001b0e98 EFLAGS: 00010046
[ 329.241940] RAX: 0000000000000000 RBX: ffffffff82374ae0 RCX: 0000000000000000
[ 329.242954] RDX: 0000000000000001 RSI: 0000000000000014 RDI: 0000000000000000
[ 329.243974] RBP: ffff888106ccf000 R08: ffff8881004000c8 R09: ffff888100400000
[ 329.244990] R10: 0000000000000000 R11: ffffffff826669f8 R12: 0000000000002000
[ 329.246009] R13: 0000000000000005 R14: ffff888100aa7ce0 R15: ffff88852ca80000
[ 329.247030] FS: 0000000000000000(0000) GS:ffff88852ca80000(0000) knlGS:0000000000000000
[ 329.248260] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 329.249111] CR2: 0000000000000000 CR3: 000000016d675001 CR4: 0000000000770ee0
[ 329.250133] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 329.251152] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 329.252176] PKRU: 55555554

Fixes: 0d4e8ed139d8 ("net/mlx5: Lag, avoid lockdep warnings")
Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

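The ordering the fix restores, as a short sketch (bond_work and wq are assumed field names on the lag object; the workqueue APIs themselves are standard kernel calls):

    /* Cancel any pending delayed work before its workqueue is destroyed,
     * so nothing can fire after the queue is gone. */
    cancel_delayed_work_sync(&ldev->bond_work);
    destroy_workqueue(ldev->wq);
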
2022-11-29  net/mlx5: Lag, Fix for loop when checking lag  (Chris Mi)

The cited commit adds a for loop to check if each port supports lag or not. But dev is not initialized correctly. Fix it by initializing dev for each iteration, as sketched below.

Fixes: e87c6a832f88 ("net/mlx5: E-switch, Fix duplicate lag creation")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reported-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20221129093006.378840-2-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

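The shape of the bug, reduced to a sketch (lag_is_supported() is a hypothetical stand-in for the real per-port check):

    struct mlx5_core_dev *dev;
    int i;

    /* Buggy: dev is initialized once, so every iteration checks port 0. */
    dev = ldev->pf[0].dev;
    for (i = 0; i < ldev->ports; i++)
        if (!lag_is_supported(dev))
            return;

    /* Fixed: re-initialize dev on each iteration. */
    for (i = 0; i < ldev->ports; i++) {
        dev = ldev->pf[i].dev;
        if (!lag_is_supported(dev))
            return;
    }
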
2022-11-24  net/mlx5: E-switch, Fix duplicate lag creation  (Chris Mi)

If a bond is created first and sriov is then enabled in switchdev mode, the following syndrome will be hit:

mlx5_core 0000:08:00.0: mlx5_cmd_out_err:778:(pid 25543): CREATE_LAG(0x840) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x7d49cb), err(-22)

The reason is that the offending patch removes eswitch mode none. In vf lag, the check for eswitch mode none is replaced by checking if sriov is enabled. But when the driver enables sriov, it triggers the bond workqueue task first and only then sets the sriov number in pci_enable_sriov(). So the check fails. Fix it by checking if sriov is enabled using an eswitch internal counter that is set before triggering the bond workqueue task.

Fixes: f019679ea5f2 ("net/mlx5: E-switch, Remove dependency between sriov and eswitch mode")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-11-21  net/mlx5: Lag, avoid lockdep warnings  (Eli Cohen)

ldev->lock is used to serialize lag change operations. Since multiport eswitch functionality was added, the mode is now changed dynamically. However, acquiring ldev->lock there is not allowed, as it could possibly lead to a deadlock, as reported by the lockdep mechanism:

[ 836.154963] WARNING: possible circular locking dependency detected
[ 836.155850] 5.19.0-rc5_net_56b7df2 #1 Not tainted
[ 836.156549] ------------------------------------------------------
[ 836.157418] handler1/12198 is trying to acquire lock:
[ 836.158178] ffff888187d52b58 (&ldev->lock){+.+.}-{3:3}, at: mlx5_lag_do_mirred+0x3b/0x70 [mlx5_core]
[ 836.159575]
[ 836.159575] but task is already holding lock:
[ 836.160474] ffff8881d4de2930 (&block->cb_lock){++++}-{3:3}, at: tc_setup_cb_add+0x5b/0x200
[ 836.161669] which lock already depends on the new lock.
[ 836.162905]
[ 836.162905] the existing dependency chain (in reverse order) is:
[ 836.164008] -> #3 (&block->cb_lock){++++}-{3:3}:
[ 836.164946]   down_write+0x25/0x60
[ 836.165548]   tcf_block_get_ext+0x1c6/0x5d0
[ 836.166253]   ingress_init+0x74/0xa0 [sch_ingress]
[ 836.167028]   qdisc_create.constprop.0+0x130/0x5e0
[ 836.167805]   tc_modify_qdisc+0x481/0x9f0
[ 836.168490]   rtnetlink_rcv_msg+0x16e/0x5a0
[ 836.169189]   netlink_rcv_skb+0x4e/0xf0
[ 836.169861]   netlink_unicast+0x190/0x250
[ 836.170543]   netlink_sendmsg+0x243/0x4b0
[ 836.171226]   sock_sendmsg+0x33/0x40
[ 836.171860]   ____sys_sendmsg+0x1d1/0x1f0
[ 836.172535]   ___sys_sendmsg+0xab/0xf0
[ 836.173183]   __sys_sendmsg+0x51/0x90
[ 836.173836]   do_syscall_64+0x3d/0x90
[ 836.174471]   entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 836.175282]
[ 836.175282] -> #2 (rtnl_mutex){+.+.}-{3:3}:
[ 836.176190]   __mutex_lock+0x6b/0xf80
[ 836.176830]   register_netdevice_notifier+0x21/0x120
[ 836.177631]   rtnetlink_init+0x2d/0x1e9
[ 836.178289]   netlink_proto_init+0x163/0x179
[ 836.178994]   do_one_initcall+0x63/0x300
[ 836.179672]   kernel_init_freeable+0x2cb/0x31b
[ 836.180403]   kernel_init+0x17/0x140
[ 836.181035]   ret_from_fork+0x1f/0x30
[ 836.181687] -> #1 (pernet_ops_rwsem){+.+.}-{3:3}:
[ 836.182628]   down_write+0x25/0x60
[ 836.183235]   unregister_netdevice_notifier+0x1c/0xb0
[ 836.184029]   mlx5_ib_roce_cleanup+0x94/0x120 [mlx5_ib]
[ 836.184855]   __mlx5_ib_remove+0x35/0x60 [mlx5_ib]
[ 836.185637]   mlx5_eswitch_unregister_vport_reps+0x22f/0x440 [mlx5_core]
[ 836.186698]   auxiliary_bus_remove+0x18/0x30
[ 836.187409]   device_release_driver_internal+0x1f6/0x270
[ 836.188253]   bus_remove_device+0xef/0x160
[ 836.188939]   device_del+0x18b/0x3f0
[ 836.189562]   mlx5_rescan_drivers_locked+0xd6/0x2d0 [mlx5_core]
[ 836.190516]   mlx5_lag_remove_devices+0x69/0xe0 [mlx5_core]
[ 836.191414]   mlx5_do_bond_work+0x441/0x620 [mlx5_core]
[ 836.192278]   process_one_work+0x25c/0x590
[ 836.192963]   worker_thread+0x4f/0x3d0
[ 836.193609]   kthread+0xcb/0xf0
[ 836.194189]   ret_from_fork+0x1f/0x30
[ 836.194826] -> #0 (&ldev->lock){+.+.}-{3:3}:
[ 836.195734]   __lock_acquire+0x15b8/0x2a10
[ 836.196426]   lock_acquire+0xce/0x2d0
[ 836.197057]   __mutex_lock+0x6b/0xf80
[ 836.197708]   mlx5_lag_do_mirred+0x3b/0x70 [mlx5_core]
[ 836.198575]   tc_act_parse_mirred+0x25b/0x800 [mlx5_core]
[ 836.199467]   parse_tc_actions+0x168/0x5a0 [mlx5_core]
[ 836.200340]   __mlx5e_add_fdb_flow+0x263/0x480 [mlx5_core]
[ 836.201241]   mlx5e_configure_flower+0x8a0/0x1820 [mlx5_core]
[ 836.202187]   tc_setup_cb_add+0xd7/0x200
[ 836.202856]   fl_hw_replace_filter+0x14c/0x1f0 [cls_flower]
[ 836.203739]   fl_change+0xbbe/0x1730 [cls_flower]
[ 836.204501]   tc_new_tfilter+0x407/0xd90
[ 836.205168]   rtnetlink_rcv_msg+0x406/0x5a0
[ 836.205877]   netlink_rcv_skb+0x4e/0xf0
[ 836.206535]   netlink_unicast+0x190/0x250
[ 836.207217]   netlink_sendmsg+0x243/0x4b0
[ 836.207915]   sock_sendmsg+0x33/0x40
[ 836.208538]   ____sys_sendmsg+0x1d1/0x1f0
[ 836.209219]   ___sys_sendmsg+0xab/0xf0
[ 836.209878]   __sys_sendmsg+0x51/0x90
[ 836.210510]   do_syscall_64+0x3d/0x90
[ 836.211137]   entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 836.211954] other info that might help us debug this:
[ 836.213174] Chain exists of:
               &ldev->lock --> rtnl_mutex --> &block->cb_lock
[ 836.214650] Possible unsafe locking scenario:
[ 836.214650]
[ 836.215574]        CPU0                    CPU1
[ 836.216255]        ----                    ----
[ 836.216943]   lock(&block->cb_lock);
[ 836.217518]                                lock(rtnl_mutex);
[ 836.218348]                                lock(&block->cb_lock);
[ 836.219212]   lock(&ldev->lock);
[ 836.219758]
[ 836.219758]  *** DEADLOCK ***
[ 836.219758]
[ 836.220747] 2 locks held by handler1/12198:
[ 836.221390] #0: ffff8881d4de2930 (&block->cb_lock){++++}-{3:3}, at: tc_setup_cb_add+0x5b/0x200
[ 836.222646] #1: ffff88810c9a92c0 (&esw->mode_lock){++++}-{3:3}, at: mlx5_esw_hold+0x39/0x50 [mlx5_core]
[ 836.224063] stack backtrace:
[ 836.224799] CPU: 6 PID: 12198 Comm: handler1 Not tainted 5.19.0-rc5_net_56b7df2 #1
[ 836.225923] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[ 836.227476] Call Trace:
[ 836.227929] <TASK>
[ 836.228332] dump_stack_lvl+0x57/0x7d
[ 836.228924] check_noncircular+0x104/0x120
[ 836.229562] __lock_acquire+0x15b8/0x2a10
[ 836.230201] lock_acquire+0xce/0x2d0
[ 836.230776] ? mlx5_lag_do_mirred+0x3b/0x70 [mlx5_core]
[ 836.231614] ? find_held_lock+0x2b/0x80
[ 836.232221] __mutex_lock+0x6b/0xf80
[ 836.232799] ? mlx5_lag_do_mirred+0x3b/0x70 [mlx5_core]
[ 836.233636] ? mlx5_lag_do_mirred+0x3b/0x70 [mlx5_core]
[ 836.234451] ? xa_load+0xc3/0x190
[ 836.234995] mlx5_lag_do_mirred+0x3b/0x70 [mlx5_core]
[ 836.235803] tc_act_parse_mirred+0x25b/0x800 [mlx5_core]
[ 836.236636] ? tc_act_can_offload_mirred+0x135/0x210 [mlx5_core]
[ 836.237550] parse_tc_actions+0x168/0x5a0 [mlx5_core]
[ 836.238364] __mlx5e_add_fdb_flow+0x263/0x480 [mlx5_core]
[ 836.239202] mlx5e_configure_flower+0x8a0/0x1820 [mlx5_core]
[ 836.240076] ? lock_acquire+0xce/0x2d0
[ 836.240668] ? tc_setup_cb_add+0x5b/0x200
[ 836.241294] tc_setup_cb_add+0xd7/0x200
[ 836.241917] fl_hw_replace_filter+0x14c/0x1f0 [cls_flower]
[ 836.242709] fl_change+0xbbe/0x1730 [cls_flower]
[ 836.243408] tc_new_tfilter+0x407/0xd90
[ 836.244043] ? tc_del_tfilter+0x880/0x880
[ 836.244672] rtnetlink_rcv_msg+0x406/0x5a0
[ 836.245310] ? netlink_deliver_tap+0x7a/0x4b0
[ 836.245991] ? if_nlmsg_stats_size+0x2b0/0x2b0
[ 836.246675] netlink_rcv_skb+0x4e/0xf0
[ 836.258046] netlink_unicast+0x190/0x250
[ 836.258669] netlink_sendmsg+0x243/0x4b0
[ 836.259288] sock_sendmsg+0x33/0x40
[ 836.259857] ____sys_sendmsg+0x1d1/0x1f0
[ 836.260473] ___sys_sendmsg+0xab/0xf0
[ 836.261064] ? lock_acquire+0xce/0x2d0
[ 836.261669] ? find_held_lock+0x2b/0x80
[ 836.262272] ? __fget_files+0xb9/0x190
[ 836.262871] ? __fget_files+0xd3/0x190
[ 836.263462] __sys_sendmsg+0x51/0x90
[ 836.264064] do_syscall_64+0x3d/0x90
[ 836.264652] entry_SYSCALL_64_after_hwframe+0x46/0xb0
[ 836.265425] RIP: 0033:0x7fdbe5e2677d
[ 836.266012] Code: 28 89 54 24 1c 48 89 74 24 10 89 7c 24 08 e8 ba ee ff ff 8b 54 24 1c 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 33 44 89 c7 48 89 44 24 08 e8 ee ee ff ff 48
[ 836.268485] RSP: 002b:00007fdbe48a75a0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e
[ 836.269598] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fdbe5e2677d
[ 836.270576] RDX: 0000000000000000 RSI: 00007fdbe48a7640 RDI: 000000000000003c
[ 836.271565] RBP: 00007fdbe48a8368 R08: 0000000000000000 R09: 0000000000000000
[ 836.272546] R10: 00007fdbe48a84b0 R11: 0000000000000293 R12: 0000557bd17dc860
[ 836.273527] R13: 0000000000000000 R14: 0000557bd17dc860 R15: 00007fdbe48a7640
[ 836.274521] </TASK>

To avoid holding ldev->lock in the configure flow, we queue a work to the lag workqueue and wait on a completion object. In addition, we remove the lock from mlx5_lag_do_mirred() since it is not really protecting anything. It should be noted that an actual deadlock has not been observed.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

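A sketch of the queue-and-wait pattern described above. The wrapper struct and field names are assumptions; the workqueue and completion APIs are standard kernel calls:

    struct mode_change_work {                   /* hypothetical wrapper */
        struct work_struct work;
        struct completion comp;
        /* ... requested mode change ... */
    };

    static void mode_change_fn(struct work_struct *work)
    {
        struct mode_change_work *mcw =
            container_of(work, struct mode_change_work, work);

        /* ldev->lock can be taken safely here, in workqueue context */
        complete(&mcw->comp);
    }

    /* Caller (the tc parsing path) takes no ldev->lock directly: */
    INIT_WORK(&mcw->work, mode_change_fn);
    init_completion(&mcw->comp);
    queue_work(ldev->wq, &mcw->work);
    wait_for_completion(&mcw->comp);

This breaks the lock cycle because ldev->lock is only ever acquired from the lag workqueue, never under &block->cb_lock.
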
2022-09-27  net/mlx5: Lag, enable hash mode by default for all NICs  (Liu, Changcheng)

The firmware supports adding a steering rule to catch egress traffic of the QPs/TISes whose port affinity is set explicitly in hash mode. Enable that mode for NICs with 2 ports as well.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-09-27  net/mlx5: Lag, set active ports if support bypass port select flow table  (Liu, Changcheng)

The active_port bit mask indicates the currently active ports; a set bit means the port is active. Update the active ports info in FW so it can redirect QPs/TISes from inactive ports to other ports.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-09-27  RDMA/mlx5: Don't set tx affinity when lag is in hash mode  (Liu, Changcheng)

In hash mode, without setting tx affinity explicitly, the port select flow table decides which port is used for the traffic. If the port_select_flow_table_bypass capability is supported and tx affinity is set explicitly for a QP/TIS, they will be added into the explicit affinity table in FW to check which port is used for the traffic.

1. The overloaded explicit affinity table may affect performance. To avoid this, do not set tx affinity explicitly by default.

2. The packets of the same flow need to be transmitted on the same port. Because the packets of the same flow use different QPs in the slow & fast paths, tx affinity shouldn't be set explicitly for these QPs.

Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-08-22  net/mlx5: Disable irq when locking lag_lock  (Vlad Buslov)

The lag_lock is taken from both process and softirq contexts, which results in the lockdep warning [0] about a potential deadlock. However, just disabling softirqs by using the *_bh spinlock API is not enough, since it would cause a warning in some contexts where the lock is obtained with hard irqs disabled. To fix the issue, save the current irq state, disable irqs before obtaining the lock, and re-enable irqs from the saved state after releasing it.

[0]:
[Sun Aug 7 13:12:29 2022] ================================
[Sun Aug 7 13:12:29 2022] WARNING: inconsistent lock state
[Sun Aug 7 13:12:29 2022] 5.19.0_for_upstream_debug_2022_08_04_16_06 #1 Not tainted
[Sun Aug 7 13:12:29 2022] --------------------------------
[Sun Aug 7 13:12:29 2022] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[Sun Aug 7 13:12:29 2022] swapper/0/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
[Sun Aug 7 13:12:29 2022] ffffffffa06dc0d8 (lag_lock){+.?.}-{2:2}, at: mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
[Sun Aug 7 13:12:29 2022] {SOFTIRQ-ON-W} state was registered at:
[Sun Aug 7 13:12:29 2022]   lock_acquire+0x1c1/0x550
[Sun Aug 7 13:12:29 2022]   _raw_spin_lock+0x2c/0x40
[Sun Aug 7 13:12:29 2022]   mlx5_lag_add_netdev+0x13b/0x480 [mlx5_core]
[Sun Aug 7 13:12:29 2022]   mlx5e_nic_enable+0x114/0x470 [mlx5_core]
[Sun Aug 7 13:12:29 2022]   mlx5e_attach_netdev+0x30e/0x6a0 [mlx5_core]
[Sun Aug 7 13:12:29 2022]   mlx5e_resume+0x105/0x160 [mlx5_core]
[Sun Aug 7 13:12:29 2022]   mlx5e_probe+0xac3/0x14f0 [mlx5_core]
[Sun Aug 7 13:12:29 2022]   auxiliary_bus_probe+0x9d/0xe0
[Sun Aug 7 13:12:29 2022]   really_probe+0x1e0/0xaa0
[Sun Aug 7 13:12:29 2022]   __driver_probe_device+0x219/0x480
[Sun Aug 7 13:12:29 2022]   driver_probe_device+0x49/0x130
[Sun Aug 7 13:12:29 2022]   __driver_attach+0x1e4/0x4d0
[Sun Aug 7 13:12:29 2022]   bus_for_each_dev+0x11e/0x1a0
[Sun Aug 7 13:12:29 2022]   bus_add_driver+0x3f4/0x5a0
[Sun Aug 7 13:12:29 2022]   driver_register+0x20f/0x390
[Sun Aug 7 13:12:29 2022]   __auxiliary_driver_register+0x14e/0x260
[Sun Aug 7 13:12:29 2022]   mlx5e_init+0x38/0x90 [mlx5_core]
[Sun Aug 7 13:12:29 2022]   vhost_iotlb_itree_augment_rotate+0xcb/0x180 [vhost_iotlb]
[Sun Aug 7 13:12:29 2022]   do_one_initcall+0xc4/0x400
[Sun Aug 7 13:12:29 2022]   do_init_module+0x18a/0x620
[Sun Aug 7 13:12:29 2022]   load_module+0x563a/0x7040
[Sun Aug 7 13:12:29 2022]   __do_sys_finit_module+0x122/0x1d0
[Sun Aug 7 13:12:29 2022]   do_syscall_64+0x3d/0x90
[Sun Aug 7 13:12:29 2022]   entry_SYSCALL_64_after_hwframe+0x46/0xb0
[Sun Aug 7 13:12:29 2022] irq event stamp: 3596508
[Sun Aug 7 13:12:29 2022] hardirqs last enabled at (3596508): [<ffffffff813687c2>] __local_bh_enable_ip+0xa2/0x100
[Sun Aug 7 13:12:29 2022] hardirqs last disabled at (3596507): [<ffffffff813687da>] __local_bh_enable_ip+0xba/0x100
[Sun Aug 7 13:12:29 2022] softirqs last enabled at (3596488): [<ffffffff81368a2a>] irq_exit_rcu+0x11a/0x170
[Sun Aug 7 13:12:29 2022] softirqs last disabled at (3596495): [<ffffffff81368a2a>] irq_exit_rcu+0x11a/0x170
[Sun Aug 7 13:12:29 2022] other info that might help us debug this:
[Sun Aug 7 13:12:29 2022] Possible unsafe locking scenario:
[Sun Aug 7 13:12:29 2022]        CPU0
[Sun Aug 7 13:12:29 2022]        ----
[Sun Aug 7 13:12:29 2022]   lock(lag_lock);
[Sun Aug 7 13:12:29 2022]   <Interrupt>
[Sun Aug 7 13:12:29 2022]     lock(lag_lock);
[Sun Aug 7 13:12:29 2022]  *** DEADLOCK ***
[Sun Aug 7 13:12:29 2022] 4 locks held by swapper/0/0:
[Sun Aug 7 13:12:29 2022] #0: ffffffff84643260 (rcu_read_lock){....}-{1:2}, at: mlx5e_napi_poll+0x43/0x20a0 [mlx5_core]
[Sun Aug 7 13:12:29 2022] #1: ffffffff84643260 (rcu_read_lock){....}-{1:2}, at: netif_receive_skb_list_internal+0x2d7/0xd60
[Sun Aug 7 13:12:29 2022] #2: ffff888144a18b58 (&br->hash_lock){+.-.}-{2:2}, at: br_fdb_update+0x301/0x570
[Sun Aug 7 13:12:29 2022] #3: ffffffff84643260 (rcu_read_lock){....}-{1:2}, at: atomic_notifier_call_chain+0x5/0x1d0
[Sun Aug 7 13:12:29 2022] stack backtrace:
[Sun Aug 7 13:12:29 2022] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.19.0_for_upstream_debug_2022_08_04_16_06 #1
[Sun Aug 7 13:12:29 2022] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[Sun Aug 7 13:12:29 2022] Call Trace:
[Sun Aug 7 13:12:29 2022] <IRQ>
[Sun Aug 7 13:12:29 2022] dump_stack_lvl+0x57/0x7d
[Sun Aug 7 13:12:29 2022] mark_lock.part.0.cold+0x5f/0x92
[Sun Aug 7 13:12:29 2022] ? lock_chain_count+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? unwind_next_frame+0x1c4/0x1b50
[Sun Aug 7 13:12:29 2022] ? secondary_startup_64_no_verify+0xcd/0xdb
[Sun Aug 7 13:12:29 2022] ? mlx5e_napi_poll+0x4e9/0x20a0 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? mlx5e_napi_poll+0x4e9/0x20a0 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? stack_access_ok+0x1d0/0x1d0
[Sun Aug 7 13:12:29 2022] ? start_kernel+0x3a7/0x3c5
[Sun Aug 7 13:12:29 2022] __lock_acquire+0x1260/0x6720
[Sun Aug 7 13:12:29 2022] ? lock_chain_count+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? lock_chain_count+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? register_lock_class+0x1880/0x1880
[Sun Aug 7 13:12:29 2022] ? mark_lock.part.0+0xed/0x3060
[Sun Aug 7 13:12:29 2022] ? stack_trace_save+0x91/0xc0
[Sun Aug 7 13:12:29 2022] lock_acquire+0x1c1/0x550
[Sun Aug 7 13:12:29 2022] ? mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? lockdep_hardirqs_on_prepare+0x400/0x400
[Sun Aug 7 13:12:29 2022] ? __lock_acquire+0xd6f/0x6720
[Sun Aug 7 13:12:29 2022] _raw_spin_lock+0x2c/0x40
[Sun Aug 7 13:12:29 2022] ? mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
[Sun Aug 7 13:12:29 2022] mlx5_lag_is_shared_fdb+0x1f/0x120 [mlx5_core]
[Sun Aug 7 13:12:29 2022] mlx5_esw_bridge_rep_vport_num_vhca_id_get+0x1a0/0x600 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? mlx5_esw_bridge_update_work+0x90/0x90 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? lock_acquire+0x1c1/0x550
[Sun Aug 7 13:12:29 2022] mlx5_esw_bridge_switchdev_event+0x185/0x8f0 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? mlx5_esw_bridge_port_obj_attr_set+0x3e0/0x3e0 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? check_chain_key+0x24a/0x580
[Sun Aug 7 13:12:29 2022] atomic_notifier_call_chain+0xd7/0x1d0
[Sun Aug 7 13:12:29 2022] br_switchdev_fdb_notify+0xea/0x100
[Sun Aug 7 13:12:29 2022] ? br_switchdev_set_port_flag+0x310/0x310
[Sun Aug 7 13:12:29 2022] fdb_notify+0x11b/0x150
[Sun Aug 7 13:12:29 2022] br_fdb_update+0x34c/0x570
[Sun Aug 7 13:12:29 2022] ? lock_chain_count+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? br_fdb_add_local+0x50/0x50
[Sun Aug 7 13:12:29 2022] ? br_allowed_ingress+0x5f/0x1070
[Sun Aug 7 13:12:29 2022] ? check_chain_key+0x24a/0x580
[Sun Aug 7 13:12:29 2022] br_handle_frame_finish+0x786/0x18e0
[Sun Aug 7 13:12:29 2022] ? check_chain_key+0x24a/0x580
[Sun Aug 7 13:12:29 2022] ? br_handle_local_finish+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? __lock_acquire+0xd6f/0x6720
[Sun Aug 7 13:12:29 2022] ? sctp_inet_bind_verify+0x4d/0x190
[Sun Aug 7 13:12:29 2022] ? xlog_unpack_data+0x2e0/0x310
[Sun Aug 7 13:12:29 2022] ? br_handle_local_finish+0x20/0x20
[Sun Aug 7 13:12:29 2022] br_nf_hook_thresh+0x227/0x380 [br_netfilter]
[Sun Aug 7 13:12:29 2022] ? setup_pre_routing+0x460/0x460 [br_netfilter]
[Sun Aug 7 13:12:29 2022] ? br_handle_local_finish+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? br_nf_pre_routing_ipv6+0x48b/0x69c [br_netfilter]
[Sun Aug 7 13:12:29 2022] br_nf_pre_routing_finish_ipv6+0x5c2/0xbf0 [br_netfilter]
[Sun Aug 7 13:12:29 2022] ? br_handle_local_finish+0x20/0x20
[Sun Aug 7 13:12:29 2022] br_nf_pre_routing_ipv6+0x4c6/0x69c [br_netfilter]
[Sun Aug 7 13:12:29 2022] ? br_validate_ipv6+0x9e0/0x9e0 [br_netfilter]
[Sun Aug 7 13:12:29 2022] ? br_nf_forward_arp+0xb70/0xb70 [br_netfilter]
[Sun Aug 7 13:12:29 2022] ? br_nf_pre_routing+0xacf/0x1160 [br_netfilter]
[Sun Aug 7 13:12:29 2022] br_handle_frame+0x8a9/0x1270
[Sun Aug 7 13:12:29 2022] ? br_handle_frame_finish+0x18e0/0x18e0
[Sun Aug 7 13:12:29 2022] ? register_lock_class+0x1880/0x1880
[Sun Aug 7 13:12:29 2022] ? br_handle_local_finish+0x20/0x20
[Sun Aug 7 13:12:29 2022] ? bond_handle_frame+0xf9/0xac0 [bonding]
[Sun Aug 7 13:12:29 2022] ? br_handle_frame_finish+0x18e0/0x18e0
[Sun Aug 7 13:12:29 2022] __netif_receive_skb_core+0x7c0/0x2c70
[Sun Aug 7 13:12:29 2022] ? check_chain_key+0x24a/0x580
[Sun Aug 7 13:12:29 2022] ? generic_xdp_tx+0x5b0/0x5b0
[Sun Aug 7 13:12:29 2022] ? __lock_acquire+0xd6f/0x6720
[Sun Aug 7 13:12:29 2022] ? register_lock_class+0x1880/0x1880
[Sun Aug 7 13:12:29 2022] ? check_chain_key+0x24a/0x580
[Sun Aug 7 13:12:29 2022] __netif_receive_skb_list_core+0x2d7/0x8a0
[Sun Aug 7 13:12:29 2022] ? lock_acquire+0x1c1/0x550
[Sun Aug 7 13:12:29 2022] ? process_backlog+0x960/0x960
[Sun Aug 7 13:12:29 2022] ? lockdep_hardirqs_on_prepare+0x129/0x400
[Sun Aug 7 13:12:29 2022] ? kvm_clock_get_cycles+0x14/0x20
[Sun Aug 7 13:12:29 2022] netif_receive_skb_list_internal+0x5f4/0xd60
[Sun Aug 7 13:12:29 2022] ? do_xdp_generic+0x150/0x150
[Sun Aug 7 13:12:29 2022] ? mlx5e_poll_rx_cq+0xf6b/0x2960 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? mlx5e_poll_ico_cq+0x3d/0x1590 [mlx5_core]
[Sun Aug 7 13:12:29 2022] napi_complete_done+0x188/0x710
[Sun Aug 7 13:12:29 2022] mlx5e_napi_poll+0x4e9/0x20a0 [mlx5_core]
[Sun Aug 7 13:12:29 2022] ? __queue_work+0x53c/0xeb0
[Sun Aug 7 13:12:29 2022] __napi_poll+0x9f/0x540
[Sun Aug 7 13:12:29 2022] net_rx_action+0x420/0xb70
[Sun Aug 7 13:12:29 2022] ? napi_threaded_poll+0x470/0x470
[Sun Aug 7 13:12:29 2022] ? __common_interrupt+0x79/0x1a0
[Sun Aug 7 13:12:29 2022] __do_softirq+0x271/0x92c
[Sun Aug 7 13:12:29 2022] irq_exit_rcu+0x11a/0x170
[Sun Aug 7 13:12:29 2022] common_interrupt+0x7d/0xa0
[Sun Aug 7 13:12:29 2022] </IRQ>
[Sun Aug 7 13:12:29 2022] <TASK>
[Sun Aug 7 13:12:29 2022] asm_common_interrupt+0x22/0x40
[Sun Aug 7 13:12:29 2022] RIP: 0010:default_idle+0x42/0x60
[Sun Aug 7 13:12:29 2022] Code: c1 83 e0 07 48 c1 e9 03 83 c0 03 0f b6 14 11 38 d0 7c 04 84 d2 75 14 8b 05 6b f1 22 02 85 c0 7e 07 0f 00 2d 80 3b 4a 00 fb f4 <c3> 48 c7 c7 e0 07 7e 85 e8 21 bd 40 fe eb de 66 66 2e 0f 1f 84 00
[Sun Aug 7 13:12:29 2022] RSP: 0018:ffffffff84407e18 EFLAGS: 00000242
[Sun Aug 7 13:12:29 2022] RAX: 0000000000000001 RBX: ffffffff84ec4a68 RCX: 1ffffffff0afc0fc
[Sun Aug 7 13:12:29 2022] RDX: 0000000000000004 RSI: 0000000000000000 RDI: ffffffff835b1fac
[Sun Aug 7 13:12:29 2022] RBP: 0000000000000000 R08: 0000000000000001 R09: ffff8884d2c44ac3
[Sun Aug 7 13:12:29 2022] R10: ffffed109a588958 R11: 00000000ffffffff R12: 0000000000000000
[Sun Aug 7 13:12:29 2022] R13: ffffffff84efac20 R14: 0000000000000000 R15: dffffc0000000000
[Sun Aug 7 13:12:29 2022] ? default_idle_call+0xcc/0x460
[Sun Aug 7 13:12:29 2022] default_idle_call+0xec/0x460
[Sun Aug 7 13:12:29 2022] do_idle+0x394/0x450
[Sun Aug 7 13:12:29 2022] ? arch_cpu_idle_exit+0x40/0x40
[Sun Aug 7 13:12:29 2022] cpu_startup_entry+0x19/0x20
[Sun Aug 7 13:12:29 2022] rest_init+0x156/0x250
[Sun Aug 7 13:12:29 2022] arch_call_rest_init+0xf/0x15
[Sun Aug 7 13:12:29 2022] start_kernel+0x3a7/0x3c5
[Sun Aug 7 13:12:29 2022] secondary_startup_64_no_verify+0xcd/0xdb
[Sun Aug 7 13:12:29 2022] </TASK>

Fixes: ff9b7521468b ("net/mlx5: Bridge, support LAG")
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

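The resulting pattern, sketched with the lag_lock named in the trace above (spin_lock_irqsave/spin_unlock_irqrestore are the standard kernel primitives):

    unsigned long flags;

    /* Save the current irq state, take the lock with irqs disabled,
     * and restore the saved state on release. Unlike _bh, this is
     * safe whether the caller runs with hard irqs on or off. */
    spin_lock_irqsave(&lag_lock, flags);
    /* ... read or update lag state ... */
    spin_unlock_irqrestore(&lag_lock, flags);
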
2022-08-22  net/mlx5: LAG, fix logic over MLX5_LAG_FLAG_NDEVS_READY  (Eli Cohen)

Only set MLX5_LAG_FLAG_NDEVS_READY if both netdevices are registered. Doing so guarantees that both ldev->pf[MLX5_LAG_P0].dev and ldev->pf[MLX5_LAG_P1].dev have valid pointers when MLX5_LAG_FLAG_NDEVS_READY is set.

The core issue is asymmetry between setting MLX5_LAG_FLAG_NDEVS_READY and clearing it. It is wrongly set once both ldev->pf[MLX5_LAG_P0].dev and ldev->pf[MLX5_LAG_P1].dev are set; it is rightly cleared when either of ldev->pf[i].netdev is cleared.

Consider the following scenario:
1. PF0 loads and sets ldev->pf[MLX5_LAG_P0].dev to a valid pointer.
2. PF1 loads and sets both ldev->pf[MLX5_LAG_P1].dev and ldev->pf[MLX5_LAG_P1].netdev with valid pointers. This results in MLX5_LAG_FLAG_NDEVS_READY being set.
3. PF0 is unloaded before setting ldev->pf[MLX5_LAG_P0].netdev. MLX5_LAG_FLAG_NDEVS_READY remains set.

Further execution of mlx5_do_bond() will result in a null pointer dereference when calling mlx5_lag_is_multipath().

This patch fixes the following call trace actually encountered:

[ 1293.475195] BUG: kernel NULL pointer dereference, address: 00000000000009a8
[ 1293.478756] #PF: supervisor read access in kernel mode
[ 1293.481320] #PF: error_code(0x0000) - not-present page
[ 1293.483686] PGD 0 P4D 0
[ 1293.484434] Oops: 0000 [#1] SMP PTI
[ 1293.485377] CPU: 1 PID: 23690 Comm: kworker/u16:2 Not tainted 5.18.0-rc5_for_upstream_min_debug_2022_05_05_10_13 #1
[ 1293.488039] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
[ 1293.490836] Workqueue: mlx5_lag mlx5_do_bond_work [mlx5_core]
[ 1293.492448] RIP: 0010:mlx5_lag_is_multipath+0x5/0x50 [mlx5_core]
[ 1293.494044] Code: e8 70 40 ff e0 48 8b 14 24 48 83 05 5c 1a 1b 00 01 e9 19 ff ff ff 48 83 05 47 1a 1b 00 01 eb d7 0f 1f 44 00 00 0f 1f 44 00 00 <48> 8b 87 a8 09 00 00 48 85 c0 74 26 48 83 05 a7 1b 1b 00 01 41 b8
[ 1293.498673] RSP: 0018:ffff88811b2fbe40 EFLAGS: 00010202
[ 1293.500152] RAX: ffff88818a94e1c0 RBX: ffff888165eca6c0 RCX: 0000000000000000
[ 1293.501841] RDX: 0000000000000001 RSI: ffff88818a94e1c0 RDI: 0000000000000000
[ 1293.503585] RBP: 0000000000000000 R08: ffff888119886740 R09: ffff888165eca73c
[ 1293.505286] R10: 0000000000000018 R11: 0000000000000018 R12: ffff88818a94e1c0
[ 1293.506979] R13: ffff888112729800 R14: 0000000000000000 R15: ffff888112729858
[ 1293.508753] FS: 0000000000000000(0000) GS:ffff88852cc40000(0000) knlGS:0000000000000000
[ 1293.510782] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1293.512265] CR2: 00000000000009a8 CR3: 00000001032d4002 CR4: 0000000000370ea0
[ 1293.514001] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1293.515806] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400

Fixes: 8a66e4585979 ("net/mlx5: Change ownership model for lag")
Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

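A sketch of the symmetric condition. Field names follow the message above; keying on state_flags via bitops is an assumption based on the earlier state-machine refactor in this log:

    /* Set the flag only once both netdevs -- not just both devs -- are
     * present, mirroring how it is cleared when either netdev goes away. */
    if (ldev->pf[MLX5_LAG_P0].netdev && ldev->pf[MLX5_LAG_P1].netdev)
        set_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags);
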
2022-07-14  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)

include/net/sock.h
  310731e2f161 ("net: Fix data-races around sysctl_mem.")
  e70f3c701276 ("Revert "net: set SK_MEM_QUANTUM to 4096"")
  https://lore.kernel.org/all/20220711120211.7c8b7cba@canb.auug.org.au/

net/ipv4/fib_semantics.c
  747c14307214 ("ip: fix dflt addr selection for connected nexthop")
  d62607c3fe45 ("net: rename reference+tracking helpers")

net/tls/tls.h
include/net/tls.h
  3d8c51b25a23 ("net/tls: Check for errors in tls_device_init")
  587903142308 ("tls: create an internal header")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-07-06  net/mlx5: Lag, correct get the port select mode str  (Liu, Changcheng)

mode & mode_flags are only updated at the end of mlx5_activate_lag(), so reading them earlier may not reflect the actual mode, as shown in the logic below:

mlx5_activate_lag(struct mlx5_lag *ldev,
|-- unsigned long flags = 0;
|-- err = mlx5_lag_set_flags(ldev, mode, tracker, shared_fdb, &flags);
|-- err = mlx5_create_lag(ldev, tracker, mode, flags);
    |-- mlx5_get_str_port_sel_mode(ldev);
|-- ldev->mode = mode;
|-- ldev->mode_flags = flags;

Use mode & flags as parameters to get the port select mode info.

Fixes: 94db33177819 ("net/mlx5: Support multiport eswitch mode")
Signed-off-by: Liu, Changcheng <jerrliu@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

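The fix, sketched: make the helper take the mode and flags that are about to be applied instead of reading the not-yet-updated ldev fields (signature and flag name assumed from the description and the state-machine refactor elsewhere in this log):

    static const char *mlx5_get_str_port_sel_mode(enum mlx5_lag_mode mode,
                                                  unsigned long flags)
    {
        /* decide from the values being applied, not from ldev */
        if (test_bit(MLX5_LAG_MODE_FLAG_HASH_BASED, &flags))
            return "hash";
        return "queue_affinity";
    }
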
2022-07-06  net/mlx5: Lag, decouple FDB selection and shared FDB  (Mark Bloch)

Multiport eswitch is required to use native FDB selection instead of affinity. This was achieved by passing the shared_fdb flag down the HW lag creation path. While it did accomplish the goal of setting the FDB selection mode to native, it had the side effect of also creating a shared FDB configuration.

This created a few issues:
- TC rules are inserted into a non-active FDB, which means traffic isn't offloaded, as all traffic will reach only a single FDB.
- All wire traffic is treated as if a single physical port received it; while this is true for a bond configuration, it shouldn't be the case for multiport eswitch.

Create a new flag MLX5_LAG_MODE_FLAG_FDB_SEL_MODE_NATIVE to indicate which FDB selection mode should be used.

Fixes: 94db33177819 ("net/mlx5: Support multiport eswitch mode")
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-07-02  net/mlx5: E-switch, Remove dependency between sriov and eswitch mode  (Chris Mi)

Currently, there are three eswitch modes: none, legacy and switchdev. None is the default mode. Remove the redundant none mode, as the eswitch mode should always be either legacy or switchdev.

With this patch, there are two behavior changes:
1. Legacy becomes the default mode. When querying the eswitch mode using devlink, a valid mode is always returned.
2. When disabling sriov, the eswitch mode will not change; only the vfs are unloaded.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-06-09  mellanox: mlx5: avoid uninitialized variable warning with gcc-12  (Linus Torvalds)

gcc-12 started warning about 'tracker' being used uninitialized:

  drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c: In function ‘mlx5_do_bond’:
  drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c:786:28: warning: ‘tracker’ is used uninitialized [-Wuninitialized]
    786 |         struct lag_tracker tracker;
        |                            ^~~~~~~

which seems to be because it doesn't track how the use (and initialization) is bound by the 'do_bond' flag. But admittedly that 'do_bond' usage is fairly complicated, and involves passing it around as an argument to helper functions, so it's somewhat understandable that gcc doesn't see how that all works. This function could be rewritten to make the use of that tracker variable more obviously safe, but for now I'm just adding the forced initialization of it.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

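The forced initialization amounts to zero-initializing the struct at its declaration, which is enough to silence -Wuninitialized regardless of the do_bond control flow:

    struct lag_tracker tracker = { };
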
2022-05-17  net/mlx5: Support multiport eswitch mode  (Eli Cohen)

Multiport eswitch mode is a LAG mode that allows adding rules that forward traffic to a specific physical port without being affected by the LAG affinity configuration. This mode of operation is mutually exclusive with the other LAG modes used by multipath and bonding.

To make the transition between the modes, we maintain a counter of the number of rules specifying one of the uplink representors as the target of a mirred egress redirect action. An example of such a rule would be:

    $ tc filter add dev enp8s0f0_0 prot all root flower dst_mac \
      00:11:22:33:44:55 action mirred egress redirect dev enp8s0f0

If the reference count grows to one and LAG is not in use, we create the LAG in multiport eswitch mode. Other mode changes are not allowed while in this mode. When the reference count reaches zero, we destroy the LAG and let other modes be used if needed.

The logic is also changed such that if forwarding to some uplink destination cannot be guaranteed, we fail the operation so the rule will eventually be in software and not in hardware.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-17  net/mlx5: Remove unused argument  (Eli Cohen)

The argument ndev is not used in mlx5_handle_changeupper_event(). Remove it.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-17  net/mlx5: Lag, refactor lag state machine  (Eli Cohen)

The LAG state machine is implemented using bit flags. However, all these bit flags, except for MLX5_LAG_FLAG_HASH_BASED, are really mutually exclusive. In addition, MLX5_LAG_FLAG_READY is used by bonding to mark whether our netdevices were successfully added to the lag, and does not really belong in the same flags variable as the other flags.

Rename MLX5_LAG_FLAG_READY to MLX5_LAG_FLAG_NDEVS_READY, to better reflect its purpose, and put it in a new flags variable. For the rest of the flags, introduce a mode enum to hold the state of the LAG. Remove the shared fdb boolean flag from struct mlx5_lag and store this configuration as a mode flag. Change all flag-related operations to use standard Linux APIs.

Signed-off-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

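A sketch of the reworked state, with names taken from this log (the exact enum members and field layout are assumptions):

    enum mlx5_lag_mode {                /* mutually exclusive states */
        MLX5_LAG_MODE_NONE,
        MLX5_LAG_MODE_ROCE,
        MLX5_LAG_MODE_SRIOV,
        MLX5_LAG_MODE_MULTIPATH,
    };

    /* per-mode and readiness bits live in separate flags words,
     * manipulated with the standard Linux bitops: */
    set_bit(MLX5_LAG_MODE_FLAG_HASH_BASED, &ldev->mode_flags);
    if (test_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags))
        /* both netdevs were added to the lag */;
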
2022-05-09  net/mlx5: Lag, add debugfs to query hardware lag state  (Mark Bloch)

Lag state has become very complicated, with many modes, flags, types and port selection methods, and future work will add additional features. Add a debugfs to query the current lag state. A new directory named "lag" will be created under the mlx5 debugfs directory. As the driver has debugfs per pci function, the location will be: <debugfs>/mlx5/<BDF>/lag

For example: /sys/kernel/debug/mlx5/0000:08:00.0/lag

The following files are exposed:

- state: Returns "active" or "disabled". If "active", hardware lag is active.

- members: Returns the BDFs of all the members of the lag object.

- type: Returns the type of the lag currently configured. Valid only if hardware lag is active.
  * "roce" - Members are bare metal PFs.
  * "switchdev" - Members are in switchdev mode.
  * "multipath" - ECMP offloads.

- port_sel_mode: Returns the egress port selection method. Valid only if hardware lag is active.
  * "queue_affinity" - Egress port is selected by the QP/SQ affinity.
  * "hash" - Egress port is selected by a hash done on each packet. Controlled by the xmit_hash_policy of the bond device.

- flags: Returns flags that are specific per lag @type. Valid only if hardware lag is active.
  * "shared_fdb" - "on" or "off"; if "on", a single FDB is used.

- mapping: Returns the mapping which is used to select the egress port. Valid only if hardware lag is active. If @port_sel_mode is "hash", returns the active egress ports; the hash result will select only active ports. If @port_sel_mode is "queue_affinity", returns the mapping between the configured port affinity of the QP/SQ and the actual egress port. For example:
  * 1:1 - If the configured affinity is port 1, traffic will egress via port 1.
  * 1:2 - If the configured affinity is port 1, traffic will egress via port 2. This can happen if port 1 is down, or in active/backup mode when port 1 is the backup.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

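With hardware lag active, the state could then be inspected like this (an assumed usage example built from the paths and values described above; the output shown is illustrative):

    $ cat /sys/kernel/debug/mlx5/0000:08:00.0/lag/state
    active
    $ cat /sys/kernel/debug/mlx5/0000:08:00.0/lag/port_sel_mode
    hash
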
2022-05-09  net/mlx5: Lag, use buckets in hash mode  (Mark Bloch)

When in hardware lag and the NIC has more than 2 ports, when one port goes down we need to distribute the traffic between the remaining active ports. For a better spread in such cases, instead of using a 1-to-1 mapping and only 4 slots in the hash, use many: each port will have many slots that point to it. When a port goes down, go over all the slots that pointed to that port and spread them between the remaining active ports. Once the port comes back, restore the default mapping.

We will have number_of_ports * MLX5_LAG_MAX_HASH_BUCKETS slots; each group of MLX5_LAG_MAX_HASH_BUCKETS belongs to a different port. The native mapping is such that:

port 1: The first MLX5_LAG_MAX_HASH_BUCKETS slots are [1, 1, .., 1], which means a packet hashed into one of these slots will hit the wire via port 1.

port 2: The second MLX5_LAG_MAX_HASH_BUCKETS slots are [2, 2, .., 2], which means a packet hashed into one of these slots will hit the wire via port 2.

And the mapping is the same for the rest of the ports. On a failover, let's say port 2 goes down (ports 1, 3, 4 are still up); the new mapping for port 2 will be:

port 2: The second MLX5_LAG_MAX_HASH_BUCKETS slots are [1, 3, 1, 4, .., 4], which means the mapping was changed from the native mapping to one that consists of only the active ports.

With this, if a port goes down, the traffic will be split between the active ports randomly, as sketched below.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

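A sketch of the bucket table and the failover respread (array layout, bucket count value and the pick_active_port() helper are assumptions; MLX5_LAG_MAX_HASH_BUCKETS is named above):

    u8 map[MLX5_MAX_PORTS * MLX5_LAG_MAX_HASH_BUCKETS];
    int p, b;

    /* native mapping: every bucket of port p egresses via port p */
    for (p = 0; p < num_ports; p++)
        for (b = 0; b < MLX5_LAG_MAX_HASH_BUCKETS; b++)
            map[p * MLX5_LAG_MAX_HASH_BUCKETS + b] = p + 1;

    /* failover: respread the downed port's buckets over active ports */
    for (b = 0; b < MLX5_LAG_MAX_HASH_BUCKETS; b++)
        map[down * MLX5_LAG_MAX_HASH_BUCKETS + b] =
            pick_active_port(b);    /* hypothetical helper */
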
2022-05-09  net/mlx5: Lag, refactor dmesg print  (Mark Bloch)

Combine dmesg lag prints into a single function.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Support devices with more than 2 ports  (Mark Bloch)

Increase the define MLX5_MAX_PORTS to 4 as the driver is ready to support NICs with 4 ports.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, use actual number of lag ports  (Mark Bloch)

Refactor the entire lag code to use ldev->ports instead of hard-coded defines (like MLX5_MAX_PORTS) for its operations.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, use hash when in roce lag on 4 ports  (Mark Bloch)

Downstream patches will add support for lag over 4 ports. In that mode we will only use hash as the uplink selection method. Using hash instead of queue affinity (before this patch) offers key advantages like:

- Aligning the port selection method with the method used by the bond device.
- Better packet distribution, where a single queue can transmit from multiple ports (with queue affinity, a queue is bound to a single port regardless of the packet being sent).
- In case of failover, traffic is split between multiple ports and not a single one like in queue affinity.

Going forward, it was decided that queue affinity will be deprecated, as using hash provides a better user experience, which means that on 4-port HCAs hash will always be used. Future work will add hash support for 2-port HCAs as well.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, support single FDB only on 2 ports  (Mark Bloch)

E-Switch currently doesn't support more than 2 E-Switch managers being aggregated under a single hardware lag. Have specific checks to disallow creating lag when the code doesn't support it.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, store number of ports inside lag object  (Mark Bloch)

Store the number of lag ports inside the lag object. The lag object is a single shared object managing the lag state of multiple mlx5 devices on the same physical HCA. Downstream patches will allow hardware lag to be created over devices with more than 2 ports.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, filter non compatible devices  (Mark Bloch)

When searching for a peer lag device, we can filter based on that device's capabilities. A downstream patch will be less strict when filtering compatible devices, removing the limitation that requires exactly MLX5_MAX_PORTS and changing it to a range.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, use lag lock  (Mark Bloch)

Use a lag-specific lock instead of depending on external locks to synchronise lag creation/destruction. With this, taking the E-Switch mode lock is no longer needed for syncing lag logic. Clean up any dead code that is left over, and don't export functions that aren't used outside the E-Switch core code.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, move E-Switch prerequisite check into lag code  (Mark Bloch)

There is no need to expose an E-Switch function for something that can be checked with an already present API inside the lag code.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-05-09  net/mlx5: Lag, expose number of lag ports  (Mark Bloch)

Downstream patches will add support for hardware lag with more than 2 ports. Add a way for users to query the number of lag ports.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-02-23  net/mlx5: Lag, offload active-backup drops to hardware  (Mark Bloch)

In active-backup mode, the backup interface's packets are dropped by the bond device. In switchdev, where TC rules are offloaded to the FDB, this can lead to packets hitting rules in the FDB which, without offload, would have been dropped before reaching TC rules in the kernel. Create a drop rule to make sure packets on inactive ports are dropped before reaching the FDB.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-02-23  net/mlx5: Lag, record inactive state of bond device  (Mark Bloch)

A bond device will drop duplicate packets (received on inactive ports) by default. A flag (all_slaves_active) can be set to override such behaviour. This flag is a global flag per bond device (ALB mode isn't supported by the mlx5 driver, so it can be ignored).

When a NETDEV_CHANGEUPPER / NETDEV_CHANGEINFODATA event is received, check if there is an interface that is inactive. A downstream patch will use this information in order to decide if a drop rule is needed.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2022-02-23  net/mlx5: Lag, don't use magic numbers for ports  (Mark Bloch)

Instead of using 1 & 2 as the port numbers, use an enum value.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

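For example (a sketch; the exact enum names and values are assumptions, though other messages in this log refer to members like MLX5_LAG_P0/MLX5_LAG_P1):

    enum {
        MLX5_LAG_P0,
        MLX5_LAG_P1,
    };

    dev0 = ldev->pf[MLX5_LAG_P0].dev;   /* instead of a bare index */
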
2022-02-23  net/mlx5: Lag, use local variable already defined to access E-Switch  (Mark Bloch)

Use the local variable already defined for dev0 (and add one for dev1) instead of using the devices stored in the ldev structure. This makes the code easier to read.

Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>

2021-11-16  net/mlx5: Lag, update tracker when state change event received  (Maher Sanalla)

Currently, in NETDEV_CHANGELOWERSTATE/NETDEV_CHANGEUPPERSTATE event handling, tracking is not fully completed if the LAG device is not ready at the time the events occur. But we must keep track of the upper and lower states after receiving the events, because RoCE needs this info in mlx5_lag_get_roce_netdev() in order to return the corresponding port that it's running on. Returning the wrong (not most recent) port will lead to the gids table being incorrect.

For example: if, during the attachment of a slave to the bond, the other, non-attached port performs a pci_reload, then the LAG device is not ready; but that should not result in automatically dismissing the attached slave's tracker update (which is performed in mlx5_handle_changelowerstate()), since these events might not come again later. This can leave both bond ports with tx_enabled=0, which is not a valid state for a LAG bond.

Fixes: 9b412cc35f00 ("net/mlx5e: Add LAG warning if bond slave is not lag master")
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
