path: root/drivers/net/ethernet
2021-08-12  Merge tag 'mlx5-updates-2021-08-11' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux (David S. Miller)
Saeed Mahameed says:
====================
mlx5 updates 2021-08-11

This series provides misc updates to mlx5. For more information please see tag log below. Please pull and let me know if there is any problem.

mlx5-updates-2021-08-11
Misc. cleanup for mlx5.
1) Typos and use of netdev_warn()
2) smatch cleanup
3) Minor fix to inner TTC table creation
4) Dynamic capability cache allocation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-11  net: hns3: add support for triggering reset by ethtool (Yufeng Mo)
Currently, four reset types are supported for the HNS3 ethernet driver: IMP reset, global reset, function reset, and FLR. Only FLR can currently be triggered by the user. To restore the device when an exception occurs, add support for triggering a reset via ethtool. Run "ethtool --reset DEVNAME mgmt | all | dedicated" to trigger the IMP, global, or function reset manually. In addition, a VF can only trigger a function reset. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Link: https://lore.kernel.org/r/1628602128-15640-1-git-send-email-huangguangbin2@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
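As a rough illustration of the mechanism described above (my_ethtool_reset and the mapping to driver-internal reset routines are hypothetical, not the hns3 implementation), an ethtool .reset hook looks roughly like this:

    /* Sketch only: the real hns3 handler maps these flags onto its own
     * reset framework. */
    static int my_ethtool_reset(struct net_device *ndev, u32 *flags)
    {
            if (*flags == ETH_RESET_ALL) {
                    /* "all": trigger a global reset */
                    *flags = 0;
            } else if (*flags & ETH_RESET_MGMT) {
                    /* "mgmt": trigger a management (IMP) reset */
                    *flags &= ~ETH_RESET_MGMT;
            } else if (*flags & ETH_RESET_DEDICATED) {
                    /* "dedicated": reset only this function/port */
                    *flags &= ~ETH_RESET_DEDICATED;
            }
            /* bits left set in *flags tell ethtool what was not reset */
            return 0;
    }

    static const struct ethtool_ops my_ethtool_ops = {
            .reset = my_ethtool_reset,
    };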
2021-08-11  net: mscc: Fix non-GPL export of regmap APIs (Mark Brown)
The ocelot driver makes use of regmap, wrapping it with driver specific operations that are thin wrappers around the core regmap APIs. These are exported with EXPORT_SYMBOL, dropping the _GPL from the core regmap exports which is frowned upon. Add _GPL suffixes to at least the APIs that are doing register I/O. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Link: https://lore.kernel.org/r/20210810123748.47871-1-broonie@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
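The change amounts to switching the export macro on the register-I/O wrappers; a minimal sketch (the symbol shown is illustrative and not necessarily one the patch touches):

    /* Before: a thin wrapper around the GPL-only regmap core, exported
     * without the GPL restriction. */
    EXPORT_SYMBOL(ocelot_port_readl);

    /* After: keep the _GPL marker of the underlying regmap API. */
    EXPORT_SYMBOL_GPL(ocelot_port_readl);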
2021-08-11  net/mlx5e: Make use of netdev_warn() (Cai Huoqing)
Replace printk(KERN_WARNING ...) with netdev_warn(). Signed-off-by: Cai Huoqing <caihuoqing@baidu.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
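The pattern being replaced looks roughly like this (the message text is made up for illustration):

    /* Before: plain printk loses the association with the device */
    printk(KERN_WARNING "%s: failed to reopen channels\n", netdev->name);

    /* After: netdev_warn() prefixes the driver and device name itself */
    netdev_warn(netdev, "failed to reopen channels\n");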
2021-08-11  net/mlx5: Fix variable type to match 64bit (Eran Ben Elisha)
Fix the following smatch warning: wait_func_handle_exec_timeout() warn: should '1 << ent->idx' be a 64 bit type? Use 1ULL so the expression is evaluated as a 64-bit type. Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Eran Ben Elisha <eranbe@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
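The underlying issue is that a plain '1' is an int, so the shift happens in 32 bits before any widening; a minimal illustration (ent->idx stands in for the bit index from the warning):

    u64 mask;

    /* Before: for idx >= 32 the shift is done on a 32-bit int and
     * overflows before the result is ever widened to u64. */
    mask = 1 << ent->idx;

    /* After: force the shift to be performed in 64 bits. */
    mask = 1ULL << ent->idx;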
2021-08-11  net/mlx5: Initialize numa node for all core devices (Parav Pandit)
Subsequent patches make use of numa node affinity for memory allocations. Initialize it for PCI PF, VF and SF devices. Signed-off-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Allocate individual capability (Parav Pandit)
Currently mlx5_core_dev contains an array of capabilities. It holds 19 valid device capabilities, 2 reserved entries and 12 holes. Because of the 14 unused entries, mlx5_core_dev allocates 14 * 8K = 112K bytes of memory that is never used, and the mlx5_core_dev structure ends up at roughly 270 Kbytes, which is further rounded up to the next power of 2, 512 Kbytes. By skipping the non-existent entries:
(a) 112 Kbytes are saved,
(b) mlx5_core_dev reduces to 8KB with alignment,
(c) 350KB is saved in alignment.
In the future, individual capability allocation can be used to skip allocating a capability that is disabled at the device level. This patch prepares mlx5_core_dev to hold the capabilities through a pointer instead of an inline array. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Reorganize current and maximal capabilities to be per-type (Parav Pandit)
In the current code, the current and maximal capabilities are maintained in separate arrays which are both per type. In order to allow the creation of such a basic structure as a dynamically allocated array, we move curr and max fields to a unified structure so that specific capabilities can be allocated as one unit. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: SF, use recent sysfs api (Parav Pandit)
Use sysfs_emit(), which is aware of the PAGE_SIZE buffer. Signed-off-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
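A minimal sketch of the idiom (the attribute and struct names are hypothetical, not the actual SF sysfs code):

    static ssize_t sfnum_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
    {
            struct my_sf *sf = dev_get_drvdata(dev);

            /* sysfs_emit() knows it writes into a PAGE_SIZE sysfs buffer
             * and refuses to overflow it, unlike a raw sprintf(). */
            return sysfs_emit(buf, "%u\n", sf->sfnum);
    }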
2021-08-11  net/mlx5: Refcount mlx5_irq with integer (Shay Drory)
Currently, all accesses to mlx5 IRQs are done under a lock. Hence, there is no reason to keep a kref in struct mlx5_irq. Switch it to an integer. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
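The shape of such a conversion, sketched with hypothetical names (the real struct and helpers live in the mlx5 IRQ pool code):

    struct my_irq {
            char name[32];
            int refcount;           /* was: struct kref kref */
    };

    /* Callers already hold the pool lock, so plain increments and
     * decrements are safe and the kref machinery is unnecessary. */
    static void my_irq_get_locked(struct my_irq *irq)
    {
            irq->refcount++;
    }

    static void my_irq_put_locked(struct my_irq *irq)
    {
            if (--irq->refcount == 0)
                    my_irq_release(irq);
    }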
2021-08-11  net/mlx5: Change SF missing dedicated MSI-X err message to dbg (Shay Drory)
When the allocated MSI-X vectors are not enough for every SF to have a dedicated MSI-X vector, the kernel log buffer gets flooded with such entries. Hence, emit this message only at debug level. Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Align mlx5_irq structure (Shay Drory)
The mlx5_irq structure has holes due to the position of its fields. Reorder the fields so they are naturally aligned. pahole output after the alignment:

struct mlx5_irq {
        struct atomic_notifier_head nh;            /*     0    72 */
        /* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
        cpumask_var_t               mask;          /*    72     8 */
        char                        name[32];      /*    80    32 */
        struct mlx5_irq_pool *      pool;          /*   112     8 */
        struct kref                 kref;          /*   120     4 */
        u32                         index;         /*   124     4 */
        /* --- cacheline 2 boundary (128 bytes) --- */
        int                         irqn;          /*   128     4 */

        /* size: 136, cachelines: 3, members: 7 */
        /* padding: 4 */
        /* last cacheline: 8 bytes */
};

Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Parav Pandit <parav@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Delete impossible dev->state checks (Leon Romanovsky)
The new mlx5_core device structure is allocated through devlink_alloc() with kzalloc, which ensures that all fields, including ->state, start out as zero. That means the checks of that field in mlx5_init_one() are completely redundant, because that function is called only once at the beginning of the mlx5_core_dev lifetime: PCI: .probe() -> probe_one() -> mlx5_init_one(). The recovery flow can't run at that time or before it, because the relevant work is initialized later in mlx5_init_once(). Such an initialization flow ensures that dev->state can't be MLX5_DEVICE_STATE_UNINITIALIZED at all, so remove these impossible checks. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Fix inner TTC table creation (Maor Gottlieb)
Fix a typo in the cited commit, which calls mlx5_create_ttc_table instead of mlx5_create_inner_ttc_table. Fixes: f4b45940e9b9 ("net/mlx5: Embed mlx5_ttc_table") Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Fix typo in comments (Cai Huoqing)
Fix typo:
*vectores ==> vectors
*realeased ==> released
*erros ==> errors
*namepsace ==> namespace
*trafic ==> traffic
*proccessed ==> processed
*retore ==> restore
*Currenlty ==> Currently
*crated ==> created
*chane ==> change
*cannnot ==> cannot
*usuallly ==> usually
*failes ==> fails
*importent ==> important
*reenabled ==> re-enabled
*alocation ==> allocation
*recived ==> received
*tanslation ==> translation
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-11  net/mlx5: Support enable_vnet devlink dev param (Parav Pandit)
Enable the user to disable the VDPA net auxiliary device, so that it can be turned off when it is not required. For example:
$ devlink dev param set pci/0000:06:00.0 \
  name enable_vnet value false cmode driverinit
$ devlink dev reload pci/0000:06:00.0
At this point the devlink instance does not create the auxiliary device mlx5_core.vnet.2 for the VDPA net functionality. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-11  net/mlx5: Support enable_rdma devlink dev param (Parav Pandit)
Enable the user to disable the RDMA auxiliary device, so that it can be turned off when it is not required. For example:
$ devlink dev param set pci/0000:06:00.0 \
  name enable_rdma value false cmode driverinit
$ devlink dev reload pci/0000:06:00.0
At this point the devlink instance does not create the auxiliary device mlx5_core.rdma.2 for the RDMA functionality. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-11  net/mlx5: Support enable_eth devlink dev param (Parav Pandit)
Enable the user to disable the Ethernet auxiliary device, so that it can be turned off when it is not required. For example:
$ devlink dev param set pci/0000:06:00.0 \
  name enable_eth value false cmode driverinit
$ devlink dev reload pci/0000:06:00.0
At this point the devlink instance does not create the mlx5_core.eth.2 auxiliary device for the Ethernet functionality. Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-11  net/mlx5: Fix unpublish devlink parameters (Parav Pandit)
The cleanup routine misses unpublishing the devlink parameters. Add the missing call. Fixes: e890acd5ff18 ("net/mlx5: Add devlink flow_steering_mode parameter") Signed-off-by: Parav Pandit <parav@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-10  Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux (Jakub Kicinski)
Saeed Mahameed says:
====================
pull-request: mlx5-next 2020-08-9

This pulls mlx5-next branch which includes patches already reviewed on net-next and rdma mailing lists.

1) mlx5 single E-Switch FDB for lag
2) IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq
3) Add DCS caps & fields support

[1] https://patchwork.kernel.org/project/netdevbpf/cover/20210803231959.26513-1-saeed@kernel.org/
[2] https://patchwork.kernel.org/project/netdevbpf/patch/0e3364dab7e0e4eea5423878b01aa42470be8d36.1626609184.git.leonro@nvidia.com/
[3] https://patchwork.kernel.org/project/netdevbpf/patch/55e1d69bef1fbfa5cf195c0bfcbe35c8019de35e.1624258894.git.leonro@nvidia.com/

* 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
  net/mlx5: Lag, Create shared FDB when in switchdev mode
  net/mlx5: E-Switch, add logic to enable shared FDB
  net/mlx5: Lag, move lag destruction to a workqueue
  net/mlx5: Lag, properly lock eswitch if needed
  net/mlx5: Add send to vport rules on paired device
  net/mlx5: E-Switch, Add event callback for representors
  net/mlx5e: Use shared mappings for restoring from metadata
  net/mlx5e: Add an option to create a shared mapping
  net/mlx5: E-Switch, set flow source for send to uplink rule
  RDMA/mlx5: Add shared FDB support
  {net, RDMA}/mlx5: Extend send to vport rules
  RDMA/mlx5: Fill port info based on the relevant eswitch
  net/mlx5: Lag, add initial logic for shared FDB
  net/mlx5: Return mdev from eswitch
  IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq
  net/mlx5: Add DCS caps & fields support
====================
Link: https://lore.kernel.org/r/20210809202522.316930-1-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-09  net: hns3: support skb's frag page recycling based on page pool (Yunsheng Lin)
This patch adds skb frag page recycling support based on the frag page support in page pool. The performance improves by about 10~20% for a single-thread iperf TCP flow with IOMMU disabled, when the iperf server and irq/NAPI run on different CPUs. The performance improves by about 135% (14 Gbit to 33 Gbit) for a single-thread iperf TCP flow when the IOMMU is in strict mode and the iperf server shares the same CPU with irq/NAPI. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-09  page_pool: keep pp info as long as page pool owns the page (Yunsheng Lin)
Currently, page->pp is cleared and set every time the page is recycled, which is unnecessary. So only set page->pp when the page is added to the page pool, and only clear it when the page is released from the page pool. This is also a preparation for supporting frag page allocation in page pool. Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-09  net: fec: fix build error for ARCH m68k (Joakim Zhang)
reproduce:
  wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
  chmod +x ~/bin/make.cross
  make.cross ARCH=m68k m5272c3_defconfig
  make.cross ARCH=m68k

drivers/net/ethernet/freescale/fec_main.c: In function 'fec_enet_eee_mode_set':
>> drivers/net/ethernet/freescale/fec_main.c:2758:33: error: 'FEC_LPI_SLEEP' undeclared (first use in this function); did you mean 'FEC_ECR_SLEEP'?
    2758 |  writel(sleep_cycle, fep->hwp + FEC_LPI_SLEEP);
         |                                 ^~~~~~~~~~~~~
   arch/m68k/include/asm/io_no.h:25:66: note: in definition of macro '__raw_writel'
      25 | #define __raw_writel(b, addr) (void)((*(__force volatile u32 *) (addr)) = (b))
         |                                                                  ^~~~
   drivers/net/ethernet/freescale/fec_main.c:2758:2: note: in expansion of macro 'writel'
    2758 |  writel(sleep_cycle, fep->hwp + FEC_LPI_SLEEP);
         |  ^~~~~~
   drivers/net/ethernet/freescale/fec_main.c:2758:33: note: each undeclared identifier is reported only once for each function it appears in
    2758 |  writel(sleep_cycle, fep->hwp + FEC_LPI_SLEEP);
         |                                 ^~~~~~~~~~~~~
   arch/m68k/include/asm/io_no.h:25:66: note: in definition of macro '__raw_writel'
      25 | #define __raw_writel(b, addr) (void)((*(__force volatile u32 *) (addr)) = (b))
         |                                                                  ^~~~
   drivers/net/ethernet/freescale/fec_main.c:2758:2: note: in expansion of macro 'writel'
    2758 |  writel(sleep_cycle, fep->hwp + FEC_LPI_SLEEP);
         |  ^~~~~~
>> drivers/net/ethernet/freescale/fec_main.c:2759:32: error: 'FEC_LPI_WAKE' undeclared (first use in this function)
    2759 |  writel(wake_cycle, fep->hwp + FEC_LPI_WAKE);
         |                                ^~~~~~~~~~~~
   arch/m68k/include/asm/io_no.h:25:66: note: in definition of macro '__raw_writel'
      25 | #define __raw_writel(b, addr) (void)((*(__force volatile u32 *) (addr)) = (b))
         |                                                                  ^~~~
   drivers/net/ethernet/freescale/fec_main.c:2759:2: note: in expansion of macro 'writel'
    2759 |  writel(wake_cycle, fep->hwp + FEC_LPI_WAKE);
         |  ^~~~~~

This patch adds register definition for M5272 platform to pass build. Fixes: b82f8c3f1409 ("net: fec: add eee mode tx lpi support") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-09  devlink: Set device as early as possible (Leon Romanovsky)
All kernel devlink implementations call devlink_alloc() during the initialization routine of a specific device, which is later used as the parent device in devlink_register(). Such late device assignment creates a situation that requires us to call device_register() before setting other parameters, but that call opens devlink to the world and makes it accessible to netlink users. Any attempt to move devlink_register() to be the last call generates the following error due to access to the devlink->dev pointer:

[    8.758862] devlink_nl_param_fill+0x2e8/0xe50
[    8.760305] devlink_param_notify+0x6d/0x180
[    8.760435] __devlink_params_register+0x2f1/0x670
[    8.760558] devlink_params_register+0x1e/0x20

The simple API change of setting the devlink device in devlink_alloc() instead of devlink_register() fixes all of the above and ensures that everything is already set before devlink_register() is called. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
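To illustrate the direction of the API change with a hypothetical driver (my_devlink_ops and my_priv are placeholders; treat the exact signatures as a sketch of the new scheme rather than a verbatim quote of the patch):

    /* Before: the parent device only becomes known at register time,
     * so registration has to happen early in probe. */
    devlink = devlink_alloc(&my_devlink_ops, sizeof(struct my_priv));
    devlink_register(devlink, &pdev->dev);

    /* After: the device is attached at allocation, so registration can
     * safely be the last step once everything else is configured. */
    devlink = devlink_alloc(&my_devlink_ops, sizeof(struct my_priv), &pdev->dev);
    /* ... set params, ports, etc. ... */
    devlink_register(devlink);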
2021-08-08  devlink: Simplify devlink port API calls (Leon Romanovsky)
A devlink port already has a pointer to its devlink instance, and all API calls that forward devlink ports to the drivers perform the same "devlink_port->devlink" assignment before the actual call. This patch removes the useless parameter and allows us, in the future, to create a specific devlink_port_ops to manage user space access with reliable ops assignment. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-07  net: ethernet: stmmac: Do not use unreachable() in ipq806x_gmac_probe() (Nathan Chancellor)
When compiling with clang in certain configurations, an objtool warning appears: drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.o: warning: objtool: ipq806x_gmac_probe() falls through to next function phy_modes() This happens because the unreachable annotation in the third switch statement is not eliminated. The compiler should know that the first default case would prevent the second and third from being reached as the comment notes but sanitizer options can make it harder for the compiler to reason this out. Help the compiler out by eliminating the unreachable() annotation and unifying the default case error handling so that there is no objtool warning, the meaning of the code stays the same, and there is less duplication. Reported-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-07  tulip: Remove deadcode on startup true condition (Colin Ian King)
The true check on the variable startable in the ternary operator is always false because the previous if statement handles the true condition for startable. Hence the ternary check is dead code and can be removed. Addresses-Coverity: ("Logically dead code") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-06  net: ethernet: ti: davinci_cpdma: revert "drop frame padding" (Grygorii Strashko)
This reverts commit 9ffc513f95ee ("net: ethernet: ti: davinci_cpdma: drop frame padding"), which depends on a not-yet-merged patch [1] and so breaks the cpsw_new driver. [1] https://patchwork.kernel.org/project/netdevbpf/patch/20210805145511.12016-1-grygorii.strashko@ti.com/ Fixes: 9ffc513f95ee ("net: ethernet: ti: davinci_cpdma: drop frame padding") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20210806142809.15069-1-grygorii.strashko@ti.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-06  net: ethernet: ti: am65-cpsw: use napi_complete_done() in TX completion (Grygorii Strashko)
This patch enables support for the hard-IRQ deferral feature from Eric Dumazet [1] in the TI K3 CPSW driver by using napi_complete_done() in the TX completion path. Depending on gro_flush_timeout and napi_defer_hard_irqs, it gives up to 30% CPU utilization reduction:

gro_flush_timeout=50000 napi_defer_hard_irqs=2
netperf -l 10 -H 192.168.1.1 -t UDP_STREAM -c -C -- -m 1470

MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.1 () port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

before:
212992    1470   10.00       809632      0       952.0   42.98    14.792
212992           10.00       809630              952.0   50.66     8.719

after:
212992    1470   10.00       813686      0       956.8   32.14    11.009
212992           10.00       813686              956.8   50.05     8.570

[1] https://lore.kernel.org/netdev/20200422161329.56026-1-edumazet@google.com/
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
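A sketch of the pattern in a TX NAPI poll handler (my_tx_poll and the completion accounting are hypothetical; the real driver differs):

    static int my_tx_poll(struct napi_struct *napi, int budget)
    {
            struct my_tx_chn *chn = container_of(napi, struct my_tx_chn, napi);
            int done = my_tx_complete(chn, budget);

            /* napi_complete_done() returns false when the core decides to
             * keep polling (gro_flush_timeout/napi_defer_hard_irqs), in
             * which case the hard IRQ must stay masked. */
            if (done < budget && napi_complete_done(napi, done))
                    enable_irq(chn->irq);

            return done;
    }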
2021-08-06  net: ti: am65-cpsw-nuss: fix RX IRQ state after .ndo_stop() (Vignesh Raghavendra)
On the TI K3 am64x platform an issue with the RX IRQ is observed: it becomes disabled forever after .ndo_stop(). The K3 CPSW driver manipulates the RX IRQ with the standard Linux enable_irq()/disable_irq_nosync() API, as there is no IRQ enable/disable option in the CPSW HW itself. As a result, the following sequence happens during .ndo_stop():

  phy_stop()
  teardown TX/RX channels
  wait for TX tdown complete
  napi_disable(TX)
  clean up TX channels
(a)
  napi_disable(RX)

At point (a) it's not possible to predict whether the RX IRQ was triggered or not. If the RX IRQ was triggered, it is also not possible to say definitively whether the RX NAPI ran, or was only scheduled and immediately canceled by napi_disable(RX). The latter case causes the RX IRQ to be permanently disabled. Another observed issue is that the RX IRQ enable counter becomes unbalanced if (gro_flush_timeout != 0) while (napi_defer_hard_irqs == 0):

Unbalanced enable for IRQ 44
WARNING: CPU: 0 PID: 10 at ../kernel/irq/manage.c:776 __enable_irq+0x38/0x80
 __enable_irq+0x38/0x80
 enable_irq+0x54/0xb0
 am65_cpsw_nuss_rx_poll+0x2f4/0x368
 __napi_poll+0x34/0x1b8
 net_rx_action+0xe4/0x220
 _stext+0x11c/0x284
 run_ksoftirqd+0x4c/0x60

To avoid the above issues, introduce a flag indicating whether RX was actually disabled before enabling it in am65_cpsw_nuss_rx_poll(), and restore the RX IRQ state in .ndo_open(). Fixes: 4f7cce272403 ("net: ethernet: ti: am65-cpsw: add support for am64x cpsw3g") Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
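A rough sketch of the flag-based approach (struct and function names are illustrative, not the actual driver code):

    struct my_common {
            struct napi_struct napi_rx;
            int rx_irq;
            bool rx_irq_disabled;   /* true while the hard IRQ is masked */
    };

    static irqreturn_t my_rx_irq(int irq, void *dev_id)
    {
            struct my_common *common = dev_id;

            common->rx_irq_disabled = true;
            disable_irq_nosync(irq);
            napi_schedule(&common->napi_rx);
            return IRQ_HANDLED;
    }

    static int my_rx_poll(struct napi_struct *napi, int budget)
    {
            struct my_common *common = container_of(napi, struct my_common, napi_rx);
            int done = my_rx_process(common, budget);

            if (done < budget && napi_complete_done(napi, done) &&
                common->rx_irq_disabled) {
                    /* only re-enable when the IRQ was really masked, so
                     * the enable/disable counts stay balanced */
                    common->rx_irq_disabled = false;
                    enable_irq(common->rx_irq);
            }
            return done;
    }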
2021-08-06  net: ethernet: ti: davinci_cpdma: drop frame padding (Grygorii Strashko)
Since all users of davinci_cpdma have switched to skb_put_padto(), the frame padding can be removed from it. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-06  net: ethernet: ti: davinci_emac: switch to use skb_put_padto() (Grygorii Strashko)
Use skb_put_padto() instead of skb_padto() so that skb->len also gets updated, as preparation for removing frame padding from cpdma. It also makes the xmit path clearer and more linear. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-08-06  net: ethernet: ti: cpsw: switch to use skb_put_padto() (Grygorii Strashko)
Use skb_put_padto() instead of skb_padto() so that skb->len also gets updated, as preparation for removing frame padding from cpdma. It also makes the xmit path clearer and more linear. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
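The difference in an xmit path, sketched with ETH_ZLEN as an illustrative minimum frame length (the drivers use their own minimum packet size constants):

    /* Before: skb_padto() zero-pads the buffer but leaves skb->len
     * untouched, so the padding had to be accounted for later in cpdma. */
    if (skb_padto(skb, ETH_ZLEN))
            return NETDEV_TX_OK;

    /* After: skb_put_padto() also advances skb->len, so the frame
     * length seen by the driver already includes the padding. */
    if (skb_put_padto(skb, ETH_ZLEN)) {
            ndev->stats.tx_dropped++;
            return NETDEV_TX_OK;
    }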
2021-08-05  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (Jakub Kicinski)
Build failure in drivers/net/wwan/mhi_wwan_mbim.c: add missing parameter (0, assuming we don't want buffer pre-alloc).

Conflict in drivers/net/dsa/sja1105/sja1105_main.c between:
  589918df9322 ("net: dsa: sja1105: be stateless with FDB entries on SJA1105P/Q/R/S/SJA1110 too")
  0fac6aa098ed ("net: dsa: sja1105: delete the best_effort_vlan_filtering mode")

Follow the instructions from the commit message of the former commit - removed the if conditions. When looking at commit 589918df9322 ("net: dsa: sja1105: be stateless with FDB entries on SJA1105P/Q/R/S/SJA1110 too") note that the mask_iotag fields get removed by the following patch.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-05  net/mlx5: Lag, Create shared FDB when in switchdev mode (Mark Bloch)
If both eswitches are in switchdev mode and the uplink representors are enslaved to the same bond device, create a shared FDB configuration. When moving to shared FDB mode, not only does the hardware need to be configured, the RDMA driver also needs to reconfigure itself. When such a change is made, unload the RDMA devices, configure the hardware and load the RDMA representors. When destroying the lag (which can happen if a PCI function is unbound, the driver is unloaded or a netdev is simply removed from the bond), make sure to restore the system to the previous state only if possible. For example, if a PCI function is unbound there is no need to load the representors, as the device is going away. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: E-Switch, add logic to enable shared FDB (Mark Bloch)
Shared FDB allows directing traffic from all the vports in the HCA to a single eswitch. In order to do that, three things are needed:

1) Point the ingress ACL of the slave uplink to that of the master. With this, wire traffic from both uplinks will reach the same eswitch with the same metadata, where a single steering rule can catch traffic from both ports.

2) Set the FDB root flow table of the slave's eswitch to that of the master. As this flow table can change dynamically, make sure to sync it on any set-root-flow-table FDB command. This makes sure traffic from SFs, VFs, ECPFs and PFs reaches the master eswitch.

3) Split wire traffic at the eswitch manager egress ACL so that it's directed to the native eswitch manager. We only treat wire traffic from both ports the same at the eswitch level. If such traffic wasn't handled in the eswitch, it needs to reach the right representor to be processed by software. For example, LACP packets should *always* reach the right uplink representor for correct operation.

Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: Lag, move lag destruction to a workqueue (Mark Bloch)
If a netdev is removed from the lag, the lag should be destroyed. With downstream patches this might trigger a reconfiguration of representors on a different eswitch, and we don't have the proper locking to do so from this path. Move the destruction to be done by the workqueue. As the destruction won't affect the netdev side, it is okay to do so. The RDMA side will be reconfigured, and it is already coded to handle such reconfiguration. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: Lag, properly lock eswitch if needed (Mark Bloch)
Currently, when doing hardware lag we check the eswitch mode, but as this isn't done under a lock the check isn't valid. As the code needs to sync between two different devices, extra care is needed:

- When going to change eswitch mode, if hardware lag is active, destroy it.
- While changing eswitch modes, block any hardware bond creation.
- Delay handling bonding events until there are no mode changes in progress.
- When attaching a new mdev to lag, block until there is no mode change in progress. In order for the mode change to finish, the interface lock will have to be taken. Release the lock and sleep for 100ms to allow forward progress.

As this is a very rare condition (it can happen if the user unbinds and binds a PCI function while also changing the eswitch mode of the other PCI function), it has no real-world impact. As taking multiple eswitch mode locks is now required, lockdep will complain about a possible deadlock. Register a key per eswitch to make lockdep happy. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: Add send to vport rules on paired device (Mark Bloch)
When two mlx5 devices are paired in switchdev mode, always offload the send-to-vport rule to the peer E-Switch. This allows abstracting away the logic of when this is really necessary (single FDB) and combining the logic of both cases into one. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: E-Switch, Add event callback for representors (Mark Bloch)
This callback allows notifying representors about relevant events when in OFFLOADS mode. In downstream patches, it will be used to notify about PAIR/UNPAIR devcom events. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5e: Use shared mappings for restoring from metadata (Roi Dayan)
FTEs are added with mapped metadata which is saved per eswitch. When uplink reps are bonded and we are in a single FDB mode, we could fail to find metadata which was stored on one eswitch mapping but not the other or with a different id. To resolve this issue use shared mapping between eswitch ports. We do not have any conflict using a single mapping, for a type, between the ports. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5e: Add an option to create a shared mapping (Roi Dayan)
The shared mapping is identified by an id and type. Signed-off-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: E-Switch, set flow source for send to uplink rule (Ariel Levkovich)
Set the flow source param to local vport for the uplink rep send-to-vport rule. This will comply with the recent changes in SW steering that use the flow source as an indication for the rule type - rx or tx. Since the uplink send-to-vport rule is forwarding traffic to the wire it has to indicate that it is an sx rule and can't use the any port value in the flow source. Signed-off-by: Ariel Levkovich <lariel@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  {net, RDMA}/mlx5: Extend send to vport rules (Mark Bloch)
In shared FDB there is only one eswitch which is active, and it receives traffic from all representors and all vports in the HCA. While the Ethernet representor will always reside on its native PF, the IB representor will not. Extend send-to-vport rule creation to support such flows. We need to account for the source vport that sends the traffic (on which the representor resides) and the target eswitch that the traffic should reach. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net/mlx5: Lag, add initial logic for shared FDB (Mark Bloch)
As shared FDB requires changes in two subsystems, first expose the needed core functions so the RDMA side can be changed:

mlx5_lag_is_master(): return true if a given mlx5 device is the lag master.
mlx5_lag_is_shared_fdb(): return true if the lag mode is shared FDB.
mlx5_lag_get_peer_mdev(): return the peer mdev in lag.

The mentioned functions will be used by downstream patches in order to add support for shared FDB on the RDMA side. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
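The exported helpers take the usual mlx5 core device handle; roughly (treat the exact prototypes and the consumer snippet as a sketch inferred from the description above, not a quote of the patch):

    bool mlx5_lag_is_master(struct mlx5_core_dev *dev);
    bool mlx5_lag_is_shared_fdb(struct mlx5_core_dev *dev);
    struct mlx5_core_dev *mlx5_lag_get_peer_mdev(struct mlx5_core_dev *dev);

    /* Example of how a consumer (e.g. the RDMA side) might use them: */
    if (mlx5_lag_is_shared_fdb(mdev) && !mlx5_lag_is_master(mdev)) {
            /* steering lives on the lag master's eswitch */
            struct mlx5_core_dev *peer = mlx5_lag_get_peer_mdev(mdev);
            /* ... create rules against 'peer' instead of 'mdev' ... */
    }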
2021-08-05  net/mlx5: Return mdev from eswitch (Mark Bloch)
Export a function so users can retrieve the Mellanox device that manages the eswitch from the eswitch device. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Mark Zhang <markzhang@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-08-05  net: vxge: fix use-after-free in vxge_device_unregister (Pavel Skripkin)
Smatch says:

drivers/net/ethernet/neterion/vxge/vxge-main.c:3518 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);
drivers/net/ethernet/neterion/vxge/vxge-main.c:3518 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);
drivers/net/ethernet/neterion/vxge/vxge-main.c:3520 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);
drivers/net/ethernet/neterion/vxge/vxge-main.c:3520 vxge_device_unregister() error: Using vdev after free_{netdev,candev}(dev);

Since the vdev pointer is the netdev's private data, accessing it after the free_netdev() call can cause a use-after-free bug. Fix it by moving the free_netdev() call to the end of the function. Fixes: 6cca200362b4 ("vxge: cleanup probe error paths") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-08-05  net: fec: fix use-after-free in fec_drv_remove (Pavel Skripkin)
Smatch says:

drivers/net/ethernet/freescale/fec_main.c:3994 fec_drv_remove() error: Using fep after free_{netdev,candev}(ndev);
drivers/net/ethernet/freescale/fec_main.c:3995 fec_drv_remove() error: Using fep after free_{netdev,candev}(ndev);

Since the fep pointer is the netdev's private data, accessing it after the free_netdev() call can cause a use-after-free bug. Fix it by moving the free_netdev() call to the end of the function. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: a31eda65ba21 ("net: fec: fix clock count mis-match") Signed-off-by: Pavel Skripkin <paskripkin@gmail.com> Reviewed-by: Joakim Zhang <qiangqing.zhang@nxp.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
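The general shape of this class of fix, sketched with hypothetical names (free_netdev() releases the netdev_priv() area together with the netdev, so it has to be the last user of that memory):

    static int my_drv_remove(struct platform_device *pdev)
    {
            struct net_device *ndev = platform_get_drvdata(pdev);
            struct my_priv *priv = netdev_priv(ndev);

            unregister_netdev(ndev);

            /* everything that dereferences priv must happen while the
             * netdev (and its private area) is still allocated */
            clk_disable_unprepare(priv->clk);

            /* free_netdev() frees priv together with ndev, so keep it last */
            free_netdev(ndev);
            return 0;
    }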
2021-08-05  net: ethernet: ti: am65-cpsw: fix crash in am65_cpsw_port_offload_fwd_mark_update() (Grygorii Strashko)
am65_cpsw_port_offload_fwd_mark_update() causes a NULL pointer dereference crash when there is at least one disabled port and any other port is added to a bridge for the first time:

Unable to handle kernel NULL pointer dereference at virtual address 0000000000000858
pc : am65_cpsw_port_offload_fwd_mark_update+0x54/0x68
lr : am65_cpsw_netdevice_event+0x8c/0xf0
Call trace:
 am65_cpsw_port_offload_fwd_mark_update+0x54/0x68
 notifier_call_chain+0x54/0x98
 raw_notifier_call_chain+0x14/0x20
 call_netdevice_notifiers_info+0x34/0x78
 __netdev_upper_dev_link+0x1c8/0x290
 netdev_master_upper_dev_link+0x1c/0x28
 br_add_if+0x3f0/0x6d0 [bridge]

Fix it by adding a proper check for port->ndev != NULL. Fixes: 2934db9bcb30 ("net: ti: am65-cpsw-nuss: Add netdevice notifiers") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
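A minimal sketch of the kind of guard being added (the loop, struct and field names are illustrative, not the exact driver code):

    for (i = 0; i < common->port_num; i++) {
            struct my_port *port = &common->ports[i];
            struct my_port_priv *priv;

            /* disabled ports never get a netdev registered, so skip
             * them instead of dereferencing a NULL port->ndev */
            if (!port->ndev)
                    continue;

            priv = netdev_priv(port->ndev);
            priv->offload_fwd_mark = set_val;
    }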
2021-08-05  bnx2x: fix an error code in bnx2x_nic_load() (Dan Carpenter)
Set the error code if bnx2x_alloc_fw_stats_mem() fails. The current code returns success. Fixes: ad5afc89365e ("bnx2x: Separate VF and PF logic") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
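The fix follows the standard error-path pattern; a generic sketch (function and label names are made up, not the actual bnx2x code):

    /* Before (sketch): rc still holds 0 from an earlier success, so the
     * function ends up reporting success even though allocation failed. */
    if (my_alloc_fw_stats_mem(bp))
            goto load_error;

    /* After: set a real error code before jumping to the error path. */
    if (my_alloc_fw_stats_mem(bp)) {
            rc = -ENOMEM;
            goto load_error;
    }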