path: root/drivers/net
2023-10-20ixgbe: fix end of loop test in ixgbe_set_vf_macvlan()Dan Carpenter
The list iterator in a list_for_each_entry() loop can never be NULL. If the loop exits without hitting a break, the iterator points to an offset off the list head, and dereferencing it is an out-of-bounds access. Before we transitioned to using list_for_each_entry() loops, it was possible for "entry" to be NULL, and the comments mention this. I have updated the comments to match the new code. Fixes: c1fec890458a ("ethernet/intel: Use list_for_each_entry() helper") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20igb: Fix an end of loop testDan Carpenter
When we exit a list_for_each_entry() loop without hitting a break statement, the list iterator isn't NULL; it just points to an offset off the list_head. In that situation, it wouldn't be too surprising for entry->free to be true, and we would end up corrupting memory. The way to test for this is to just set a flag. Fixes: c1fec890458a ("ethernet/intel: Use list_for_each_entry() helper") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
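A minimal sketch of the pattern described in the two fixes above; the struct, list and "free" field are invented for illustration and are not the actual igb/ixgbe code:

#include <linux/list.h>
#include <linux/types.h>

struct vf_macvlan {
	struct list_head list;
	bool free;
};

static struct vf_macvlan *find_free_macvlan(struct list_head *head)
{
	struct vf_macvlan *entry;
	bool found = false;

	/*
	 * If the loop finishes without a break, "entry" is never NULL; it
	 * points at an offset off the list head itself, so dereferencing
	 * entry->free there would be an out-of-bounds access. Record the
	 * result in a flag instead.
	 */
	list_for_each_entry(entry, head, list) {
		if (entry->free) {
			found = true;
			break;
		}
	}

	return found ? entry : NULL;
}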
2023-10-20ice: cleanup ice_find_netlist_nodeJacob Keller
The ice_find_netlist_node function was introduced in commit 8a3a565ff210 ("ice: add admin commands to access cgu configuration"). Variations of this function were reviewed concurrently on both intel-wired-lan[1][2], and netdev [3][4] [1]: https://lore.kernel.org/intel-wired-lan/20230913204943.1051233-7-vadim.fedorenko@linux.dev/ [2]: https://lore.kernel.org/intel-wired-lan/20230817000058.2433236-5-jacob.e.keller@intel.com/ [3]: https://lore.kernel.org/netdev/20230918212814.435688-1-anthony.l.nguyen@intel.com/ [4]: https://lore.kernel.org/netdev/20230913204943.1051233-7-vadim.fedorenko@linux.dev/ The variant I posted had a few changes due to review feedback which were never incorporated into the DPLL series: * Replace the references to ancient and long removed ICE_SUCCESS and ICE_ERR_DOES_NOT_EXIST status codes in the function comment. * Return -ENOENT instead of -ENOTBLK, as a more common way to indicate that an entry doesn't exist. * Avoid the use of memset() and use simple static initialization for the cmd variable. * Use FIELD_PREP to assign the node_type_ctx. * Remove an unnecessary local variable to keep track of rec_node_handle, just pass the node_handle pointer directly into ice_aq_get_netlist_node. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20ice: make ice_get_pf_c827_idx staticJacob Keller
The ice_get_pf_c827_idx function is only called inside of ice_ptp_hw.c, so there is no reason to export it. Mark it static and remove the declaration from ice_ptp_hw.h Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20ice: manage VFs MSI-X using resource trackingMichal Swiatkowski
Track MSI-X for VFs using a bitmap, setting and clearing bits during allocation and freeing. Try to linearize IRQ usage for VFs by freeing their interrupts and allocating them once again. Do it only for VFs that aren't currently running. Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20ice: set MSI-X vector count on VFMichal Swiatkowski
Implement the ops needed to set the MSI-X vector count on a VF. sriov_get_vf_total_msix() should return the total number of MSI-X vectors that can be used by the VFs; return the value set by the devlink resources API (pf->req_msix.vf). sriov_set_msix_vec_count() sets the number of MSI-X vectors on a particular VF: disable the VF register mapping, rebuild the VSI with the new MSI-X and queue values, and enable the new VF register mapping. For best performance, set the number of queues equal to the number of MSI-X vectors. Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
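A sketch of how the two SR-IOV callbacks plug into the PCI driver; the helper names, struct ice_pf layout and pf->req_msix.vf usage follow the commit text but are assumptions, not the exact driver code:

static u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
{
	struct ice_pf *pf = pci_get_drvdata(pdev);

	/* total MSI-X available to VFs, as set via the devlink resources API */
	return pf->req_msix.vf;
}

static int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
{
	/* disable the VF register mapping, rebuild the VSI with the new
	 * MSI-X and queue counts, then re-enable the mapping (elided here)
	 */
	return 0;
}

static struct pci_driver ice_driver = {
	.name				= KBUILD_MODNAME,
	.sriov_get_vf_total_msix	= ice_sriov_get_vf_total_msix,
	.sriov_set_msix_vec_count	= ice_sriov_set_msix_vec_count,
};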
2023-10-20ice: add bitmap to track VF MSI-X usageMichal Swiatkowski
Create a bitmap to track MSI-X usage for VFs. The bitmap is sized to the total amount of MSI-X on the device, because at init time the amount of MSI-X used by VFs isn't known. The bitmap is used in a follow-up patch to provide a contiguous block of MSI-X indexes for each created VF. Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
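A minimal sketch of such bitmap-based tracking, with invented structure and field names (the real driver uses its own):

#include <linux/bitmap.h>
#include <linux/errno.h>

struct sketch_pf {
	unsigned long *msix_bm;		/* one bit per device MSI-X vector */
	unsigned int total_msix;
};

/* reserve a contiguous block of MSI-X indexes for one VF */
static int vf_msix_reserve(struct sketch_pf *pf, unsigned int nr_vectors)
{
	unsigned long first;

	first = bitmap_find_next_zero_area(pf->msix_bm, pf->total_msix, 0,
					   nr_vectors, 0);
	if (first >= pf->total_msix)
		return -ENOSPC;

	bitmap_set(pf->msix_bm, first, nr_vectors);
	return first;
}

/* release the block again when the VF is freed */
static void vf_msix_release(struct sketch_pf *pf, unsigned int first,
			    unsigned int nr_vectors)
{
	bitmap_clear(pf->msix_bm, first, nr_vectors);
}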
2023-10-20ice: implement num_msix field per VFMichal Swiatkowski
Store the amount of MSI-X per VF instead of storing it in the pf struct. It is used to calculate the number of q_vectors (and queues) for the VF VSI. This is necessary because, with follow-up changes, the number of MSI-X can differ between VFs. Use it instead of the pf->vf_msix value in all cases. Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20ice: store VF's pci_dev ptr in ice_vfPrzemek Kitszel
Extend struct ice_vf by vfdev. Calculation of vfdev falls more nicely into ice_create_vf_entries(). Caching of vfdev enables simplification of ice_restore_all_vfs_msi_state(). Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Co-developed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20ice: add drop rule matching on not active lportMichal Swiatkowski
An inactive LAG port should not receive any packets, as that can cause invalid FDBs to be added (bridge offload). Add a drop rule matching on the inactive lport in a LAG. Reviewed-by: Simon Horman <horms@kernel.org> Co-developed-by: Marcin Szycik <marcin.szycik@intel.com> Signed-off-by: Marcin Szycik <marcin.szycik@intel.com> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20ice: remove unused ice_flow_entry fieldsPrzemek Kitszel
Remove ::entry and ::entry_sz fields of &ice_flow_entry, as they were never set. Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20i40e: Fix I40E_FLAG_VF_VLAN_PRUNING valueIvan Vecera
Commit c87c938f62d8f1 ("i40e: Add VF VLAN pruning") added a new PF flag I40E_FLAG_VF_VLAN_PRUNING, but its value collides with the existing I40E_FLAG_TOTAL_PORT_SHUTDOWN_ENABLED flag. Move the affected flag to the end of the flags and fix its value. Reproducer: [root@cnb-03 ~]# ethtool --set-priv-flags enp2s0f0np0 link-down-on-close on [root@cnb-03 ~]# ethtool --set-priv-flags enp2s0f0np0 vf-vlan-pruning on [root@cnb-03 ~]# ethtool --set-priv-flags enp2s0f0np0 link-down-on-close off [ 6323.142585] i40e 0000:02:00.0: Setting link-down-on-close not supported on this port (because total-port-shutdown is enabled) netlink error: Operation not supported [root@cnb-03 ~]# ethtool --set-priv-flags enp2s0f0np0 vf-vlan-pruning off [root@cnb-03 ~]# ethtool --set-priv-flags enp2s0f0np0 link-down-on-close off The link-down-on-close flag cannot be modified after setting vf-vlan-pruning, because vf-vlan-pruning shares the same bit with the total-port-shutdown flag, which prevents any modification of the link-down-on-close flag. Fixes: c87c938f62d8 ("i40e: Add VF VLAN pruning") Cc: Mateusz Palczewski <mateusz.palczewski@intel.com> Cc: Simon Horman <horms@kernel.org> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: David S. Miller <davem@davemloft.net>
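An illustration of the bug class, with made-up flag names and bit positions (not the real i40e defines): two private flags that accidentally share a bit, so toggling one silently toggles the other:

#include <linux/bits.h>

/* before: both flags map to the same bit, so setting vf-vlan-pruning
 * also sets total-port-shutdown and blocks link-down-on-close changes
 */
#define DRV_FLAG_TOTAL_PORT_SHUTDOWN_ENA	BIT_ULL(26)
#define DRV_FLAG_VF_VLAN_PRUNING		BIT_ULL(26)	/* collision */

/* after: the new flag is moved to the next free bit with a unique value */
#define DRV_FLAG_VF_VLAN_PRUNING_FIXED		BIT_ULL(27)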
2023-10-20net: phy: micrel: Fix forced link mode for KSZ886X switchesOleksij Rempel
Address a link speed detection issue in the KSZ886X PHY driver when in forced link mode. Previously, link partners like the "ASIX AX88772B" paired with a KSZ8873 could fall back to 10Mbit instead of the configured 100Mbit. The issue arises because the KSZ886X PHY continues sending Fast Link Pulses (FLPs) even with autonegotiation off, misleading link partners in autoneg mode and leading to incorrect link speed detection. Now, when autonegotiation is disabled, the driver forces the link state using the KSZ886X_CTRL_FORCE_LINK bit. This action, beyond just disabling autonegotiation, makes the PHY state more reliably detected by link partners using parallel detection, thus fixing the link speed misconfiguration. With autonegotiation enabled, the link state is not forced, allowing proper participation in the autonegotiation process. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Divya Koppera <divya.koppera@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
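A hedged sketch of the autoneg-off handling described above: the register number (31) matches the MIIM PHY Control register mentioned in the next entry, while the exact bit position of KSZ886X_CTRL_FORCE_LINK is assumed for illustration only:

#include <linux/phy.h>

#define KSZ886X_REG_PHY_CTRL		31
#define KSZ886X_CTRL_FORCE_LINK		BIT(3)	/* bit position assumed */

static int ksz886x_config_aneg_sketch(struct phy_device *phydev)
{
	int ret;

	/* With autoneg disabled, force the link state so that parallel-
	 * detection link partners are not misled by the FLPs this PHY
	 * keeps sending; with autoneg enabled, leave the link unforced.
	 */
	ret = phy_modify(phydev, KSZ886X_REG_PHY_CTRL, KSZ886X_CTRL_FORCE_LINK,
			 phydev->autoneg == AUTONEG_DISABLE ?
			 KSZ886X_CTRL_FORCE_LINK : 0);
	if (ret)
		return ret;

	return genphy_config_aneg(phydev);
}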
2023-10-20net: dsa: microchip: ksz8: Enable MIIM PHY Control reg accessOleksij Rempel
Provide access to the MIIM PHY Control register (Reg. 31) through the ksz8_r_phy_ctrl() and ksz8_w_phy_ctrl() functions. This is necessary for an upcoming micrel.c patch addressing forced link mode configuration. Closes: https://lore.kernel.org/oe-kbuild-all/202310112224.iYgvjBUy-lkp@intel.com/ Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: spectrum: Set SW LAG mode on Spectrum>1Petr Machata
On Spectrum-2, Spectrum-3 and Spectrum-4 machines, request SW responsibility for placement of the LAG table. On Spectrum-1, some FW versions claim to support the lag_mode field despite quietly ignoring any settings made to that field. Thus refrain from attempting to configure lag_mode on those systems at all. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: spectrum: Allocate LAG table when in SW LAG modePetr Machata
In this patch, if the LAG mode is SW, allocate the LAG table and configure SGCR to indicate where it was allocated. We use the default "DDD" (for dynamic data duplication) layout of the LAG table. In the DDD mode, the membership information for each LAG is copied in 8 PGT entries. This is done for performance reasons. The LAG table then needs to be allocated on an address aligned to 8. Deal with this by moving the LAG init ahead so that the LAG table is allocated at address 0. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: spectrum_pgt: Generalize PGT allocationPetr Machata
PGT blocks are allocated through the function mlxsw_sp_pgt_mid_alloc_range(). The interface assumes that the caller knows which piece of PGT exactly they want to get. That was fine while the FID code was the only client allocating blocks of PGT. However for SW-allocated LAG table, there will be an additional client: mlxsw_sp_lag_init(). The interface should therefore be changed to not require particular coordinates, but to take just the requested size, allocate the block wherever, and give back the PGT address. In this patch, change the interface accordingly. Initialize FID family's pgt_base from the result of the PGT allocation (note that mlxsw makes a copy of the family structure, so what gets initialized is not actually the global structure). Drop the now-unnecessary pgt_base initializations and the corresponding defines. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: spectrum_fid: Allocate PGT for the whole FID family in one goPetr Machata
PGT blocks are allocated through the function mlxsw_sp_pgt_mid_alloc_range(). The interface assumes that the caller knows which piece of PGT exactly they want to get. That was fine while the FID code was the only client allocating blocks of PGT. However for SW-allocated LAG table, there will be an additional client: mlxsw_sp_lag_init(). The interface should therefore be changed to not require particular coordinates, but to take just the requested size, allocate the block wherever, and give back the PGT address. The current FID mode has one place where PGT address can be stored: the FID family's pgt_base. The allocation scheme should therefore be changed from allocating a block per FID flood table, to allocating a block per FID family. Do just that in this patch. The per-family allocation is going to be useful for another related feature as well: the CFF mode. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: pci: Permit toggling LAG modePetr Machata
Add to struct mlxsw_config_profile a field lag_mode_prefer_sw for the driver to indicate that SW LAG mode should be configured if possible. Add code to the PCI module to set lag_mode as appropriate. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: core, pci: Add plumbing related to LAG modePetr Machata
lag_mode describes where the responsibility for LAG table placement lies: SW or FW. The bus module determines whether LAG is supported, can configure it if it is, and knows what (if any) configuration has been applied. Therefore add a bus callback to determine the configured LAG mode. Also add to core an API to query it. The LAG mode is for now kept at the default value of 0 for FW-managed. The code to actually toggle it will be added later. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: cmd: Add QUERY_FW.lag_mode_supportPetr Machata
Add QUERY_FW.lag_mode_support, which determines whether CONFIG_PROFILE.lag_mode is available. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: cmd: Add CONFIG_PROFILE.{set_, }lag_modePetr Machata
Add CONFIG_PROFILE.lag_mode, which serves to move responsibility for placement of the LAG table from FW to SW. Whether lag_mode should be configured is determined by CONFIG_PROFILE.set_lag_mode, which is also added here. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: cmd: Fix omissions in CONFIG_PROFILE field names in commentsPetr Machata
A number of CONFIG_PROFILE fields' comments refer to a field named like cmd_mbox_config_* instead of cmd_mbox_config_profile_*. Correct these omissions. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: reg: Add SGCR.lag_lookup_pgt_basePetr Machata
Add SGCR.lag_lookup_pgt_base, which is used for configuring the base address of the LAG table within the PGT table for cases when the driver is responsible for the table placement. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: reg: Drop SGCR.llbPetr Machata
SGCR, Switch General Configuration Register, has not been used since commit b0d80c013b04 ("mlxsw: Remove Mellanox SwitchX-2 ASIC support"). We will need the register again shortly, so instead of dropping it and reintroducing again, just drop the sole unused field. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20iavf: initialize waitqueues before starting watchdog_taskMichal Schmidt
It is not safe to initialize the waitqueues after queueing the watchdog_task. It will be using them. The chance of this causing a real problem is very small, because there will be some sleeping before any of the waitqueues get used. I got a crash only after inserting an artificial sleep in iavf_probe. Queue the watchdog_task as the last step in iavf_probe. Add a comment to prevent repeating the mistake. Fixes: fe2647ab0c99 ("i40evf: prevent VF close returning before state transitions to DOWN") Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
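A short illustration of the ordering fix; the iavf-style waitqueue names and the delay value are assumptions for the sketch, not verified driver code:

	/* waitqueues must exist before anything that may wake or wait on them */
	init_waitqueue_head(&adapter->down_waitqueue);
	init_waitqueue_head(&adapter->reset_waitqueue);
	init_waitqueue_head(&adapter->vc_waitqueue);

	/* last step of probe: the watchdog may start using the waitqueues */
	queue_delayed_work(adapter->wq, &adapter->watchdog_task,
			   msecs_to_jiffies(30));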
2023-10-20qed: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
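A sketch of the kind of simplification these conversions make, with an invented reporter; it assumes the retained-error variant of the fmsg API from this series, where the calls still return int but only the last one needs to be checked:

/* before: every call had to be checked
 *	err = devlink_fmsg_obj_nest_start(fmsg);
 *	if (err)
 *		return err;
 *	err = devlink_fmsg_u32_pair_put(fmsg, "fw_heartbeat", hb);
 *	if (err)
 *		return err;
 *	return devlink_fmsg_obj_nest_end(fmsg);
 */
static int foo_reporter_diagnose(struct devlink_health_reporter *reporter,
				 struct devlink_fmsg *fmsg,
				 struct netlink_ext_ack *extack)
{
	u32 hb = 42;	/* invented value */

	/* after: errors are retained inside fmsg, intermediate checks go away */
	devlink_fmsg_obj_nest_start(fmsg);
	devlink_fmsg_u32_pair_put(fmsg, "fw_heartbeat", hb);
	return devlink_fmsg_obj_nest_end(fmsg);
}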
2023-10-20net/mlx5: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20mlxsw: core: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20octeontx2-af: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20hinic: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20bnxt_en: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20pds_core: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20netdevsim: devlink health: use retained error fmsg APIPrzemek Kitszel
Drop unneeded error checking. devlink_fmsg_*() family of functions is now retaining errors, so there is no need to check for them after each call. Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20r8169: fix the KCSAN reported data race in rtl_rx while reading desc->opts1Mirsad Goran Todorovac
KCSAN reported the following data-race bug: ================================================================== BUG: KCSAN: data-race in rtl8169_poll (drivers/net/ethernet/realtek/r8169_main.c:4430 drivers/net/ethernet/realtek/r8169_main.c:4583) r8169 race at unknown origin, with read to 0xffff888117e43510 of 4 bytes by interrupt on cpu 21: rtl8169_poll (drivers/net/ethernet/realtek/r8169_main.c:4430 drivers/net/ethernet/realtek/r8169_main.c:4583) r8169 __napi_poll (net/core/dev.c:6527) net_rx_action (net/core/dev.c:6596 net/core/dev.c:6727) __do_softirq (kernel/softirq.c:553) __irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632) irq_exit_rcu (kernel/softirq.c:647) sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1074 (discriminator 14)) asm_sysvec_apic_timer_interrupt (./arch/x86/include/asm/idtentry.h:645) cpuidle_enter_state (drivers/cpuidle/cpuidle.c:291) cpuidle_enter (drivers/cpuidle/cpuidle.c:390) call_cpuidle (kernel/sched/idle.c:135) do_idle (kernel/sched/idle.c:219 kernel/sched/idle.c:282) cpu_startup_entry (kernel/sched/idle.c:378 (discriminator 1)) start_secondary (arch/x86/kernel/smpboot.c:210 arch/x86/kernel/smpboot.c:294) secondary_startup_64_no_verify (arch/x86/kernel/head_64.S:433) value changed: 0x80003fff -> 0x3402805f Reported by Kernel Concurrency Sanitizer on: CPU: 21 PID: 0 Comm: swapper/21 Tainted: G L 6.6.0-rc2-kcsan-00143-gb5cbe7c00aa0 #41 Hardware name: ASRock X670E PG Lightning/X670E PG Lightning, BIOS 1.21 04/26/2023 ================================================================== drivers/net/ethernet/realtek/r8169_main.c: ========================================== 4429 → 4430 status = le32_to_cpu(desc->opts1); 4431 if (status & DescOwn) 4432 break; 4433 4434 /* This barrier is needed to keep us from reading 4435 * any other fields out of the Rx descriptor until 4436 * we know the status of DescOwn 4437 */ 4438 dma_rmb(); 4439 4440 if (unlikely(status & RxRES)) { 4441 if (net_ratelimit()) 4442 netdev_warn(dev, "Rx ERROR. status = %08x\n", Marco Elver explained that dma_rmb() doesn't prevent the compiler to tear up the access to desc->opts1 which can be written to concurrently. READ_ONCE() should prevent that from happening: 4429 → 4430 status = le32_to_cpu(READ_ONCE(desc->opts1)); 4431 if (status & DescOwn) 4432 break; 4433 As the consequence of this fix, this KCSAN warning was eliminated. Fixes: 6202806e7c03a ("r8169: drop member opts1_mask from struct rtl8169_private") Suggested-by: Marco Elver <elver@google.com> Cc: Heiner Kallweit <hkallweit1@gmail.com> Cc: nic_swsd@realtek.com Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: netdev@vger.kernel.org Link: https://lore.kernel.org/lkml/dc7fc8fa-4ea4-e9a9-30a6-7c83e6b53188@alu.unizg.hr/ Signed-off-by: Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Acked-by: Marco Elver <elver@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20r8169: fix the KCSAN reported data-race in rtl_tx while reading ↵Mirsad Goran Todorovac
TxDescArray[entry].opts1 KCSAN reported the following data-race: ================================================================== BUG: KCSAN: data-race in rtl8169_poll (drivers/net/ethernet/realtek/r8169_main.c:4368 drivers/net/ethernet/realtek/r8169_main.c:4581) r8169 race at unknown origin, with read to 0xffff888140d37570 of 4 bytes by interrupt on cpu 21: rtl8169_poll (drivers/net/ethernet/realtek/r8169_main.c:4368 drivers/net/ethernet/realtek/r8169_main.c:4581) r8169 __napi_poll (net/core/dev.c:6527) net_rx_action (net/core/dev.c:6596 net/core/dev.c:6727) __do_softirq (kernel/softirq.c:553) __irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632) irq_exit_rcu (kernel/softirq.c:647) sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1074 (discriminator 14)) asm_sysvec_apic_timer_interrupt (./arch/x86/include/asm/idtentry.h:645) cpuidle_enter_state (drivers/cpuidle/cpuidle.c:291) cpuidle_enter (drivers/cpuidle/cpuidle.c:390) call_cpuidle (kernel/sched/idle.c:135) do_idle (kernel/sched/idle.c:219 kernel/sched/idle.c:282) cpu_startup_entry (kernel/sched/idle.c:378 (discriminator 1)) start_secondary (arch/x86/kernel/smpboot.c:210 arch/x86/kernel/smpboot.c:294) secondary_startup_64_no_verify (arch/x86/kernel/head_64.S:433) value changed: 0xb0000042 -> 0x00000000 Reported by Kernel Concurrency Sanitizer on: CPU: 21 PID: 0 Comm: swapper/21 Tainted: G L 6.6.0-rc2-kcsan-00143-gb5cbe7c00aa0 #41 Hardware name: ASRock X670E PG Lightning/X670E PG Lightning, BIOS 1.21 04/26/2023 ================================================================== The read side is in drivers/net/ethernet/realtek/r8169_main.c ========================================= 4355 static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp, 4356 int budget) 4357 { 4358 unsigned int dirty_tx, bytes_compl = 0, pkts_compl = 0; 4359 struct sk_buff *skb; 4360 4361 dirty_tx = tp->dirty_tx; 4362 4363 while (READ_ONCE(tp->cur_tx) != dirty_tx) { 4364 unsigned int entry = dirty_tx % NUM_TX_DESC; 4365 u32 status; 4366 → 4367 status = le32_to_cpu(tp->TxDescArray[entry].opts1); 4368 if (status & DescOwn) 4369 break; 4370 4371 skb = tp->tx_skb[entry].skb; 4372 rtl8169_unmap_tx_skb(tp, entry); 4373 4374 if (skb) { 4375 pkts_compl++; 4376 bytes_compl += skb->len; 4377 napi_consume_skb(skb, budget); 4378 } 4379 dirty_tx++; 4380 } 4381 4382 if (tp->dirty_tx != dirty_tx) { 4383 dev_sw_netstats_tx_add(dev, pkts_compl, bytes_compl); 4384 WRITE_ONCE(tp->dirty_tx, dirty_tx); 4385 4386 netif_subqueue_completed_wake(dev, 0, pkts_compl, bytes_compl, 4387 rtl_tx_slots_avail(tp), 4388 R8169_TX_START_THRS); 4389 /* 4390 * 8168 hack: TxPoll requests are lost when the Tx packets are 4391 * too close. Let's kick an extra TxPoll request when a burst 4392 * of start_xmit activity is detected (if it is not detected, 4393 * it is slow enough). -- FR 4394 * If skb is NULL then we come here again once a tx irq is 4395 * triggered after the last fragment is marked transmitted. 4396 */ 4397 if (READ_ONCE(tp->cur_tx) != dirty_tx && skb) 4398 rtl8169_doorbell(tp); 4399 } 4400 } tp->TxDescArray[entry].opts1 is reported to have a data-race and READ_ONCE() fixes this KCSAN warning. 4366 → 4367 status = le32_to_cpu(READ_ONCE(tp->TxDescArray[entry].opts1)); 4368 if (status & DescOwn) 4369 break; 4370 Cc: Heiner Kallweit <hkallweit1@gmail.com> Cc: nic_swsd@realtek.com Cc: "David S. 
Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Marco Elver <elver@google.com> Cc: netdev@vger.kernel.org Link: https://lore.kernel.org/lkml/dc7fc8fa-4ea4-e9a9-30a6-7c83e6b53188@alu.unizg.hr/ Signed-off-by: Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Acked-by: Marco Elver <elver@google.com> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-20r8169: fix the KCSAN reported data-race in rtl_tx() while reading tp->cur_txMirsad Goran Todorovac
KCSAN reported the following data-race: ================================================================== BUG: KCSAN: data-race in rtl8169_poll [r8169] / rtl8169_start_xmit [r8169] write (marked) to 0xffff888102474b74 of 4 bytes by task 5358 on cpu 29: rtl8169_start_xmit (drivers/net/ethernet/realtek/r8169_main.c:4254) r8169 dev_hard_start_xmit (./include/linux/netdevice.h:4889 ./include/linux/netdevice.h:4903 net/core/dev.c:3544 net/core/dev.c:3560) sch_direct_xmit (net/sched/sch_generic.c:342) __dev_queue_xmit (net/core/dev.c:3817 net/core/dev.c:4306) ip_finish_output2 (./include/linux/netdevice.h:3082 ./include/net/neighbour.h:526 ./include/net/neighbour.h:540 net/ipv4/ip_output.c:233) __ip_finish_output (net/ipv4/ip_output.c:311 net/ipv4/ip_output.c:293) ip_finish_output (net/ipv4/ip_output.c:328) ip_output (net/ipv4/ip_output.c:435) ip_send_skb (./include/net/dst.h:458 net/ipv4/ip_output.c:127 net/ipv4/ip_output.c:1486) udp_send_skb (net/ipv4/udp.c:963) udp_sendmsg (net/ipv4/udp.c:1246) inet_sendmsg (net/ipv4/af_inet.c:840 (discriminator 4)) sock_sendmsg (net/socket.c:730 net/socket.c:753) __sys_sendto (net/socket.c:2177) __x64_sys_sendto (net/socket.c:2185) do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120) read to 0xffff888102474b74 of 4 bytes by interrupt on cpu 21: rtl8169_poll (drivers/net/ethernet/realtek/r8169_main.c:4397 drivers/net/ethernet/realtek/r8169_main.c:4581) r8169 __napi_poll (net/core/dev.c:6527) net_rx_action (net/core/dev.c:6596 net/core/dev.c:6727) __do_softirq (kernel/softirq.c:553) __irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632) irq_exit_rcu (kernel/softirq.c:647) common_interrupt (arch/x86/kernel/irq.c:247 (discriminator 14)) asm_common_interrupt (./arch/x86/include/asm/idtentry.h:636) cpuidle_enter_state (drivers/cpuidle/cpuidle.c:291) cpuidle_enter (drivers/cpuidle/cpuidle.c:390) call_cpuidle (kernel/sched/idle.c:135) do_idle (kernel/sched/idle.c:219 kernel/sched/idle.c:282) cpu_startup_entry (kernel/sched/idle.c:378 (discriminator 1)) start_secondary (arch/x86/kernel/smpboot.c:210 arch/x86/kernel/smpboot.c:294) secondary_startup_64_no_verify (arch/x86/kernel/head_64.S:433) value changed: 0x002f4815 -> 0x002f4816 Reported by Kernel Concurrency Sanitizer on: CPU: 21 PID: 0 Comm: swapper/21 Tainted: G L 6.6.0-rc2-kcsan-00143-gb5cbe7c00aa0 #41 Hardware name: ASRock X670E PG Lightning/X670E PG Lightning, BIOS 1.21 04/26/2023 ================================================================== The write side of drivers/net/ethernet/realtek/r8169_main.c is: ================== 4251 /* rtl_tx needs to see descriptor changes before updated tp->cur_tx */ 4252 smp_wmb(); 4253 → 4254 WRITE_ONCE(tp->cur_tx, tp->cur_tx + frags + 1); 4255 4256 stop_queue = !netif_subqueue_maybe_stop(dev, 0, rtl_tx_slots_avail(tp), 4257 R8169_TX_STOP_THRS, 4258 R8169_TX_START_THRS); The read side is the function rtl_tx(): 4355 static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp, 4356 int budget) 4357 { 4358 unsigned int dirty_tx, bytes_compl = 0, pkts_compl = 0; 4359 struct sk_buff *skb; 4360 4361 dirty_tx = tp->dirty_tx; 4362 4363 while (READ_ONCE(tp->cur_tx) != dirty_tx) { 4364 unsigned int entry = dirty_tx % NUM_TX_DESC; 4365 u32 status; 4366 4367 status = le32_to_cpu(tp->TxDescArray[entry].opts1); 4368 if (status & DescOwn) 4369 break; 4370 4371 skb = tp->tx_skb[entry].skb; 4372 rtl8169_unmap_tx_skb(tp, entry); 4373 4374 if (skb) { 4375 pkts_compl++; 4376 bytes_compl += 
skb->len; 4377 napi_consume_skb(skb, budget); 4378 } 4379 dirty_tx++; 4380 } 4381 4382 if (tp->dirty_tx != dirty_tx) { 4383 dev_sw_netstats_tx_add(dev, pkts_compl, bytes_compl); 4384 WRITE_ONCE(tp->dirty_tx, dirty_tx); 4385 4386 netif_subqueue_completed_wake(dev, 0, pkts_compl, bytes_compl, 4387 rtl_tx_slots_avail(tp), 4388 R8169_TX_START_THRS); 4389 /* 4390 * 8168 hack: TxPoll requests are lost when the Tx packets are 4391 * too close. Let's kick an extra TxPoll request when a burst 4392 * of start_xmit activity is detected (if it is not detected, 4393 * it is slow enough). -- FR 4394 * If skb is NULL then we come here again once a tx irq is 4395 * triggered after the last fragment is marked transmitted. 4396 */ → 4397 if (tp->cur_tx != dirty_tx && skb) 4398 rtl8169_doorbell(tp); 4399 } 4400 } Obviously from the code, an earlier detected data-race for tp->cur_tx was fixed in the line 4363: 4363 while (READ_ONCE(tp->cur_tx) != dirty_tx) { but the same solution is required for protecting the other access to tp->cur_tx: → 4397 if (READ_ONCE(tp->cur_tx) != dirty_tx && skb) 4398 rtl8169_doorbell(tp); The write in the line 4254 is protected with WRITE_ONCE(), but the read in the line 4397 might have suffered read tearing under some compiler optimisations. The fix eliminated the KCSAN data-race report for this bug. It is yet to be evaluated what happens if tp->cur_tx changes between the test in line 4363 and line 4397. This test should certainly not be cached by the compiler in some register for such a long time, while asynchronous writes to tp->cur_tx might have occurred in line 4254 in the meantime. Fixes: 94d8a98e6235c ("r8169: reduce number of workaround doorbell rings") Cc: Heiner Kallweit <hkallweit1@gmail.com> Cc: nic_swsd@realtek.com Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Paolo Abeni <pabeni@redhat.com> Cc: Marco Elver <elver@google.com> Cc: netdev@vger.kernel.org Link: https://lore.kernel.org/lkml/dc7fc8fa-4ea4-e9a9-30a6-7c83e6b53188@alu.unizg.hr/ Signed-off-by: Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Acked-by: Marco Elver <elver@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-19i40e: xsk: remove count_maskMaciej Fijalkowski
Cited commit introduced a neat way of updating next_to_clean that does not require boundary checks on each increment. This was done by masking the new value with (ring length - 1) mask. Problem is that this is applicable only for power of 2 ring sizes, for every other size this assumption can not be made. In turn, it leads to cleaning descriptors out of order as well as splats: [ 1388.411915] Workqueue: events xp_release_deferred [ 1388.411919] RIP: 0010:xp_free+0x1a/0x50 [ 1388.411921] Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 0f 1f 44 00 00 55 48 8b 57 70 48 8d 47 70 48 89 e5 48 39 d0 74 06 <5d> c3 cc cc cc cc 48 8b 57 60 83 82 b8 00 00 00 01 48 8b 57 60 48 [ 1388.411922] RSP: 0018:ffa0000000a83cb0 EFLAGS: 00000206 [ 1388.411923] RAX: ff11000119aa5030 RBX: 000000000000001d RCX: ff110001129b6e50 [ 1388.411924] RDX: ff11000119aa4fa0 RSI: 0000000055555554 RDI: ff11000119aa4fc0 [ 1388.411925] RBP: ffa0000000a83cb0 R08: 0000000000000000 R09: 0000000000000000 [ 1388.411926] R10: 0000000000000001 R11: 0000000000000000 R12: ff11000115829b80 [ 1388.411927] R13: 000000000000005f R14: 0000000000000000 R15: ff11000119aa4fc0 [ 1388.411928] FS: 0000000000000000(0000) GS:ff11000277e00000(0000) knlGS:0000000000000000 [ 1388.411929] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1388.411930] CR2: 00007f1f564e6c14 CR3: 000000000783c005 CR4: 0000000000771ef0 [ 1388.411931] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 1388.411931] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 1388.411932] PKRU: 55555554 [ 1388.411933] Call Trace: [ 1388.411934] <IRQ> [ 1388.411935] ? show_regs+0x6e/0x80 [ 1388.411937] ? watchdog_timer_fn+0x1d2/0x240 [ 1388.411939] ? __pfx_watchdog_timer_fn+0x10/0x10 [ 1388.411941] ? __hrtimer_run_queues+0x10e/0x290 [ 1388.411945] ? clockevents_program_event+0xae/0x130 [ 1388.411947] ? hrtimer_interrupt+0x105/0x240 [ 1388.411949] ? __sysvec_apic_timer_interrupt+0x54/0x150 [ 1388.411952] ? sysvec_apic_timer_interrupt+0x7f/0x90 [ 1388.411955] </IRQ> [ 1388.411955] <TASK> [ 1388.411956] ? asm_sysvec_apic_timer_interrupt+0x1f/0x30 [ 1388.411958] ? xp_free+0x1a/0x50 [ 1388.411960] i40e_xsk_clean_rx_ring+0x5d/0x100 [i40e] [ 1388.411968] i40e_clean_rx_ring+0x14c/0x170 [i40e] [ 1388.411977] i40e_queue_pair_disable+0xda/0x260 [i40e] [ 1388.411986] i40e_xsk_pool_setup+0x192/0x1d0 [i40e] [ 1388.411993] i40e_reconfig_rss_queues+0x1f0/0x1450 [i40e] [ 1388.412002] xp_disable_drv_zc+0x73/0xf0 [ 1388.412004] ? mutex_lock+0x17/0x50 [ 1388.412007] xp_release_deferred+0x2b/0xc0 [ 1388.412010] process_one_work+0x178/0x350 [ 1388.412011] ? __pfx_worker_thread+0x10/0x10 [ 1388.412012] worker_thread+0x2f7/0x420 [ 1388.412014] ? __pfx_worker_thread+0x10/0x10 [ 1388.412015] kthread+0xf8/0x130 [ 1388.412017] ? __pfx_kthread+0x10/0x10 [ 1388.412019] ret_from_fork+0x3d/0x60 [ 1388.412021] ? __pfx_kthread+0x10/0x10 [ 1388.412023] ret_from_fork_asm+0x1b/0x30 [ 1388.412026] </TASK> It comes from picking wrong ring entries when cleaning xsk buffers during pool detach. Remove the count_mask logic and use they boundary check when updating next_to_process (which used to be a next_to_clean). 
Fixes: c8a8ca3408dc ("i40e: remove unnecessary memory writes of the next to clean pointer") Reported-by: Tushar Vyavahare <tushar.vyavahare@intel.com> Tested-by: Tushar Vyavahare <tushar.vyavahare@intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231018163908.40841-1-maciej.fijalkowski@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
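A minimal sketch of the difference: the mask-based increment only works for power-of-2 ring sizes, while a boundary check works for any size:

#include <linux/types.h>

static inline u32 ring_idx_next(u32 idx, u32 count)
{
	/* broken for non-power-of-2 rings:
	 *	return (idx + 1) & (count - 1);
	 */
	if (++idx == count)
		idx = 0;
	return idx;
}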
2023-10-19Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski
Cross-merge networking fixes after downstream PR. net/mac80211/key.c 02e0e426a2fb ("wifi: mac80211: fix error path key leak") 2a8b665e6bcc ("wifi: mac80211: remove key_mtx") 7d6904bf26b9 ("Merge wireless into wireless-next") https://lore.kernel.org/all/20231012113648.46eea5ec@canb.auug.org.au/ Adjacent changes: drivers/net/ethernet/ti/Kconfig a602ee3176a8 ("net: ethernet: ti: Fix mixed module-builtin object") 98bdeae9502b ("net: cpmac: remove driver to prepare for platform removal") Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-19Merge tag 'net-6.6-rc7' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Pull networking fixes from Jakub Kicinski: "Including fixes from bluetooth, netfilter, WiFi. Feels like an up-tick in regression fixes, mostly for older releases. The hfsc fix, tcp_disconnect() and Intel WWAN fixes stand out as fairly clear-cut user reported regressions. The mlx5 DMA bug was causing strife for 390x folks. The fixes themselves are not particularly scary, tho. No open investigations / outstanding reports at the time of writing. Current release - regressions: - eth: mlx5: perform DMA operations in the right locations, make devices usable on s390x, again - sched: sch_hfsc: upgrade 'rt' to 'sc' when it becomes a inner curve, previous fix of rejecting invalid config broke some scripts - rfkill: reduce data->mtx scope in rfkill_fop_open, avoid deadlock - revert "ethtool: Fix mod state of verbose no_mask bitset", needs more work Current release - new code bugs: - tcp: fix listen() warning with v4-mapped-v6 address Previous releases - regressions: - tcp: allow tcp_disconnect() again when threads are waiting, it was denied to plug a constant source of bugs but turns out .NET depends on it - eth: mlx5: fix double-free if buffer refill fails under OOM - revert "net: wwan: iosm: enable runtime pm support for 7560", it's causing regressions and the WWAN team at Intel disappeared - tcp: tsq: relax tcp_small_queue_check() when rtx queue contains a single skb, fix single-stream perf regression on some devices Previous releases - always broken: - Bluetooth: - fix issues in legacy BR/EDR PIN code pairing - correctly bounds check and pad HCI_MON_NEW_INDEX name - netfilter: - more fixes / follow ups for the large "commit protocol" rework, which went in as a fix to 6.5 - fix null-derefs on netlink attrs which user may not pass in - tcp: fix excessive TLP and RACK timeouts from HZ rounding (bless Debian for keeping HZ=250 alive) - net: more strict VIRTIO_NET_HDR_GSO_UDP_L4 validation, prevent letting frankenstein UDP super-frames from getting into the stack - net: fix interface altnames when ifc moves to a new namespace - eth: qed: fix the size of the RX buffers - mptcp: avoid sending RST when closing the initial subflow" * tag 'net-6.6-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (94 commits) Revert "ethtool: Fix mod state of verbose no_mask bitset" selftests: mptcp: join: no RST when rm subflow/addr mptcp: avoid sending RST when closing the initial subflow mptcp: more conservative check for zero probes tcp: check mptcp-level constraints for backlog coalescing selftests: mptcp: join: correctly check for no RST net: ti: icssg-prueth: Fix r30 CMDs bitmasks selftests: net: add very basic test for netdev names and namespaces net: move altnames together with the netdevice net: avoid UAF on deleted altname net: check for altname conflicts when changing netdev's netns net: fix ifname in netlink ntf during netns move net: ethernet: ti: Fix mixed module-builtin object net: phy: bcm7xxx: Add missing 16nm EPHY statistics ipv4: fib: annotate races around nh->nh_saddr_genid and nh->nh_saddr tcp_bpf: properly release resources on error paths net/sched: sch_hfsc: upgrade 'rt' to 'sc' when it becomes a inner curve net: mdio-mux: fix C45 access returning -EIO after API change tcp: tsq: relax tcp_small_queue_check() when rtx queue contains a single skb octeon_ep: update BQL sent bytes before ringing doorbell ...
2023-10-19net: ti: icssg-prueth: Fix r30 CMDs bitmasksMD Danish Anwar
The bitmasks for EMAC_PORT_DISABLE and EMAC_PORT_FORWARD r30 commands are wrong in the driver. Update the bitmasks of these commands to the correct ones as used by the ICSSG firmware. These bitmasks are backwards compatible and work with any ICSSG firmware version. Fixes: e9b4ece7d74b ("net: ti: icssg-prueth: Add Firmware config and classification APIs.") Signed-off-by: MD Danish Anwar <danishanwar@ti.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20231018150715.3085380-1-danishanwar@ti.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-19i40e: Align devlink info versions with ice driver and add docsIvan Vecera
Align devlink info versions with the ice driver: change the 'fw.mgmt' version to a 2-digit version [major.minor], add 'fw.mgmt.build' that reports the mgmt firmware build number, and use 'fw.psid.api' for the NVM format version instead of the incorrect 'fw.psid'. Additionally add the missing i40e devlink documentation. Fixes: 5a423552e0d9 ("i40e: Add handler for devlink .info_get") Signed-off-by: Ivan Vecera <ivecera@redhat.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231018123558.552453-1-ivecera@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
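A hedged sketch of reporting the aligned version names from a devlink .info_get handler; the values are invented and only the essential error handling is kept:

static int i40e_devlink_info_get_sketch(struct devlink *dl,
					struct devlink_info_req *req,
					struct netlink_ext_ack *extack)
{
	unsigned int fw_major = 9, fw_minor = 30, fw_build = 1234;	/* invented */
	unsigned int nvm_hi = 9, nvm_lo = 0x40;				/* invented */
	char buf[32];
	int err;

	snprintf(buf, sizeof(buf), "%u.%u", fw_major, fw_minor);
	err = devlink_info_version_running_put(req, "fw.mgmt", buf);
	if (err)
		return err;

	snprintf(buf, sizeof(buf), "%u", fw_build);
	err = devlink_info_version_running_put(req, "fw.mgmt.build", buf);
	if (err)
		return err;

	snprintf(buf, sizeof(buf), "%x.%02x", nvm_hi, nvm_lo);
	return devlink_info_version_running_put(req, "fw.psid.api", buf);
}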
2023-10-19net: stmmac: increase TX coalesce timer to 5msChristian Marangi
Commit 8fce33317023 ("net: stmmac: Rework coalesce timer and fix multi-queue races") decreased the TX coalesce timer from 40ms to 1ms. This caused a performance regression on some targets (reported at least on ipq806x) on the order of 600mbps, dropping from gigabit handling to only 200mbps. The problem was identified as the TX timer getting armed too many times. While this was fixed and improved in another commit, performance can be improved even further by increasing the timer delay from 1ms to 5ms. The value is a good balance between battery saving, by preventing too many interrupts from being generated, and good performance for internet-oriented devices. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-19net: stmmac: move TX timer arm after DMA enableChristian Marangi
Move the TX timer arm call to after the DMA interrupt is enabled again. The TX timer arm function's logic changed and it is now skipped if a NAPI is already scheduled. By moving the TX timer arm call to after DMA is enabled, we allow it to be correctly skipped if a DMA interrupt has fired and a NAPI has been scheduled again. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-19net: stmmac: improve TX timer arm logicChristian Marangi
There is currently a problem with the TX timer getting armed multiple unnecessary times, causing a big performance regression on devices that suffer from expensive hrtimer rearming. The use of the TX timer is an old implementation that predates the NAPI implementation and the interrupt enable/disable handling. Because stmmac is very old code, the TX timer was never re-evaluated against this newer implementation and was kept in place, causing a performance regression. The regression started to appear with kernel version 4.19 and commit 8fce33317023 ("net: stmmac: Rework coalesce timer and fix multi-queue races"), where the timer was reduced to 1ms, causing it to be armed 40 times more often than before. Decreasing the timer made the problem more visible and caused a regression on the order of 600-700mbps on some devices (the regression was noticed on ipq806x). The problem is that handling the hrtimer on some targets is expensive, and recent kernels arm the timer many more times. A proposed solution was to revert the hrtimer change and use mod_timer, but that would still hide the real problem in the current implementation. To fix the regression, apply some additional logic and skip arming the timer when not needed: arm the timer ONLY if a NAPI is not already scheduled, since running the timer would be redundant (the same function, stmmac_tx_clean, will run in the NAPI TX poll). Also try to cancel any pending timer if a NAPI is scheduled, to prevent a redundant run of the TX clean. With this new logic the original performance is restored while still using the hrtimer. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
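An illustrative fragment of the new arm logic; the field names mirror the stmmac style but are not guaranteed to match the driver exactly:

	/* only (re)arm the coalesce timer when no TX NAPI poll is pending,
	 * since stmmac_tx_clean() will run from the poll anyway; otherwise
	 * try to cancel a timer that would only do redundant work
	 */
	if (!napi_is_scheduled(&ch->tx_napi))
		hrtimer_start(&tx_q->txtimer,
			      STMMAC_COAL_TIMER(priv->tx_coal_timer[queue]),
			      HRTIMER_MODE_REL);
	else
		hrtimer_try_to_cancel(&tx_q->txtimer);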
2023-10-19net: introduce napi_is_scheduled helperChristian Marangi
We currently have napi_if_scheduled_mark_missed, which can be used to check if a NAPI is scheduled, but it does more than simply checking that and returning a bool. Some drivers already implement custom functions to check if a NAPI is scheduled. Drop these custom functions and introduce napi_is_scheduled, which simply checks atomically whether a NAPI is scheduled. Update any driver and code that implements a similar check to use this new helper instead. Signed-off-by: Christian Marangi <ansuelsmth@gmail.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
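The helper boils down to an atomic test of the SCHED bit in the NAPI state; a minimal sketch of what it looks like (the in-tree version lives in linux/netdevice.h), followed by an illustrative driver-side use:

static inline bool napi_is_scheduled(struct napi_struct *n)
{
	return test_bit(NAPI_STATE_SCHED, &n->state);
}

	/* example use in a driver hot path: skip redundant work when the
	 * poll is going to run anyway ("ch" is illustrative)
	 */
	if (napi_is_scheduled(&ch->tx_napi))
		return;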
2023-10-19iavf: delete unused iavf_mac_info fieldsMichal Schmidt
'san_addr' and 'mac_fcoeq' members of struct iavf_mac_info are unused. 'type' is write-only. Delete all three. The function iavf_set_mac_type that sets 'type' also checks if the PCI vendor ID is Intel. This is unnecessary. Delete the whole function. If in the future there's a need for the MAC type (or other PCI ID-dependent data), I would prefer to use .driver_data in iavf_pci_tbl[] for this purpose. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231018111527.78194-1-mschmidt@redhat.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-19net: stmmac: do not silently change auxiliary snapshot capture channelJohannes Zink
Even though the hardware theoretically supports up to 4 simultaneous auxiliary snapshot capture channels, the stmmac driver only supports a single active channel at a time. Previously, on a PTP_CLK_REQ_EXTTS request, any previously active auxiliary snapshot capture channel was silently dropped and the new channel was activated. Instead of silently changing the state for all consumers, log an error and return -EBUSY if a channel is already in use, to signal to userspace that it must disable the currently active channel before enabling another one. Signed-off-by: Johannes Zink <j.zink@pengutronix.de> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-19net: stmmac: ptp: stmmac_enable(): move change of plat->flags into mutexJohannes Zink
This is a preparation patch. The next patch will check if an external TS is active and return an error. So we have to move the update of the plat->flags bit that tracks whether external timestamping is enabled to after that check. Prepare for this change by moving the plat->flags update inside the mutex and the if (on) branch. Signed-off-by: Johannes Zink <j.zink@pengutronix.de> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-10-19net: stmmac: intel: remove unnecessary field struct plat_stmmacenet_data::ext_snapshot_numJohannes Zink
Do not store a bitmask for enabling AUX_SNAPSHOT0. The previous commit ("net: stmmac: fix PPS capture input index") takes care of calculating the proper bit mask from the request data's extts.index field, which is 0 if not explicitly specified otherwise. Signed-off-by: Johannes Zink <j.zink@pengutronix.de> Signed-off-by: Paolo Abeni <pabeni@redhat.com>