path: root/drivers
2021-10-19ethernet: renesas: use eth_hw_addr_set()Jakub Kicinski
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount of VLANs...") introduced a rbtree for faster Ethernet address look up. To maintain netdev->dev_addr in this tree we need to make all the writes to it go through appropriate helpers. Break the address up into an array on the stack, then call eth_hw_addr_set(). Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
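For context, a minimal sketch of the conversion pattern applied in this and the similar eth_hw_addr_set() patches below (the register layout and the example function are hypothetical; each driver reads the address from its own hardware):

    #include <linux/etherdevice.h>
    #include <linux/io.h>

    static void example_read_mac(struct net_device *ndev, void __iomem *ioaddr)
    {
            u8 addr[ETH_ALEN];
            int i;

            /* Build the address in a local buffer instead of writing
             * ndev->dev_addr[i] directly, then install it via the helper.
             */
            for (i = 0; i < ETH_ALEN; i++)
                    addr[i] = ioread8(ioaddr + i);

            eth_hw_addr_set(ndev, addr);
    }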
2021-10-19ethernet: r8169: use eth_hw_addr_set()Jakub Kicinski
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount of VLANs...") introduced a rbtree for faster Ethernet address look up. To maintain netdev->dev_addr in this tree we need to make all the writes to it go through appropriate helpers. Read the address into an array on the stack, then call eth_hw_addr_set(). Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19ethernet: netxen: use eth_hw_addr_set()Jakub Kicinski
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount of VLANs...") introduced a rbtree for faster Ethernet address look up. To maintain netdev->dev_addr in this tree we need to make all the writes to it go through appropriate helpers. Invert the address into an array on the stack, then call eth_hw_addr_set(). Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19ethernet: lpc: use eth_hw_addr_set()Jakub Kicinski
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount of VLANs...") introduced a rbtree for faster Ethernet address look up. To maintain netdev->dev_addr in this tree we need to make all the writes to it go through appropriate helpers. Read the address into an array on the stack, then call eth_hw_addr_set(). Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19ethernet: sky2/skge: use eth_hw_addr_set()Jakub Kicinski
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount of VLANs...") introduced a rbtree for faster Ethernet address look up. To maintain netdev->dev_addr in this tree we need to make all the writes to it go through appropriate helpers. Read the address into an array on the stack, then call eth_hw_addr_set(). Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19ethernet: mv643xx: use eth_hw_addr_set()Jakub Kicinski
Commit 406f42fa0d3c ("net-next: When a bond have a massive amount of VLANs...") introduced a rbtree for faster Ethernet address look up. To maintain netdev->dev_addr in this tree we need to make all the writes to it go through appropriate helpers. Read the address into an array on the stack, then call eth_hw_addr_set(). Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19RDMA/mlx5: Move struct mlx5_core_mkey to mlx5_ibAharon Landau
Move mlx5_core_mkey struct to mlx5_ib, as the mlx5_core doesn't use it at this point. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19RDMA/mlx5: Replace struct mlx5_core_mkey by u32 keyAharon Landau
In mlx5_core and vdpa there is no use of mlx5_core_mkey members except for the key itself. As preparation for moving mlx5_core_mkey to mlx5_ib, the occurrences of struct mlx5_core_mkey in all modules except for mlx5_ib are replaced by a u32 key. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19RDMA/mlx5: Remove pd from struct mlx5_core_mkeyAharon Landau
There is no read of mkey->pd, only write. Remove it. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19RDMA/mlx5: Remove size from struct mlx5_core_mkeyAharon Landau
mkey->size is already stored in ibmr->length, no need to store it here. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19RDMA/mlx5: Remove iova from struct mlx5_core_mkeyAharon Landau
iova is already stored in ibmr->iova, no need to store it here. Signed-off-by: Aharon Landau <aharonl@nvidia.com> Reviewed-by: Shay Drory <shayd@nvidia.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2021-10-19mlxsw: spectrum_qdisc: Make RED, TBF offloads classfulPetr Machata
Permit offloading qdiscs below RED and TBF. In order to avoid having to implement trivial propagating callbacks for get_prio_bitmap and get_tclass_num, extend mlxsw_sp_qdisc_get_prio_bitmap() and ..._get_tclass_num() to handle the lack of the callback as a cue to forward the request to the parent. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
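A minimal sketch of that fall-back, assuming a parent pointer and an ops table (names are illustrative, not the actual mlxsw definitions):

    static u8 mlxsw_sp_qdisc_get_prio_bitmap(struct mlxsw_sp_port *mlxsw_sp_port,
                                              struct mlxsw_sp_qdisc *qdisc)
    {
            /* A missing callback is the cue to forward the request to the parent. */
            if (!qdisc->ops->get_prio_bitmap)
                    return mlxsw_sp_qdisc_get_prio_bitmap(mlxsw_sp_port,
                                                          qdisc->parent);

            return qdisc->ops->get_prio_bitmap(mlxsw_sp_port, qdisc);
    }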
2021-10-19mlxsw: spectrum_qdisc: Validate qdisc topologyPetr Machata
A following patch will enable offloading qdiscs that are deeper than directly under root qdisc. Currently the topology validation consists of demanding a root qdisc position for ETS and PRIO. Since RED and TBF are considered classless, this is enough. In order to prevent some nonsensical combinations when RED and TBF become classful, introduce a more general topology validator. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19mlxsw: spectrum_qdisc: Clean stats recursively when priomap changesPetr Machata
On Spectrum, there are no per-TC TX counters. Instead, mlxsw uses per-prio counters and aggregates them according to the priomap. Therefore when priomap changes, the counter base values need to be reset to reflect the change. Previously, this was only done for the sole child qdisc, but a following patch makes RED and TBF classful. Thus apply the request to the whole sub-tree. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19mlxsw: spectrum_qdisc: Unify graft validationPetr Machata
Qdisc graft operations have so far been reported at PRIO, ETS and RED, with RED events ignored, because RED was not considered a classful qdisc. A following patch will make mlxsw recognize RED and TBF as classful qdiscs, and thus it is necessary to validate grafting at these qdiscs as well. Rename the existing graft validator to make it clear that it is a generic function, and invoke for RED and TBF graft events as well. Drop the unnecessary PRIO helper and invoke the graft validator directly for PRIO as well. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19mlxsw: spectrum_qdisc: Destroy children in mlxsw_sp_qdisc_destroy()Petr Machata
Currently ETS and PRIO are the only offloaded classful qdiscs. Since they are both similar, their destroy handler is the same, and it handles children destruction itself. But now it is possible to do it generically for any classful qdisc. Therefore promote the recursive destruction from the ETS handler to mlxsw_sp_qdisc_destroy(), so that RED and TBF pick it up in follow-up patches. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19mlxsw: spectrum_qdisc: Extract two helpers for handling future FIFOsPetr Machata
Extract from __mlxsw_sp_qdisc_ets_replace() two helpers: one for handling a single future FIFO, and one for reinitializing the array of future FIFOs. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19mlxsw: spectrum_qdisc: Query tclass / priomap instead of caching itPetr Machata
Currently when keeping track of qdiscs, mlxsw notes the TC and priomap corresponding to each qdisc. That is fine currently, as there only ever is one level of qdiscs to update: the direct children of ETS / PRIO. However as deeper structures are made offloadable, ETS would need to update these values for the complete subtree, and interim qdiscs would need to remember to propagate the value. Instead, reverse the responsibility: child qdiscs can ask their parent what their TC and priomap are. ETS / PRIO know the answer right away, or there are defaults for when the root qdisc does not assign them (e.g. when RED is used as root qdisc). When RED and TBF become classful, they will simply forward the request up to their parent. Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19Merge tag 'mlx5-updates-2021-10-18' of ↵David S. Miller
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says: mlx5-updates-2021-10-18

Maor Gottlieb says:
========================
Use hash to select the affinity port in VF LAG

Current VF LAG architecture is based on QP association with a port. A QP must be created after LAG is enabled to allow association with a non-native port. VM packets going on the slow path to the eSwitch manager (SW path or hairpin) will be transmitted through a different QP than the VM's. This means that different packets of the same flow might egress from different physical ports.

This patch-set solves this issue by moving the port selection to be based on the hash function defined by the bond. When the device is moved to VF LAG mode, the driver creates TTC (traffic type classifier) flow tables in order to classify the packet and steer it to the relevant hash function, similar to what is done in the mlx5 RSS implementation. Each rule in the TTC table forwards the packet to a port selection flow table, which has one hash split flow group containing two "catch all" flow table entries. Each entry points to the relevant uplink port, as shown below:

             -------------------
             |       FT        |
 TTC rule -> |   -----------   |
             |   | FG| FTE --|-|-----> uplink of port #1
             |   |    | FTE --|-|-----> uplink of port #2
             |   -----------   |
             -------------------

A hash split flow group is a flow group created with type HASH_SPLIT and associated with a match definer. The match definer defines the fields included in the hash calculation. The driver creates the match definer according to the xmit hash policy of the bond driver.

Patches overview:
========================
Minor E-Switch updates:
- Patch #12, dynamic allocation of dest array
- Patch #13, increase number of forward destinations to 32

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-19Merge branch '40GbE' of ↵David S. Miller
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue Mateusz Palczewski says: ==================== 40GbE Intel Wired LAN Driver Updates 2021-10-18 Use a single state machine for driver initialization and for servicing the initialized driver. The init state machine implemented in init_task() is merged into the watchdog_task(). The init_task() function is removed. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-18scsi: ufs: ufs-pci: Force a full restore after suspend-to-diskAdrian Hunter
Implement the ->restore() PM operation and set the link to off, which will force a full reset and restore. This ensures that Host Performance Booster is reset after suspend-to-disk. The Host Performance Booster feature caches logical-to-physical mapping information in the host memory. After suspend-to-disk, such information is not valid, so a full reset and restore is needed. A full reset and restore is done if the SPM level is 5 or 6, but not for other SPM levels, so this change fixes those cases. A full reset and restore also restores base address registers, so that code is removed. Link: https://lore.kernel.org/r/20211018151004.284200-2-adrian.hunter@intel.com Reviewed-by: Avri Altman <avri.altman@wdc.com> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2021-10-18scsi: qla2xxx: Fix unmap of already freed sglDmitry Bogdanov
The sgl is freed in the target stack in target_release_cmd_kref() before calling qlt_free_cmd() but there is an unmap of sgl in qlt_free_cmd() that causes a panic if sgl is not yet DMA unmapped:

 NIP dma_direct_unmap_sg+0xdc/0x180
 LR  dma_direct_unmap_sg+0xc8/0x180
 Call Trace:
  ql_dbg_prefix+0x68/0xc0 [qla2xxx] (unreliable)
  dma_unmap_sg_attrs+0x54/0xf0
  qlt_unmap_sg.part.19+0x54/0x1c0 [qla2xxx]
  qlt_free_cmd+0x124/0x1d0 [qla2xxx]
  tcm_qla2xxx_release_cmd+0x4c/0xa0 [tcm_qla2xxx]
  target_put_sess_cmd+0x198/0x370 [target_core_mod]
  transport_generic_free_cmd+0x6c/0x1b0 [target_core_mod]
  tcm_qla2xxx_complete_free+0x6c/0x90 [tcm_qla2xxx]

The sgl may be left unmapped in error cases of response sending. For instance, qlt_rdy_to_xfer() maps sgl and exits when session is being deleted keeping the sgl mapped. This patch removes use-after-free of the sgl and ensures that the sgl is unmapped for any command that was not sent to firmware. Link: https://lore.kernel.org/r/20211018122650.11846-1-d.bogdanov@yadro.com Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Dmitry Bogdanov <d.bogdanov@yadro.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2021-10-18net/mlx5: E-Switch, Increase supported number of forward destinations to 32Maor Dickman
Increase supported number of forward destinations in the same rule, local and remote, from 2 to 32. Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: E-Switch, Use dynamic alloc for dest arrayMaor Dickman
Use dynamic allocation for the dest array in preparation for the next patch, which increases MLX5_MAX_FLOW_FWD_VPORTS and would cause the stack allocation to be bigger than 1024 bytes. Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, use steering to select the affinity port in LAGMaor Gottlieb
Use the steering based solution to select the affinity port when the LAG mode is based on hash policy and the device supports the port selection flow table. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, add support to create/destroy/modify port selectionMaor Gottlieb
Add the create function, which builds the steering tables, TTC and definers according to the LAG hash type. The destroy function destroys all the steering components. The modify function is used when the bond mapping changes; it iterates over all the rules in the definers and modifies them to steer the packet to the relevant active ports. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, add support to create TTC tables for LAG port selectionMaor Gottlieb
Add support to create inner and outer TTC tables for LAG port selection. These tables are used to classify the packets in order to select the related definer. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, add support to create definers for LAGMaor Gottlieb
Every definer will consist of a flow table with a single hash group with exactly two flow table entries, one for each device port. The destination of these entries is the uplink vport according to the port state and hash policy. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, set match mask according to the traffic type bitmapMaor Gottlieb
Set the related bits in the match definer mask according to the TT mapping. This mask will be used to create the match definers. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, set LAG traffic type mappingMaor Gottlieb
Generate a traffic type bitmap that will define which steering objects we need to create for the steering based LAG. Bits in this bitmap are set according to the LAG hash type. In addition, add a field that indicates whether the LAG is in encap mode or not. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Lag, move lag files into directoryMaor Gottlieb
Downstream patches add another lag related file so it makes sense to have all the lag files in a dedicated directory. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Introduce new uplink destination typeMaor Gottlieb
The uplink destination type should be used in rules to steer the packet to the uplink when the device is in steering based LAG mode. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Add support to create match definerMaor Gottlieb
Introduce new APIs to create and destroy a flow matcher for a given format id. The flow match definer object is used for defining the fields and mask used for the hash calculation. The user should mask the desired fields, as done in the match criteria. This object is assigned to a flow group of type hash; in this flow group type, packet lookup is done based on the hash result. This patch also adds the required bits to create such a flow group. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Introduce port selection namespaceMaor Gottlieb
Add a new port selection flow steering namespace. Flow steering rules in this namespace are used to determine the physical port for egress packets. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18net/mlx5: Support partial TTC rulesMaor Gottlieb
Add bitmasks to ttc_params to indicate whether each rule is valid or not. This allows creating a TTC table that supports only part of the traffic types. In later patches, which introduce the steering based LAG port selection, the TTC will be created with only part of the rules, according to the hash type. Signed-off-by: Maor Gottlieb <maorg@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2021-10-18scsi: qla2xxx: Fix a memory leak in an error path of qla2x00_process_els()Joy Gu
Commit 8c0eb596baa5 ("[SCSI] qla2xxx: Fix a memory leak in an error path of qla2x00_process_els()"), intended to change: bsg_job->request->msgcode == FC_BSG_HST_ELS_NOLOGIN to: bsg_job->request->msgcode != FC_BSG_RPT_ELS but changed it to: bsg_job->request->msgcode == FC_BSG_RPT_ELS instead. Change the == to a != to avoid leaking the fcport structure or freeing unallocated memory. Link: https://lore.kernel.org/r/20211012191834.90306-2-jgu@purestorage.com Fixes: 8c0eb596baa5 ("[SCSI] qla2xxx: Fix a memory leak in an error path of qla2x00_process_els()") Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Joy Gu <jgu@purestorage.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2021-10-18scsi: qla2xxx: Return -ENOMEM if kzalloc() failsZheyu Ma
The driver probing function should return < 0 for failure; otherwise the kernel will treat a value > 0 as success. Link: https://lore.kernel.org/r/1634522181-31166-1-git-send-email-zheyuma97@gmail.com Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Zheyu Ma <zheyuma97@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
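For illustration, the usual probe-path allocation pattern (a generic snippet, not the qla2xxx code itself):

    #include <linux/slab.h>

    static int example_probe_alloc(size_t size, void **out)
    {
            void *buf = kzalloc(size, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM; /* negative errno; never return a positive value */

            *out = buf;
            return 0;
    }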
2021-10-18qed: Change the TCP common variable - "iscsi_ooo"Shai Malin
Change the TCP common variable - "iscsi_ooo" to "ooo_opq". This variable is common between all the TCP L5 protocols and not specific to iSCSI. Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: Shai Malin <smalin@marvell.com> Link: https://lore.kernel.org/r/20211015124118.29041-2-smalin@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-18qed: Optimize the ll2 ooo flowShai Malin
Optimize the ll2 TCP out-of-order likely flows: - Optimize the non-error flows of the ll2 ooo data path. - Optimize "QED_OOO_RIGHT_BUF" over "QED_OOO_LEFT_BUF". Signed-off-by: Ariel Elior <aelior@marvell.com> Signed-off-by: Shai Malin <smalin@marvell.com> Link: https://lore.kernel.org/r/20211015124118.29041-1-smalin@marvell.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-10-18nfp: bpf: silence bitwise vs. logical OR warningNathan Chancellor
A new warning in clang points out two places in this driver where boolean expressions are being used with a bitwise OR instead of a logical one:

 drivers/net/ethernet/netronome/nfp/nfp_asm.c:199:20: error: use of bitwise '|' with boolean operands [-Werror,-Wbitwise-instead-of-logical]
         reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg);
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                ||
 drivers/net/ethernet/netronome/nfp/nfp_asm.c:199:20: note: cast one or both operands to int to silence this warning
 drivers/net/ethernet/netronome/nfp/nfp_asm.c:280:20: error: use of bitwise '|' with boolean operands [-Werror,-Wbitwise-instead-of-logical]
         reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg);
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                ||
 drivers/net/ethernet/netronome/nfp/nfp_asm.c:280:20: note: cast one or both operands to int to silence this warning
 2 errors generated.

The motivation for the warning is that logical operations short circuit while bitwise operations do not. In this case, it does not seem like short circuiting is harmful so implement the suggested fix of changing to a logical operation to fix the warning. Link: https://github.com/ClangBuiltLinux/linux/issues/1479 Reported-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20211018193101.2340261-1-nathan@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
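A standalone illustration of the warning and the one-character fix (generic code, not the nfp driver itself):

    #include <stdbool.h>

    /* With boolean operands, '|' and '||' compute the same value, but '||'
     * states the logical intent and avoids -Wbitwise-instead-of-logical.
     */
    static bool combine(bool a, bool b)
    {
            return a || b;  /* was: a | b */
    }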
2021-10-18drm/msm/devfreq: Restrict idle clamping to a618 for nowRob Clark
Until we better understand the stability issues caused by frequent frequency changes, lets limit them to a618. Signed-off-by: Rob Clark <robdclark@chromium.org> Tested-by: John Stultz <john.stultz@linaro.org> Tested-by: Caleb Connolly <caleb.connolly@linaro.org> Link: https://lore.kernel.org/r/20211018153627.2787882-1-robdclark@gmail.com Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-10-18iavf: Combine init and watchdog state machinesMateusz Palczewski
Use a single state machine for driver initialization and for servicing the initialized driver. The init state machine implemented in init_task() is merged into the watchdog_task(). The init_task() function is removed. Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com> Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-10-18iavf: Add __IAVF_INIT_FAILED stateMateusz Palczewski
This commit adds a new state, __IAVF_INIT_FAILED to the state machine. From now on initialization functions report errors not by returning an error value, but by changing the state to indicate that something went wrong. Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com> Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-10-18iavf: Refactor iavf state machine trackingMateusz Palczewski
Replace state changes of iavf state machine with a method that also tracks the previous state the machine was on. This change is required for further work with refactoring init and watchdog state machines. Tracking of previous state would help us recover iavf after failure has occurred. Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com> Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
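A minimal sketch of such a tracking helper, with field and helper names assumed for illustration rather than taken verbatim from the iavf code:

    static void iavf_change_state(struct iavf_adapter *adapter,
                                  enum iavf_state_t state)
    {
            if (adapter->state == state)
                    return;

            adapter->last_state = adapter->state;  /* remember the previous state */
            adapter->state = state;
    }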
2021-10-18mlx5: prevent 64bit divideJakub Kicinski
mlx5_tout_ms() returns a u64, so we can't directly divide it. This is not a problem here: @timeout, which is the value that actually matters, is already a ulong, so storing the return value of mlx5_tout_ms() in a ulong should be fine. This fixes: ERROR: modpost: "__udivdi3" [drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko] undefined! Fixes: 32def4120e48 ("net/mlx5: Read timeout values from DTOR") Link: https://lore.kernel.org/r/20211018172608.1069754-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
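A generic illustration of the constraint (not the mlx5 code itself): on 32-bit builds a plain '/' on a u64 emits a call to __udivdi3, which the kernel does not provide, so either store the value in an unsigned long first or use div_u64().

    #include <linux/math64.h>

    static unsigned long interval_from_timeout(u64 timeout_ms)
    {
            unsigned long t = timeout_ms;   /* value known to fit in a ulong */

            return t / 1000;                /* native division, no libcall */
            /* equivalently: return div_u64(timeout_ms, 1000); */
    }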
2021-10-18sfc: Fix reading non-legacy supported link modesErik Ekman
Everything except the first 32 bits was lost when the pause flags were added. This makes the 50000baseCR2 mode flag (bit 34) not appear. I have tested this with a 10G card (SFN5122F-R7) by modifying it to return a non-legacy link mode (10000baseCR). Signed-off-by: Erik Ekman <erik@kryo.se> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-18net: dsa: qca8k: fix delay applied to wrong cpu in parse_port_configAnsuel Smith
Fix delay settings applied to the wrong CPU port in parse_port_config. The delay value is set to the wrong index because the cpu_port_index is incremented too early. Start cpu_port_index at -1 so the correct value is applied; this also addresses the case of an invalid phy mode and an unavailable port. Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-18Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-nextDavid S. Miller
Pablo Neira Ayuso says:
====================
Netfilter/IPVS updates for net-next

The following patchset contains Netfilter/IPVS updates for net-next:

1) Add new run_estimation toggle to IPVS to stop the estimation_timer logic, from Dust Li.

2) Relax superfluous dynset check on NFT_SET_TIMEOUT.

3) Add egress hook, from Lukas Wunner.

4) Nowadays, almost all hook functions in x_table land just call the hook evaluation loop. Remove remaining hook wrappers from iptables and IPVS. From Florian Westphal.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-18net: phy: realtek: add support for RTL8365MB-VC internal PHYsAlvin Šipraga
The RTL8365MB-VC ethernet switch controller has 4 internal PHYs for its user-facing ports. All that is needed is to let the PHY driver core pick up the IRQ made available by the switch driver. Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-18net: dsa: realtek-smi: add rtl8365mb subdriver for RTL8365MB-VCAlvin Šipraga
This patch adds a realtek-smi subdriver for the RTL8365MB-VC 4+1 port 10/100/1000M switch controller. The driver has been developed based on a GPL-licensed OS-agnostic Realtek vendor driver known as rtl8367c found in the OpenWrt source tree. Despite the name, the RTL8365MB-VC has an entirely different register layout to the already-supported RTL8366RB ASIC. Notwithstanding this, the structure of the rtl8365mb subdriver is loosely based on the rtl8366rb subdriver. Like the 'rb, it establishes its own irqchip to handle cascaded PHY link status interrupts. The RTL8365MB-VC switch is capable of offloading a large number of features from the software, but this patch introduces only the most basic DSA driver functionality. The ports always function as standalone ports, with bridging handled in software. One more thing. Realtek's nomenclature for switches makes it hard to know exactly what other ASICs might be supported by this driver. The vendor driver goes by the name rtl8367c, but as far as I can tell, no chip actually exists under this name. As such, the subdriver is named rtl8365mb to emphasize the potentially limited support. But it is clear from the vendor sources that a number of other more advanced switches share a similar register layout, and further support should not be too hard to add given access to the relevant hardware. With this in mind, the subdriver has been written with as few assumptions about the particular chip as is reasonable. But the RTL8365MB-VC is the only hardware I have available, so some further work is surely needed. Co-developed-by: Michael Rasmussen <mir@bang-olufsen.dk> Signed-off-by: Michael Rasmussen <mir@bang-olufsen.dk> Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Arınç ÜNAL <arinc.unal@arinc9.com> Signed-off-by: David S. Miller <davem@davemloft.net>