path: root/drivers
2022-01-27mlxsw: spectrum: Guard against invalid local portsAmit Cohen
When processing events generated by the device's firmware, the driver protects itself from events reported for non-existent local ports, but not for the CPU port (local port 0), which exists but does not have all the fields of a regular local port. This can result in a NULL pointer dereference when trying to access 'struct mlxsw_sp_port' fields which are not initialized for the CPU port. Commit 63b08b1f6834 ("mlxsw: spectrum: Protect driver from buggy firmware") already handled such an issue by bailing early when processing a PUDE event reported for the CPU port. Generalize the approach by moving the check to a common function and making use of it in all relevant places. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
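A rough sketch of the kind of common check described above; the helper name, port type and use of MLXSW_PORT_CPU_PORT are illustrative, not the exact driver code:

    /* Illustrative sketch: one shared guard for every event handler. */
    static bool mlxsw_sp_local_port_is_valid(struct mlxsw_sp *mlxsw_sp,
                                             u16 local_port)
    {
            /* Local port 0 is the CPU port: it exists, but there is no
             * fully initialized struct mlxsw_sp_port behind it.
             */
            return local_port < mlxsw_core_max_ports(mlxsw_sp->core) &&
                   local_port != MLXSW_PORT_CPU_PORT;
    }

    /* Event handlers can then bail out early: */
    if (WARN_ON_ONCE(!mlxsw_sp_local_port_is_valid(mlxsw_sp, local_port)))
            return;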
2022-01-27mlxsw: core: Consolidate trap groups to a single event groupJiri Pirko
For event traps which are used in the core, avoid having a separate trap group for each event. Instead, introduce a single core event trap group and use it for all event traps. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27mlxsw: core: Move functions to register/unregister array of traps to core.cJiri Pirko
These functions belong in core.c alongside the functions that register/unregister a single trap. Move them there, and make the functions usable by other parts of the mlxsw code. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27mlxsw: core: Move basic trap group initialization from spectrum.cJiri Pirko
Instead of initializing the trap groups used by the core from spectrum.c via an op, do it directly in core.c. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27mlxsw: core: Move basic_trap_groups_set() call out of EMAD init codeJiri Pirko
The call initializes not only the EMAD trap group but other groups as well. Therefore, move it out of the EMAD init code and call it beforehand. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27mlxsw: spectrum: Set basic trap groups from an arrayJiri Pirko
Instead of calling the same code four times, do it in a loop over an array which contains the trap groups to be set. Signed-off-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
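A rough sketch of the loop-over-array pattern; the group list shown here is illustrative and may not match the exact set the driver configures:

    /* Illustrative sketch: configure each basic trap group from a table. */
    static const enum mlxsw_reg_htgt_trap_group basic_trap_groups[] = {
            MLXSW_REG_HTGT_TRAP_GROUP_EMAD,
            MLXSW_REG_HTGT_TRAP_GROUP_MFDE,
    };
    int i, err;

    for (i = 0; i < ARRAY_SIZE(basic_trap_groups); i++) {
            char htgt_pl[MLXSW_REG_HTGT_LEN];

            mlxsw_reg_htgt_pack(htgt_pl, basic_trap_groups[i],
                                MLXSW_REG_HTGT_INVALID_POLICER,
                                MLXSW_REG_HTGT_DEFAULT_PRIORITY,
                                MLXSW_REG_HTGT_DEFAULT_TC);
            err = mlxsw_reg_write(mlxsw_core, MLXSW_REG(htgt), htgt_pl);
            if (err)
                    return err;
    }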
2022-01-27net: amd-xgbe: ensure to reset the tx_timer_active flagRaju Rangoju
Make sure to reset the tx_timer_active flag in xgbe_stop(), otherwise a port restart may result in a Tx timeout due to the uncleared flag. Fixes: c635eaacbf77 ("amd-xgbe: Remove Tx coalescing") Co-developed-by: Sudheesh Mavila <sudheesh.mavila@amd.com> Signed-off-by: Sudheesh Mavila <sudheesh.mavila@amd.com> Signed-off-by: Raju Rangoju <Raju.Rangoju@amd.com> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20220127060222.453371-1-Raju.Rangoju@amd.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
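A rough sketch of the kind of change described, assuming the per-channel layout of the amd-xgbe driver; treat the timer API and field paths as illustrative:

    /* Illustrative sketch: on stop, cancel the per-channel Tx timer and
     * clear its bookkeeping so a later restart starts from a known state.
     */
    for (i = 0; i < pdata->channel_count; i++) {
            struct xgbe_channel *channel = pdata->channel[i];

            if (!channel->tx_ring)
                    continue;

            del_timer_sync(&channel->tx_timer);
            channel->tx_timer_active = 0;   /* the previously missing reset */
    }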
2022-01-27Merge tag 'mlx5-updates-2022-01-27' of ↵Jakub Kicinski
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux Saeed Mahameed says: ==================== mlx5-updates-2022-01-27 1) Dima adds an internal mlx5 steering callback per steering provider (FW vs SW steering) to advertise the steering capabilities implemented by each module. This helps upper modules in mlx5 to know what is and is not supported, without needing to know the underlying steering mode. The second patch is the use case where this interface is used to implement VLAN push/pop for the uplink with SW steering, which is not yet supported in FW mode. 2) Roi Dayan improves code readability and maintainability as a preparation step for multiple attribute instances per flow in the mlx5 TC module. Currently the mlx5_flow object contains a single mlx5_attr instance. However, multi table actions (e.g. CT) instantiate multiple attr instances. This is a refactoring series in preparation for supporting multiple attribute instances per flow. The commits prepare functions to get an attr instance instead of using flow->attr, and also use attr->flags where a flag is more relevant as an attr flag than as a flow flag considering there will be multiple attr instances, i.e. the CT and SAMPLE flags. * tag 'mlx5-updates-2022-01-27' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux: net/mlx5: VLAN push on RX, pop on TX net/mlx5: Introduce software defined steering capabilities net/mlx5: Remove unused TIR modify bitmask enums net/mlx5e: CT, Remove redundant flow args from tc ct calls net/mlx5e: TC, Store mapped tunnel id on flow attr net/mlx5e: Test CT and SAMPLE on flow attr net/mlx5e: Refactor eswitch attr flags to just attr flags net/mlx5e: CT, Don't set flow flag CT for ct clear flow net/mlx5e: TC, Hold sample_attr on stack instead of pointer net/mlx5e: TC, Reject rules with multiple CT actions net/mlx5e: TC, Refactor mlx5e_tc_add_flow_mod_hdr() to get flow attr net/mlx5e: TC, Pass attr to tc_act can_offload() net/mlx5e: TC, Split pedit offloads verify from alloc_tc_pedit_action() net/mlx5e: TC, Move pedit_headers_action to parse_attr net/mlx5e: Move counter creation call to alloc_flow_attr_counter() net/mlx5e: Pass attr arg for attaching/detaching encaps net/mlx5e: Move code chunk setting encap dests into its own function ==================== Link: https://lore.kernel.org/r/20220127204007.146300-1-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27Merge branch '1GbE' of ↵Jakub Kicinski
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue Tony Nguyen says: ==================== 1GbE Intel Wired LAN Driver Updates 2022-01-27 Christophe Jaillet removes useless DMA-32 fallback calls from applicable Intel drivers and simplifies code as a result of the removal. * '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue: igbvf: Remove useless DMA-32 fallback configuration igb: Remove useless DMA-32 fallback configuration igc: Remove useless DMA-32 fallback configuration ice: Remove useless DMA-32 fallback configuration iavf: Remove useless DMA-32 fallback configuration e1000e: Remove useless DMA-32 fallback configuration i40e: Remove useless DMA-32 fallback configuration ixgbevf: Remove useless DMA-32 fallback configuration ixgbe: Remove useless DMA-32 fallback configuration ixgb: Remove useless DMA-32 fallback configuration ==================== Link: https://lore.kernel.org/r/20220127215224.422113-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27net: amd-xgbe: Fix skb data length underflowShyam Sundar S K
A BUG_ON() in include/linux/skbuff.h will be triggered, leading to an intermittent kernel panic, when an skb length underflow is detected. Fix this by dropping the packet if such a length underflow is seen because of inconsistencies in the hardware descriptors. Fixes: 622c36f143fc ("amd-xgbe: Fix jumbo MTU processing on newer hardware") Suggested-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20220127092003.2812745-1-Shyam-sundar.S-k@amd.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
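A rough sketch of the guard described above; the variable names are illustrative of an Rx cleanup path rather than the exact amd-xgbe code:

    /* Illustrative sketch: if the descriptor-reported length would make the
     * skb data length go negative, drop the frame instead of tripping the
     * BUG_ON() in the skb helpers later on.
     */
    if (unlikely(desc_len < copied_len)) {
            netif_err(pdata, rx_err, netdev,
                      "inconsistent Rx descriptor length, dropping packet\n");
            dev_kfree_skb_any(skb);
            goto next_packet;
    }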
2022-01-27drm/kmb: Fix for build errors with Warray-boundsAnitha Chrisanthus
This fixes the following build error drivers/gpu/drm/kmb/kmb_plane.c: In function 'kmb_plane_atomic_disable': drivers/gpu/drm/kmb/kmb_plane.c:165:34: error: array subscript 3 is above array bounds of 'struct layer_status[2]' [-Werror=array-bounds] 165 | kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL2_ENABLE; | ~~~~~~~~~~~~~~~~~^~~~~~~~~~ In file included from drivers/gpu/drm/kmb/kmb_plane.c:17: drivers/gpu/drm/kmb/kmb_drv.h:61:41: note: while referencing 'plane_status' 61 | struct layer_status plane_status[KMB_MAX_PLANES]; | ^~~~~~~~~~~~ drivers/gpu/drm/kmb/kmb_plane.c:162:34: error: array subscript 2 is above array bounds of 'struct layer_status[2]' [-Werror=array-bounds] 162 | kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL1_ENABLE; | ~~~~~~~~~~~~~~~~~^~~~~~~~~~ In file included from drivers/gpu/drm/kmb/kmb_plane.c:17: drivers/gpu/drm/kmb/kmb_drv.h:61:41: note: while referencing 'plane_status' 61 | struct layer_status plane_status[KMB_MAX_PLANES]; | ^~~~~~~~~~~~ Fixes: 7f7b96a8a0a1 ("drm/kmb: Add support for KeemBay Display") Signed-off-by: Anitha Chrisanthus <anitha.chrisanthus@intel.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://patchwork.freedesktop.org/patch/msgid/20220127194227.2213608-1-anitha.chrisanthus@intel.com
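One way to keep such an access inside the array, sketched with the names from the warning; this is only an illustration of a bounds guard, not necessarily the shape of the actual fix:

    /* Illustrative sketch: never index plane_status[] past KMB_MAX_PLANES. */
    if (WARN_ON(plane_id >= KMB_MAX_PLANES))
            return;

    kmb->plane_status[plane_id].ctrl = ctrl;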
2022-01-27Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-01-27net/mlx5: VLAN push on RX, pop on TXDima Chumak
Some older NIC hardware isn't capable of doing VLAN push on RX and pop on TX. A workaround has been added in software to support it, but it has a performance penalty since it requires a hairpin + loopback. There is no such limitation with the newer NICs, so there is no need to pay the price of the workaround on them. With this change, the software workaround is disabled for the HW versions and steering modes that support native push/pop. Signed-off-by: Dima Chumak <dchumak@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5: Introduce software defined steering capabilitiesDima Chumak
There are two different internal steering modes, abstracted from the rest of the driver. In order to keep the upper layers of the driver agnostic to the differences in capabilities of the steering modes, this patch introduces the mlx5_fs_get_capabilities() API to check if a certain software defined capability is supported. It differs from the capabilities exposed by the hardware, as it takes into account the flow steering mode (SMFS/DMFS) currently enabled. This implementation supports only two capability flags: MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX They map to the DR_ACTION_STATE_PUSH_VLAN and DR_ACTION_STATE_POP_VLAN actions, implemented in SW steering earlier in commit f5e22be534e0 ("net/mlx5: DR, Split modify VLAN state to separate pop/push states"). This enables using VLAN pop/push without restrictions, e.g. doing VLAN pop on both TX and RX, compared to FW steering, which supports only VLAN pop on RX and push on TX. Other capabilities can be added in the future. Signed-off-by: Dima Chumak <dchumak@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
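A rough sketch of how an upper layer might consume the new API; the capability flag names come from the text above, while the exact function signature and the surrounding code are assumptions:

    /* Illustrative sketch: only fall back to the hairpin + loopback
     * workaround when the current steering mode cannot do native
     * VLAN push on RX / pop on TX.
     */
    u32 caps = mlx5_fs_get_capabilities(dev, MLX5_FLOW_NAMESPACE_FDB);
    bool native_vlan_push_pop =
            (caps & MLX5_FLOW_STEERING_CAP_VLAN_PUSH_ON_RX) &&
            (caps & MLX5_FLOW_STEERING_CAP_VLAN_POP_ON_TX);

    if (!native_vlan_push_pop)
            use_hairpin_workaround = true;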
2022-01-27net/mlx5e: CT, Remove redundant flow args from tc ct callsRoi Dayan
The flow arg is not being used so remove it. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Store mapped tunnel id on flow attrRoi Dayan
In preparation for multiple attr instances the tunnel_id should be attr specific and not flow specific. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: Test CT and SAMPLE on flow attrRoi Dayan
Currently the mlx5_flow object contains a single mlx5_attr instance. However, multi table actions (e.g. CT) instantiate multiple attr instances. Prepare for multiple attr instances by testing for CT or SAMPLE flag on attr flags instead of flow flag. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Chris Mi <cmi@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: Refactor eswitch attr flags to just attr flagsRoi Dayan
The flags are flow attr flags and not esw-specific attr flags. Refactor to remove the esw prefix and move them from eswitch.h to en_tc.h, where struct mlx5_flow_attr exists. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Vlad Buslov <vladbu@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: CT, Don't set flow flag CT for ct clear flowRoi Dayan
The ct clear action is a normal flow with a modify header that sets the registers to 0, so there is no need for any special handling in tc_ct.c. Parsing of the ct clear action still allocates mod acts to set 0 on the registers, and the driver continues to add a normal rule with a modify hdr context. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Hold sample_attr on stack instead of pointerRoi Dayan
In a later commit we are going to instantiate multiple attr instances for a flow instead of a single attr. Parsing a TC sample action allocates new memory, but there is no symmetric cleanup in the infrastructure. To avoid an asymmetric alloc/free, hold sample_attr as part of the flow attr instead of allocating it and holding it as a pointer. This avoids a cleanup leak when the sample action is not on the first attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Reject rules with multiple CT actionsRoi Dayan
The driver doesn't support multiple CT actions. Multiple CT clear actions are OK as they are redundant, also in combination with other CT actions. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Refactor mlx5e_tc_add_flow_mod_hdr() to get flow attrRoi Dayan
In a later commit we are going to instantiate multiple attr instances for a flow instead of a single attr. Make sure mlx5e_tc_add_flow_mod_hdr() uses the correct attr and not flow->attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Pass attr to tc_act can_offload()Roi Dayan
In a later commit we are going to instantiate multiple attr instances for a flow instead of a single attr. Make sure the parsing uses the correct attr and not flow->attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Split pedit offloads verify from alloc_tc_pedit_action()Roi Dayan
Split the pedit verify part into a new subfunction for better maintainability. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: TC, Move pedit_headers_action to parse_attrRoi Dayan
Move pedit_headers_action from the flow parse_state to the flow parse_attr. In a follow-up commit we are going to have multiple attrs per flow, and pedit_headers_action is unique per attr. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Maor Dickman <maord@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: Move counter creation call to alloc_flow_attr_counter()Roi Dayan
Move shared code to alloc_flow_attr_counter() for reuse by the next patches. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: Pass attr arg for attaching/detaching encapsRoi Dayan
In a later commit we will have multiple attr instances per flow, and we would like to pass a specific attr instance when setting encaps. Currently the mlx5_flow object contains a single mlx5_attr instance; however, multi table actions (e.g. CT) instantiate multiple attr instances. Currently mlx5e_attach/detach_encap() read the first attr instance from the flow instance. Modify the functions to receive the attr instance as a parameter, set by the calling function. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27net/mlx5e: Move code chunk setting encap dests into its own functionRoi Dayan
Split the code chunk that sets encap dests out of mlx5e_tc_add_fdb_flow() to make the function smaller for maintainability and reuse. For symmetry, do the same for mlx5e_tc_del_fdb_flow(). While at it, refactor the cleanup to first check for the encap flag, as is done when setting encap dests. Signed-off-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Oz Shlomo <ozsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-01-27Merge tag 'net-5.17-rc2' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net Pull networking fixes from Jakub Kicinski: "Including fixes from netfilter and can. Current release - new code bugs: - tcp: add a missing sk_defer_free_flush() in tcp_splice_read() - tcp: add a stub for sk_defer_free_flush(), fix CONFIG_INET=n - nf_tables: set last expression in register tracking area - nft_connlimit: fix memleak if nf_ct_netns_get() fails - mptcp: fix removing ids bitmap setting - bonding: use rcu_dereference_rtnl when getting active slave - fix three cases of sleep in atomic context in drivers: lan966x, gve - handful of build fixes for esoteric drivers after netdev->dev_addr was made const Previous releases - regressions: - revert "ipv6: Honor all IPv6 PIO Valid Lifetime values", it broke Linux compatibility with USGv6 tests - procfs: show net device bound packet types - ipv4: fix ip option filtering for locally generated fragments - phy: broadcom: hook up soft_reset for BCM54616S Previous releases - always broken: - ipv4: raw: lock the socket in raw_bind() - ipv4: decrease the use of shared IPID generator to decrease the chance of attackers guessing the values - procfs: fix cross-netns information leakage in /proc/net/ptype - ethtool: fix link extended state for big endian - bridge: vlan: fix single net device option dumping - ping: fix the sk_bound_dev_if match in ping_lookup" * tag 'net-5.17-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (86 commits) net: bridge: vlan: fix memory leak in __allowed_ingress net: socket: rename SKB_DROP_REASON_SOCKET_FILTER ipv4: remove sparse error in ip_neigh_gw4() ipv4: avoid using shared IP generator for connected sockets ipv4: tcp: send zero IPID in SYNACK messages ipv4: raw: lock the socket in raw_bind() MAINTAINERS: add missing IPv4/IPv6 header paths MAINTAINERS: add more files to eth PHY net: stmmac: dwmac-sun8i: use return val of readl_poll_timeout() net: bridge: vlan: fix single net device option dumping net: stmmac: skip only stmmac_ptp_register when resume from suspend net: stmmac: configure PTP clock source prior to PTP initialization Revert "ipv6: Honor all IPv6 PIO Valid Lifetime values" connector/cn_proc: Use task_is_in_init_pid_ns() pid: Introduce helper task_is_in_init_pid_ns() gve: Fix GFP flags when allocing pages net: lan966x: Fix sleep in atomic context when updating MAC table net: lan966x: Fix sleep in atomic context when injecting frames ethernet: seeq/ether3: don't write directly to netdev->dev_addr ethernet: 8390/etherh: don't write directly to netdev->dev_addr ...
2022-01-27igbvf: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
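A rough sketch of the before/after pattern this series applies in each driver's probe path; the error label and surrounding code are illustrative:

    /* Before: a 32-bit fallback that can never be taken. */
    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
    if (err) {
            err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                    dev_err(&pdev->dev, "No usable DMA configuration\n");
                    goto err_dma;
            }
    }

    /* After: if the 64-bit mask fails, the 32-bit mask would fail too. */
    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
    if (err) {
            dev_err(&pdev->dev, "No usable DMA configuration\n");
            goto err_dma;
    }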
2022-01-27igb: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27igc: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27ice: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27iavf: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27e1000e: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27i40e: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27ixgbevf: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27ixgbe: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27ixgb: Remove useless DMA-32 fallback configurationChristophe JAILLET
As stated in [1], dma_set_mask() with a 64-bit mask never fails if dev->dma_mask is non-NULL. So, if it fails, the 32-bit case will also fail for the same reason. So, if dma_set_mask_and_coherent() succeeds, 'pci_using_dac' is known to be 1. Simplify the code and remove some dead code accordingly. [1]: https://lkml.org/lkml/2021/6/7/398 Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-01-27ice: xsk: Borrow xdp_tx_active logic from i40eMaciej Fijalkowski
One of the things that commit 5574ff7b7b3d ("i40e: optimize AF_XDP Tx completion path") introduced was the @xdp_tx_active field. Its usage in i40e can be adapted to the ice driver and gives us positive performance results. If the descriptor that @next_dd points to has been sent by the HW (its DD bit is set), then we are sure that at least a quarter of the ring is ready to be cleaned. If @xdp_tx_active is 0, which means that the related xdp_ring is not used for XDP_{TX, REDIRECT} workloads, then we know how many XSK entries should be placed in the completion queue; IOW, walking through the ring can be skipped. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-9-maciej.fijalkowski@intel.com
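A rough sketch of the short-cut this enables in the Tx completion path; the field and variable names are illustrative:

    /* Illustrative sketch: if no XDP_TX/XDP_REDIRECT frames are in flight
     * on this ring, every completed descriptor came from the XSK Tx path,
     * so the per-descriptor walk can be skipped.
     */
    if (!xdp_ring->xdp_tx_active) {
            xsk_frames = completed_frames;
            goto skip_descriptor_walk;
    }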
2022-01-27ice: xsk: Improve AF_XDP ZC Tx and use batching APIMaciej Fijalkowski
Apply the logic that was done for regular XDP from commit 9610bd988df9 ("ice: optimize XDP_TX workloads") to the ZC side of the driver. On top of that, introduce batching to Tx that is inspired by i40e's implementation, with adjustments to the cleaning logic - take into account the NAPI budget in ice_clean_xdp_irq_zc(). Separating the stats structs onto separate cache lines seemed to improve the performance. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-8-maciej.fijalkowski@intel.com
2022-01-27ice: xsk: Avoid potential dead AF_XDP Tx processingMaciej Fijalkowski
Commit 9610bd988df9 ("ice: optimize XDP_TX workloads") introduced @next_dd and @next_rs to the ice_tx_ring struct. Currently, their state is not restored in ice_clean_tx_ring(), which was not causing any trouble as the XDP rings are gone after we're done with the XDP prog on the interface. For the upcoming usage of the mentioned fields in AF_XDP, this might expose us to a potentially dead Tx side. The scenario would look like the following (based on xdpsock): - two xdpsock instances are spawned in Tx mode - one of them is killed - the XDP prog is kept on the interface due to the other xdpsock still running * this means that the XDP rings stayed in place - xdpsock is launched again on the same queue id that was terminated on - the @next_dd and @next_rs settings are bogus, therefore the transmit side is broken To protect us from the above, restore the initial @next_rs and @next_dd values when cleaning the Tx ring. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-7-maciej.fijalkowski@intel.com
2022-01-27i40e: xsk: Move tmp desc array from driver to poolMagnus Karlsson
Move desc_array from the driver to the pool. The reason behind this is that we can then reuse this array as a temporary storage for descriptors in all zero-copy drivers that use the batched interface. This will make it easier to add batching to more drivers. i40e is the only driver that has a batched Tx zero-copy implementation, so no need to touch any other driver. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-6-maciej.fijalkowski@intel.com
2022-01-27ice: Make Tx threshold dependent on ring lengthMaciej Fijalkowski
XDP_TX workloads use the concept of a Tx threshold that indicates the interval at which the RS bit is set on descriptors, which in turn tells the HW to generate an interrupt to signal the completion of Tx on the HW side. It is currently based on a constant value of 32, which might not work out well for various ring sizes combined with, for example, the batch size that can be set via SO_BUSY_POLL_BUDGET. Internal tests based on AF_XDP showed that the most convenient setting of the mentioned threshold is a quarter of the ring length. Make use of the recently introduced ICE_RING_QUARTER macro and use this value as a substitute for ICE_TX_THRESH. Also align the ethtool -G callback so that the next_dd/next_rs fields are up to date with respect to the ring size. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-5-maciej.fijalkowski@intel.com
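A rough sketch of the threshold derivation; the macro name comes from the text above, while its exact definition here is an assumption:

    /* Illustrative sketch: derive the Tx threshold from the ring length
     * instead of the old ICE_TX_THRESH constant of 32.
     */
    #define ICE_RING_QUARTER(ring)  ((ring)->count >> 2)

    tx_thresh = ICE_RING_QUARTER(xdp_ring);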
2022-01-27ice: xsk: Handle SW XDP ring wrap and bump tail more oftenMaciej Fijalkowski
Currently, if ice_clean_rx_irq_zc() processed the whole ring and next_to_use != 0, then ice_alloc_rx_buf_zc() would not refill the whole ring even if the XSK buffer pool had enough free entries (either from the fill ring or the internal recycle mechanism) - this is because the ring wrap is not handled. Improve the logic in ice_alloc_rx_buf_zc() to address the problem above. Do not clamp the count of buffers that is passed to xsk_buff_alloc_batch() when next_to_use + buffer count >= rx_ring->count, but rather split it and have two calls to the mentioned function - one for the part up until the wrap and one for the part after the wrap. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-4-maciej.fijalkowski@intel.com
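A rough sketch of the two-call split around the ring wrap; variable names are illustrative, while xsk_buff_alloc_batch() is the existing XSK pool batch-allocation API:

    /* Illustrative sketch: instead of clamping the refill count at the
     * ring end, fill up to the wrap first and then continue from index 0
     * with the remainder.
     */
    if (ntu + count >= rx_ring->count) {
            u32 until_wrap = rx_ring->count - ntu;

            filled = xsk_buff_alloc_batch(pool, &xdp_bufs[ntu], until_wrap);
            if (filled == until_wrap)
                    filled += xsk_buff_alloc_batch(pool, &xdp_bufs[0],
                                                   count - until_wrap);
    } else {
            filled = xsk_buff_alloc_batch(pool, &xdp_bufs[ntu], count);
    }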
2022-01-27ice: xsk: Force rings to be sized to power of 2Maciej Fijalkowski
With the upcoming introduction of batching to the XSK data path, performance-wise it will be best to have the ring descriptor count aligned to a power of 2. Check if the ring sizes of the queue that the user is going to attach the XSK socket to fulfill the condition above. For the Tx side, although the check is done against the Tx queue while the socket will in the end be attached to the XDP queue, it is fine, since XDP queues get the ring->count setting from Tx queues. Suggested-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-3-maciej.fijalkowski@intel.com
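A rough sketch of such a check at XSK pool attach time; is_power_of_2() comes from <linux/log2.h> and the surrounding context is illustrative:

    /* Illustrative sketch: refuse to attach an XSK pool to a queue whose
     * ring sizes are not a power of 2.
     */
    if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
        !is_power_of_2(vsi->tx_rings[qid]->count)) {
            netdev_err(vsi->netdev,
                       "Please align ring sizes to a power of 2\n");
            return -EINVAL;
    }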
2022-01-27ice: Remove likely for napi_complete_doneMaciej Fijalkowski
Remove the likely before napi_complete_done as this is the unlikely case when busy-poll is used. Removing this has a positive performance impact for busy-poll and no negative impact to the regular case. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/bpf/20220125160446.78976-2-maciej.fijalkowski@intel.com
2022-01-27drm/vmwgfx: Fix stale file descriptors on failed usercopyMathias Krause
A failing usercopy of the fence_rep object will lead to a stale entry in the file descriptor table as put_unused_fd() won't release it. This enables userland to refer to a dangling 'file' object through that still valid file descriptor, leading to all kinds of use-after-free exploitation scenarios. Fix this by deferring the call to fd_install() until after the usercopy has succeeded. Fixes: c906965dee22 ("drm/vmwgfx: Add export fence to file descriptor support") Signed-off-by: Mathias Krause <minipli@grsecurity.net> Signed-off-by: Zack Rusin <zackr@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
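A rough sketch of the safe ordering described above, using the generic fd helpers; the object being copied and the surrounding function are abbreviated:

    /* Illustrative sketch: reserve the fd, but publish it in the file
     * descriptor table only after the copy to userspace has succeeded.
     */
    fd = get_unused_fd_flags(O_CLOEXEC);
    if (fd < 0)
            return fd;

    if (copy_to_user(user_ptr, &fence_rep, sizeof(fence_rep))) {
            put_unused_fd(fd);      /* nothing published, safe to drop */
            fput(file);
            return -EFAULT;
    }

    fd_install(fd, file);           /* point of no return */
    return 0;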
2022-01-27ptp: replace snprintf with sysfs_emitYang Guang
coccinelle report: ./drivers/ptp/ptp_sysfs.c:17:8-16: WARNING: use scnprintf or sprintf ./drivers/ptp/ptp_sysfs.c:390:8-16: WARNING: use scnprintf or sprintf Using sysfs_emit() instead of scnprintf() or sprintf() makes more sense. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Yang Guang <yang.guang5@zte.com.cn> Signed-off-by: David Yang <davidcomponentone@gmail.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
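A rough sketch of the conversion in one of the PTP sysfs show() callbacks; treat the attribute chosen here as illustrative:

    /* Illustrative sketch: sysfs_emit() knows the PAGE_SIZE buffer contract
     * of sysfs and replaces open-coded snprintf()/sprintf().
     */
    static ssize_t clock_name_show(struct device *dev,
                                   struct device_attribute *attr, char *page)
    {
            struct ptp_clock *ptp = dev_get_drvdata(dev);

            return sysfs_emit(page, "%s\n", ptp->info->name);
    }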
2022-01-27r8169: enable ASPM L1.2 if system vendor flags it as safeHeiner Kallweit
On some systems there are compatibility issues with ASPM L1.2 and the RTL8125, therefore this state is disabled by default. To allow for the L1.2 power saving on unaffected systems, Realtek provides vendors that have successfully tested ASPM L1.2 the option to flag this state as safe. According to Realtek, this flag will be set first on certain Chromebox devices. Suggested-by: Chun-Hao Lin <hau@realtek.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>