2022-09-30  net/mlx5e: Rename mlx5e_dma_info to prepare for removal of DMA address  (Maxim Mikityanskiy)
The next commit will remove the DMA address from the struct currently called mlx5e_dma_info, because the same value can be retrieved with page_pool_get_dma_addr(page) in almost all cases, with the notable exception of SHAMPO (HW GRO implementation) that modifies this address on the fly, after the initial allocation. To keep the SHAMPO logic intact, struct mlx5e_dma_info remains in the SHAMPO code, consisting of addr and page (XSK is not compatible with SHAMPO). The struct used in all other places is renamed to mlx5e_alloc_unit, allowing the next commit to remove the addr field without affecting SHAMPO. The new name means "allocation unit", and it's more appropriate after the field with the DMA address gets removed. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
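A minimal sketch of the resulting split, assuming the two-field layout described above (the in-tree structs carry more detail):

    /* Used everywhere except SHAMPO; the next commit drops addr. */
    struct mlx5e_alloc_unit {
        dma_addr_t addr;
        struct page *page;
    };

    /* SHAMPO keeps the old name, as it rewrites addr after allocation. */
    struct mlx5e_dma_info {
        dma_addr_t addr;
        struct page *page;
    };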
2022-09-30  net/mlx5e: Optimize the page cache reducing its size 2x  (Maxim Mikityanskiy)
RX page cache stores dma_info structs, which consist of a pointer to struct page and a DMA address. In fact, the DMA address is extracted from struct page using page_pool_get_dma_addr when a page is pushed to the cache. By moving this call to the point when a page is popped from the cache, we can avoid storing the DMA address in the cache, effectively halving the cache size without losing any functionality. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
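A hedged sketch of the pop path, with illustrative names (the driver's ring bookkeeping differs): the cache now holds only struct page pointers, and the DMA address is recovered at pop time:

    static bool rx_cache_get(struct mlx5e_page_cache *cache,
                             struct page **page, dma_addr_t *addr)
    {
        if (cache->head == cache->tail) /* cache empty */
            return false;

        *page = cache->pages[cache->head];
        cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);

        /* Recovered on pop instead of stored on push, halving the
         * size of each cache entry.
         */
        *addr = page_pool_get_dma_addr(*page);
        return true;
    }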
2022-09-30  net/mlx5e: Fix calculations for ICOSQ size  (Maxim Mikityanskiy)
WQEs must not cross page boundaries; they are padded with NOPs if they don't fit the page. mlx5e_mpwrq_total_umr_wqebbs doesn't take this padding into account, risking that not enough space is reserved. The padding is not straightforward to add to this calculation, because WQEs of different sizes may be mixed together in the queue. If each page ends with a big WQE that doesn't fit and requires at most its size minus 1 WQEBB of padding, the total space can be much bigger than when smaller WQEs take advantage of this padding. Replace the wrong exact calculation with the following estimation. Each padding can be at most the size of the maximum WQE used in the queue minus one WQEBB. Let's call the rest of the page "useful space". If we divide the total size of all needed WQEs by this useful space, rounding up, we'll get the number of pages, which is enough to contain all these WQEs. It's correct, because every WQE that appears on the boundary between two blocks of useful space would start in the useful space of one page and end in the padding of the same page, while our estimation reserves space for its tail in the next block of useful space, making the estimation no smaller than the real space occupied in the queue. The code actually uses a looser estimation: instead of taking the maximum size of all used WQE types minus 1 WQEBB, it takes the maximum hardware size of a WQE. It's done for simplicity and extensibility. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
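The estimation, in rough C with illustrative names (the driver's helpers are WQEBB-granular, as described above):

    #define WQEBBS_PER_PAGE (PAGE_SIZE / MLX5_SEND_WQE_BB)
    #define MAX_WQE_WQEBBS  (MLX5_SEND_WQE_MAX_SIZE / MLX5_SEND_WQE_BB)

    static u32 icosq_pages_needed(u32 total_wqebbs)
    {
        /* A WQE never crosses a page boundary, so at most
         * (max WQE size - 1 WQEBB) per page is lost to NOP padding;
         * the rest is "useful space".
         */
        u32 useful_wqebbs = WQEBBS_PER_PAGE - (MAX_WQE_WQEBBS - 1);

        /* Rounding up can only overestimate the pages needed. */
        return DIV_ROUND_UP(total_wqebbs, useful_wqebbs);
    }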
2022-09-30  xsk: Remove unused xsk_buff_discard  (Maxim Mikityanskiy)
The previous commit removed the last usage of xsk_buff_discard in mlx5e, so the function that is no longer used can be removed. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> CC: "Björn Töpel" <bjorn@kernel.org> CC: Magnus Karlsson <magnus.karlsson@intel.com> CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: xsk: Use KSM for unaligned XSK  (Maxim Mikityanskiy)
UMR MTTs used in striding RQ have certain alignment requirements. While it's guaranteed to work when UMR pages are aligned to the UMR page size, in practice it works when UMR pages are aligned to 8 bytes. However, even that is not enough flexibility for the unaligned mode of XSK. This patch leverages KSM to map UMR pages without alignment requirements when unaligned XSK is active. The downside is that KSM entries are twice as big as MTTs, which limits the maximum WQE size, so regular RQs and aligned XSK continue using MTTs. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5: Add MLX5_FLEXIBLE_INLEN to safely calculate cmd inlen  (Maxim Mikityanskiy)
Some commands use a flexible array after a common header. Add a macro to safely calculate the total input length of the command, detecting overflows and printing errors with specific values when such overflows happen. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
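A simplified sketch of such a calculation using the kernel's overflow helpers; the in-tree macro additionally prints the offending values (the function name here is hypothetical):

    #include <linux/overflow.h>

    /* Total cmd inlen: fixed header plus count flexible entries.
     * Returns a negative errno if the arithmetic overflows.
     */
    static int flexible_inlen(size_t fixed, size_t entry_sz, size_t count)
    {
        size_t bytes;

        if (check_mul_overflow(count, entry_sz, &bytes) ||
            check_add_overflow(fixed, bytes, &bytes) ||
            bytes > INT_MAX)
            return -ENOMEM;

        return bytes;
    }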
2022-09-30  net/mlx5e: Keep a separate MKey for striding RQ  (Maxim Mikityanskiy)
Currently, rq->mkey_be keeps a big-endian value of either the PA MKey (for legacy RQ, no address translation) or MTT MKey (for striding RQ, direct address translation). Striding RQ stores the same value in rq->umr_mkey in the native endianness. The next commit will make striding RQ use KSM MKey (indirect address translation) for the unaligned mode of XSK, which will require storing both KSM MKey and PA MKey in the RQ struct. This commit optimizes fields of mlx5e_rq: umr_mkey is removed (it's redundant), mkey_be always points to the PA MKey, and mpwqe.umr_mkey_be points to the MTT MKey (or to the KSM MKey, starting from the next commit). Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: xsk: Use XSK frame size as striding RQ page size  (Maxim Mikityanskiy)
XSK RQs support striding RQ linear mode, but the stride size is always set to PAGE_SIZE. It may be larger than the XSK frame size, unnecessarily reducing the useful space in a WQE, but more importantly causing UMEM data corruption in certain cases. Normally, a stride size bigger than the XSK frame size is not a problem if the hardware enforces the MTU. However, traffic between vports skips the hardware MTU check, and oversized packets may be received. If an oversized packet is bigger than the XSK frame but not bigger than the stride, it will overwrite the adjacent UMEM region. If the packet spans more than one stride, those strides can be recycled for reuse, so it's not a problem when the XSK frame size matches the stride size. To reduce the impact of the above issue, attempt to use an MTT page size for striding RQ that matches the XSK frame size, allowing to safely use 2048-byte frames on an up-to-date firmware. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  net/mlx5e: Use runtime page_shift for striding RQ  (Maxim Mikityanskiy)
This commit allows striding RQ to determine the MTT page size at runtime, instead of sticking to the compile-time PAGE_SIZE. This functionality will be used by a following commit that adjusts the MTT page size to the XSK frame size. Stick with PAGE_SIZE for XSK on legacy RQ, as frag_stride is not used in the data path; it only helps calculate how pages are partitioned into fragments, and PAGE_SIZE will ensure each fragment starts at the beginning of a new allocation unit (XSK frame). Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  xsk: Expose min chunk size to drivers  (Maxim Mikityanskiy)
Drivers should be aware of the range of valid UMEM chunk sizes to be able to allocate their internal structures of an appropriate size. It will be used by mlx5e in the following patches. Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> CC: "Björn Töpel" <bjorn@kernel.org> CC: Magnus Karlsson <magnus.karlsson@intel.com> CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-30  ip6_vti: Remove the space before the comma  (Hongbin Wang)
There should be no space before the comma Signed-off-by: Hongbin Wang <wh_bin@126.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  Merge branch 'Mediatek-mt8188'  (David S. Miller)
Jianguo Zhang says: ==================== Mediatek ethernet patches for mt8188 Changes in v7: v7: 1) Add 'Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>' info in the commit message of patches 'dt-bindings: net: snps,dwmac: add new property snps,clk-csr', 'arm64: dts: mediatek: mt2712e: Update the name of property 'clk_csr'' and 'net: stmmac: add a parse for new property 'snps,clk-csr''. v6: 1) Update commit message of patch 'dt-bindings: net: snps,dwmac: add new property snps,clk-csr' 2) Add a parse for new property 'snps,clk-csr' in patch 'net: stmmac: add a parse for new property 'snps,clk-csr'' v5: 1) Rename the property 'clk_csr' as 'snps,clk-csr' in the binding file as per Krzysztof Kozlowski's comment. 2) Add DTS patch 'arm64: dts: mediatek: mt2712e: Update the name of property 'clk_csr'' as per Krzysztof Kozlowski's comment. 3) Add driver patch 'net: stmmac: Update the name of property 'clk_csr'' as per Krzysztof Kozlowski's comment. v4: 1) Update the commit message of patch 'dt-bindings: net: snps,dwmac: add clk_csr property' as per Krzysztof Kozlowski's comment. v3: 1) List the names of SoCs mt8188 and mt8195 in the correct order as per AngeloGioacchino Del Regno's comment. 2) Add patch version info as per Krzysztof Kozlowski's comment. v2: 1) Delete patch 'stmmac: dwmac-mediatek: add support for mt8188' as per Krzysztof Kozlowski's comment. 2) Update patch 'dt-bindings: net: mediatek-dwmac: add support for mt8188' as per Krzysztof Kozlowski's comment. 3) Add clk_csr property to fix the warning ('clk_csr' was unexpected) when running 'make dtbs_check'. v1: 1) Add ethernet driver entry for mt8188. 2) Add binding document for ethernet on mt8188. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: stmmac: add a parse for new property 'snps,clk-csr'  (Jianguo Zhang)
Parse the new property 'snps,clk-csr' first, because the new property is documented in the binding file; if that fails, fall back to the old property 'clk_csr' for the legacy case. Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  arm64: dts: mediatek: mt2712e: Update the name of property 'clk_csr'  (Jianguo Zhang)
Update the name of the property 'clk_csr' to 'snps,clk-csr' to align with the property name in the binding file. Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  dt-bindings: net: snps,dwmac: add new property snps,clk-csr  (Jianguo Zhang)
Add a description for the new property snps,clk-csr in the binding file. Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  dt-bindings: net: mediatek-dwmac: add support for mt8188  (Jianguo Zhang)
Add a binding document for the ethernet controller on mt8188. Signed-off-by: Jianguo Zhang <jianguo.zhang@mediatek.com> Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net/mlx5: Fix spelling mistake "syndrom" -> "syndrome"  (Colin Ian King)
There is a spelling mistake in a devlink_health_report message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: bna: Fix spelling mistake "muliple" -> "multiple"  (Colin Ian King)
There is a spelling mistake in a literal string in the array bnad_net_stats_strings. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  ibmveth: Ethtool set queue support  (Nick Child)
Implement channel management functions to allow dynamic addition and removal of transmit queues. The `ethtool --show-channels` and `ethtool --set-channels` commands can be used to get and set the number of queues, respectively. Allow adding as many transmit queues as there are available processors, but never more than the hard maximum of 16. The number of receive queues is one and cannot be modified. Depending on whether the requested number of queues is larger or smaller than the current value, either allocate or free long term buffers. Since long term buffer construction and destruction can occur in two different areas, from either channel set requests or device open/close, define functions for performing this work. If allocation of a new buffer fails, then attempt to revert to the previous number of queues. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
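A hedged sketch of the reporting half of this interface (the struct ethtool_channels fields come from the ethtool core; the limits follow the commit message):

    static void ibmveth_get_channels(struct net_device *netdev,
                                     struct ethtool_channels *channels)
    {
        /* As many TX queues as online CPUs, hard-capped at 16. */
        channels->max_tx = min(16U, (unsigned int)num_online_cpus());
        channels->tx_count = netdev->real_num_tx_queues;

        /* The single RX queue is fixed. */
        channels->max_rx = 1;
        channels->rx_count = 1;
    }

A request such as `ethtool -L eth0 tx 8` would then reach the set-side callback, which allocates or frees long term buffers and reverts to the previous count if allocation fails.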
2022-09-30  ibmveth: Implement multi queue on xmit  (Nick Child)
The `ndo_start_xmit` function is protected by a spinlock on the tx queue being used to transmit the skb. Allow concurrent calls to `ndo_start_xmit` by using more than one tx queue. This allows for greater throughput when several jobs are trying to transmit data. Introduce 16 tx queues (leave single rx queue as is) which each correspond to one DMA mapped long term buffer. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  ibmveth: Copy tx skbs into a premapped buffer  (Nick Child)
Rather than DMA mapping and unmapping every outgoing skb, copy the skb into a buffer that was mapped during the driver's open function. Copying the skb and its frags has proven to be more time efficient than mapping and unmapping. As a result, performance increases by 3-5 Gbit/s. Allocate and DMA map one contiguous 64KB buffer at `ndo_open`. This buffer is maintained until `ibmveth_close` is called. This buffer is large enough to hold the largest possible linear skb. During `ndo_start_xmit`, copy the skb and all of its frags into the contiguous buffer. By manually linearizing all the socket buffers, time is saved during memcpy as well as through more efficient handling in FW. As a result, we no longer need to worry about the firmware limitation of handling a max of 6 frags. So we only need to maintain 1 descriptor instead of 6 and can hardcode 0 for the other 5 descriptors during h_send_logical_lan. Since DMA allocation/mapping issues can no longer arise in xmit functions, we can further reduce code size by removing the need for a bounce buffer on DMA errors. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
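A sketch of the manual linearization step, assuming one premapped buffer per queue (the helper name is illustrative):

    /* Copy the linear part and every frag back-to-back into the long
     * term buffer, so one descriptor can describe the whole frame.
     */
    static unsigned int copy_skb_to_ltb(struct sk_buff *skb, void *ltb)
    {
        unsigned int off = skb_headlen(skb);
        int i;

        skb_copy_from_linear_data(skb, ltb, off);

        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
            const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

            memcpy(ltb + off, skb_frag_address(frag),
                   skb_frag_size(frag));
            off += skb_frag_size(frag);
        }

        return off; /* total bytes placed in the buffer */
    }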
2022-09-30  bnx2: Fix spelling mistake "bufferred" -> "buffered"  (Colin Ian King)
There are spelling mistakes in two literal strings. Fix these. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: bridge: assign path_cost for 2.5G and 5G link speed  (Steven Hsieh)
As 2.5G and 5G Ethernet ports become more common and affordable, these ports are being used in LAN bridge devices. STP port_cost() is missing a path_cost assignment for these link speeds, causing the highest cost of 100 to be used. This results in the lower-speed port being picked when there is a loop between a 5G and a 1G port. Original path_cost: 10G=2, 1G=4, 100m=19, 10m=100. Adjusted path_cost: 10G=2, 5G=3, 2.5G=4, 1G=5, 100m=19, 10m=100; speeds greater than 10G use 1. Signed-off-by: Steven Hsieh <steven.hsieh@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
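The adjusted mapping, sketched as the kind of switch port_cost() performs (costs taken from the commit message; the surrounding ethtool link query is omitted):

    static int port_cost_for_speed(int speed)
    {
        switch (speed) {
        case SPEED_10000: return 2;
        case SPEED_5000:  return 3;
        case SPEED_2500:  return 4;
        case SPEED_1000:  return 5;
        case SPEED_100:   return 19;
        case SPEED_10:    return 100;
        }

        if (speed > SPEED_10000)
            return 1;   /* anything faster than 10G */
        return 100;     /* unknown speed: keep the worst cost */
    }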
2022-09-30  net: lan966x: Fix spelling mistake "tarffic" -> "traffic"  (Colin Ian King)
There is a spelling mistake in a netdev_err message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net-next: skbuff: refactor pskb_pull  (Richard Gobert)
pskb_may_pull already contains all of the checks performed by pskb_pull. Use pskb_may_pull for validation in pskb_pull, eliminating the duplication and making __pskb_pull obsolete. Replace __pskb_pull with pskb_pull where applicable. Signed-off-by: Richard Gobert <richardbgobert@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
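The refactored helper then reads roughly as follows (a sketch based on the description above, not necessarily the exact in-tree diff):

    static inline void *pskb_pull(struct sk_buff *skb, unsigned int len)
    {
        /* pskb_may_pull() already checks len against skb->len and
         * pulls non-linear data into the header when needed.
         */
        if (!pskb_may_pull(skb, len))
            return NULL;

        skb->len -= len;
        return skb->data += len;
    }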
2022-09-30  net: bonding: Convert to use sysfs_emit()/sysfs_emit_at() APIs  (Wang Yufen)
Follow the advice of Documentation/filesystems/sysfs.rst: show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
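The pattern being applied, as a generic sketch (the attribute and value are hypothetical):

    static ssize_t example_show(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
        int val = 42; /* placeholder for the real driver state */

        /* sysfs_emit() knows buf is a full PAGE_SIZE sysfs buffer
         * and refuses to overrun it, unlike a raw sprintf().
         */
        return sysfs_emit(buf, "%d\n", val);
    }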
2022-09-30  net-sysfs: Convert to use sysfs_emit() APIs  (Wang Yufen)
Follow the advice of Documentation/filesystems/sysfs.rst: show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  net: tun: Convert to use sysfs_emit() APIs  (Wang Yufen)
Follow the advice of Documentation/filesystems/sysfs.rst: show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  Merge branch 'net-tsnep-multiqueue'  (David S. Miller)
Gerhard Engleder says: ==================== tsnep: multi queue support and some other improvements Add support for additional TX/RX queues along with RX flow classification support. The binding is extended to allow additional interrupts for the additional TX/RX queues. Also, dma-coherent is allowed as a minor improvement. RX path optimisation is done by using page pool, as preparation for future XDP support. v4: - rework dma-coherent commit message (Krzysztof Kozlowski) - fixed order of interrupt-names in binding (Krzysztof Kozlowski) - add line break between examples in binding (Krzysztof Kozlowski) - add RX_CLS_LOC_ANY support to RX flow classification v3: - now with changes in cover letter v2: - use netdev_name() (Jakub Kicinski) - use ENOENT if RX flow rule is not found (Jakub Kicinski) - eliminate return code of tsnep_add_rule() (Jakub Kicinski) - remove commit with lazy refill due to depletion problem (Jakub Kicinski) ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Use page pool for RX  (Gerhard Engleder)
Use page pool for RX buffer handling. This makes the RX path more efficient and is required prework for future XDP support. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Add EtherType RX flow classification support  (Gerhard Engleder)
Received Ethernet frames are assigned to the first RX queue by default. Based on EtherType, Ethernet frames can be assigned to other RX queues. This enables processing of real-time Ethernet protocols on dedicated RX queues. Add an RX flow classification interface for EtherType based RX queue assignment. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Support multiple TX/RX queue pairs  (Gerhard Engleder)
Support additional TX/RX queue pairs if a dedicated interrupt is available. Interrupts are detected by name in the device tree. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  tsnep: Move interrupt from device to queue  (Gerhard Engleder)
For multiple queues, multiple interrupts shall be used. Therefore, rework the global interrupt into per-queue interrupts. Every interrupt name shall contain the interface name and queue information. To get a valid interface name, the interrupt request needs to be done during open, as in other drivers. Additionally, this allows the removal of some initialisation checks in the interrupt handler. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  dt-bindings: net: tsnep: Allow additional interrupts  (Gerhard Engleder)
Additional TX/RX queue pairs require dedicated interrupts. Extend binding with additional interrupts. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-30  dt-bindings: net: tsnep: Allow dma-coherent  (Gerhard Engleder)
Within SoCs like ZynqMP, FPGA logic can be connected to different kinds of AXI master ports. Cache-coherent AXI master ports are also available. The property "dma-coherent" is used to signal that DMA is cache coherent. Add the "dma-coherent" property to allow the configuration of cache-coherent DMA. Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2022-09-29  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  (Jakub Kicinski)
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2022-09-28 (ice) Arkadiusz implements a single pin initialization function, checking feature bits, instead of having separate device functions, and updates sub-device IDs for recognizing E810T devices. Martyna adds support for switchdev filters on the VLAN priority field. * '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue: ice: Add support for VLAN priority filters in switchdev ice: support features on new E810T variants ice: Merge pin initialization of E810 and E810T adapters ==================== Link: https://lore.kernel.org/r/20220928203217.411078-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: phy: Convert to use sysfs_emit() APIs  (Wang Yufen)
Follow the advice of Documentation/filesystems/sysfs.rst: show() should only use sysfs_emit() or sysfs_emit_at() when formatting the value to be returned to user space. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/1664364860-29153-1-git-send-email-wangyufen@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  Merge branch 'add-tc-taprio-support-for-queuemaxsdu'  (Jakub Kicinski)
Vladimir Oltean says: ==================== Add tc-taprio support for queueMaxSDU The tc-taprio offload mode supported by the Felix DSA driver has limitations surrounding its guard bands. The initial discussion was at: https://lore.kernel.org/netdev/c7618025da6723418c56a54fe4683bd7@walle.cc/ with the latest status being that we now have a vsc9959_tas_guard_bands_update() method which makes a best-guess attempt at how much useful space to reserve for packet scheduling in a taprio interval, and how much to reserve for guard bands. IEEE 802.1Q actually does offer a tunable variable (queueMaxSDU) which can determine the max MTU supported per traffic class. In turn we can determine the size we need for the guard bands, depending on the queueMaxSDU. This way we can make the guard band of small taprio intervals smaller than one full MTU worth of transmission time, if we know that said traffic class will transport only smaller packets. As discussed with Gerhard Engleder, the queueMaxSDU may also be useful in limiting the latency on an endpoint, if some of the TX queues are outside of the control of the Linux driver. https://patchwork.kernel.org/project/netdevbpf/patch/20220914153303.1792444-11-vladimir.oltean@nxp.com/ Allow input of queueMaxSDU through netlink into tc-taprio, offload it to the hardware I have access to (LS1028A), and (implicitly) deny non-default values to everyone else. Kurt Kanzenbach has also kindly tested and shared a patch to offload this to hellcreek. v3 at: https://patchwork.kernel.org/project/netdevbpf/cover/20220927234746.1823648-1-vladimir.oltean@nxp.com/ v2 at: https://patchwork.kernel.org/project/netdevbpf/list/?series=679954&state=* v1 at: https://patchwork.kernel.org/project/netdevbpf/cover/20220914153303.1792444-1-vladimir.oltean@nxp.com/ ==================== Link: https://lore.kernel.org/r/20220928095204.2093716-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: enetc: offload per-tc max SDU from tc-taprio  (Vladimir Oltean)
The driver currently sets the PTCMSDUR register statically to the max MTU supported by the interface. Keep this logic if tc-taprio is absent or if the max_sdu for a traffic class is 0, and follow the requested max SDU size otherwise. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: enetc: use common naming scheme for PTGCR and PTGCAPR registers  (Vladimir Oltean)
The Port Time Gating Control Register (PTGCR) and Port Time Gating Capability Register (PTGCAPR) have definitions in the driver which aren't in line with the other registers. Rename these. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: enetc: cache accesses to &priv->si->hw  (Vladimir Oltean)
The &priv->si->hw construct dereferences 2 pointers and makes lines longer than they need to be, in turn making the code harder to read. Replace &priv->si->hw accesses with a "hw" variable when there are 2 or more accesses within a function that dereference this. This includes loops, since &priv->si->hw is a loop invariant. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
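The pattern, sketched with an illustrative register write loop (enetc_wr() stands in for the driver's register helpers; the register and loop are hypothetical):

    /* Before: two pointer dereferences at every use site. */
    for (i = 0; i < priv->num_tx_rings; i++)
        enetc_wr(&priv->si->hw, ENETC_PTCMSDUR(i), max_sdu[i]);

    /* After: hoist the loop-invariant dereference once per function. */
    struct enetc_hw *hw = &priv->si->hw;

    for (i = 0; i < priv->num_tx_rings; i++)
        enetc_wr(hw, ENETC_PTCMSDUR(i), max_sdu[i]);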
2022-09-29  net: dsa: hellcreek: Offload per-tc max SDU from tc-taprio  (Kurt Kanzenbach)
Add support for configuring the max SDU per priority and per port. If not specified, keep the default. Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: dsa: hellcreek: refactor hellcreek_port_setup_tc() to use switch/case  (Vladimir Oltean)
The following patch will need to make this function also respond to TC_QUERY_BASE, so make the processing more structured around the tc_setup_type. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: dsa: felix: offload per-tc max SDU from tc-taprio  (Vladimir Oltean)
Our current vsc9959_tas_guard_bands_update() algorithm has a limitation imposed by the hardware design. To avoid packet overruns between one gate interval and the next (which would add jitter for scheduled traffic in the next gate), we configure the switch to use guard bands. These are as large as the largest packet which can possibly be transmitted. The problem is that at tc-taprio intervals of sizes comparable to a guard band, there isn't an obvious place in which to split the interval between the useful portion (for scheduling) and the guard band portion (where scheduling is blocked). For example, a 10 us interval at 1Gbps allows 1225 octets to be transmitted. We currently split the interval between the bare minimum of 33 ns useful time (required to schedule a single packet) and the rest as guard band. But 33 ns of useful scheduling time will only allow a single packet to be sent, be that packet 1200 octets in size or 60 octets in size. It is impossible to send two 60-octet frames in the 10 us window. But if we reduced the guard band (and therefore the maximum allowable SDU size) to 5 us, the useful time for scheduling would also be 5 us, so more packets could be scheduled. The hardware's inflexibility of not scheduling according to individual packet lengths must unfortunately propagate to the user, who needs to tune the queueMaxSDU values if they want to fit more small packets into a 10 us interval, rather than one large packet. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net/sched: taprio: allow user input of per-tc max SDU  (Vladimir Oltean)
IEEE 802.1Q clause 12.29.1.1 "The queueMaxSDUTable structure and data types" and 8.6.8.4 "Enhancements for scheduled traffic" talk about the existence of a per traffic class limitation of maximum frame sizes, with a fallback on the port-based MTU. As far as I am able to understand, the 802.1Q Service Data Unit (SDU) represents the MAC Service Data Unit (MSDU, i.e. L2 payload), excluding any number of prepended VLAN headers which may be otherwise present in the MSDU. Therefore, the queueMaxSDU is directly comparable to the device MTU (1500 means L2 payload sizes are accepted, or frame sizes of 1518 octets, or 1522 plus one VLAN header). Drivers which offload this are directly responsible for translating into other units of measurement. To keep the fast path checks optimized, we keep 2 arrays in the qdisc, one for max_sdu translated into frame length (so that it's comparable to skb->len), and another for offloading and for dumping back to the user. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
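The fast path can be sketched as the following fragment, assuming q->max_frm_len[] holds the per-tc max SDU already translated into frame length (as described above; surrounding enqueue code omitted):

    /* In the enqueue path: drop frames exceeding the class limit. */
    prio = skb->priority;
    tc = netdev_get_prio_tc_map(dev, prio);

    if (skb->len > q->max_frm_len[tc])
        return qdisc_drop(skb, sch, to_free);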
2022-09-29  net/sched: query offload capabilities through ndo_setup_tc()  (Vladimir Oltean)
When adding optional new features to Qdisc offloads, existing drivers must reject the new configuration until they are coded up to act on it. Since modifying all drivers in lockstep with the changes in the Qdisc can create problems of its own, it would be nice if there existed an automatic opt-in mechanism for offloading optional features. Jakub proposes that we multiplex one more kind of call through ndo_setup_tc(): one where the driver populates a Qdisc-specific capability structure. First user will be taprio in further changes. Here we are introducing the definitions for the base functionality. Link: https://patchwork.kernel.org/project/netdevbpf/patch/20220923163310.3192733-3-vladimir.oltean@nxp.com/ Suggested-by: Jakub Kicinski <kuba@kernel.org> Co-developed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net/tipc: Remove unused struct distr_queue_item  (Yuan Can)
After commit 09b5678c778f ("tipc: remove dead code in tipc_net and relatives"), struct distr_queue_item is no longer used and can be removed. Signed-off-by: Yuan Can <yuancan@huawei.com> Acked-by: Jon Maloy <jmaloy@redhat.com> Link: https://lore.kernel.org/r/20220928085636.71749-1-yuancan@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: skb: introduce and use a single page frag cache  (Paolo Abeni)
After commit 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny skbs") we are observing 10-20% regressions in performance tests with small packets. The perf trace points to high pressure on the slab allocator. This change tries to improve the allocation schema for small packets using an idea originally suggested by Eric: a new per CPU page frag is introduced and used in __napi_alloc_skb to cope with small allocation requests. To ensure that the above does not lead to excessive truesize underestimation, the frag size for small allocations is inflated to 1K and all the above is restricted to builds with 4K page size. Note that we need to update accordingly the run-time check introduced with commit fd9ea57f4e95 ("net: add napi_get_frags_check() helper"). Alex suggested a smart page refcount schema to reduce the number of atomic operations and deal properly with pfmemalloc pages. Under small packet UDP flood, I measure a 15% peak tput increase. Suggested-by: Eric Dumazet <eric.dumazet@gmail.com> Suggested-by: Alexander H Duyck <alexanderduyck@fb.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Link: https://lore.kernel.org/r/6b6f65957c59f86a353fc09a5127e83a32ab5999.1664350652.git.pabeni@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-09-29  net: sched: cls_u32: Avoid memcpy() false-positive warning  (Kees Cook)
To work around a misbehavior of the compiler's ability to see into composite flexible array structs (as detailed in the coming memcpy() hardening series[1]), use unsafe_memcpy(), as the sizing, bounds-checking, and allocation are all very tightly coupled here. This silences the false-positive reported by syzbot: memcpy: detected field-spanning write (size 80) of single field "&n->sel" at net/sched/cls_u32.c:1043 (size 16) [1] https://lore.kernel.org/linux-hardening/20220901065914.1417829-2-keescook@chromium.org Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jiri Pirko <jiri@resnulli.us> Reported-by: syzbot+a2c4601efc75848ba321@syzkaller.appspotmail.com Link: https://lore.kernel.org/lkml/000000000000a96c0b05e97f0444@google.com/ Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Link: https://lore.kernel.org/r/20220927153700.3071688-1-keescook@chromium.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
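The patched call then takes roughly this shape; unsafe_memcpy()'s trailing argument is a justification comment recording why the bounds are known to be safe:

    unsafe_memcpy(&n->sel, s, sel_size,
                  /* A composite flex-array structure destination,
                   * correctly sized via struct_size() at allocation
                   * and validated by netlink parsing before the copy.
                   */);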
2022-09-29  dt-bindings: net: snps,dwmac: Document stmmac-axi-config subnode  (Marek Vasut)
The stmmac-axi-config subnode is present in multiple dwmac instance DTs. Document its content per the snps,axi-config property description, which is a phandle to this subnode. Signed-off-by: Marek Vasut <marex@denx.de> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20220927012449.698915-1-marex@denx.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>