path: root/drivers
2023-03-29  tty: serial: fsl_lpuart: avoid checking for transfer complete when UARTCTRL_SBK is asserted in lpuart32_tx_empty  (Sherry Sun)
According to the LPUART RM, the Transmission Complete flag becomes 0 when a break character is queued by writing 1 to CTRL[SBK], so the check for transmission complete must be skipped while UARTCTRL_SBK is asserted; otherwise lpuart32_tx_empty() may never report TIOCSER_TEMT. Commit 2411fd94ceaa ("tty: serial: fsl_lpuart: skip waiting for transmission complete when UARTCTRL_SBK is asserted") only fixed this in lpuart32_set_termios(); fix it in lpuart32_tx_empty() as well. Fixes: 380c966c093e ("tty: serial: fsl_lpuart: add 32-bit register interface support") Cc: stable <stable@kernel.org> Signed-off-by: Sherry Sun <sherry.sun@nxp.com> Link: https://lore.kernel.org/r/20230323054415.20363-1-sherry.sun@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
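A minimal sketch of the idea, not the exact driver diff; register and helper names follow the fsl_lpuart 32-bit interface:

	static unsigned int lpuart32_tx_empty(struct uart_port *port)
	{
		u32 stat = lpuart32_read(port, UARTSTAT);
		u32 ctrl = lpuart32_read(port, UARTCTRL);

		/* TC stays 0 while a break is queued via CTRL[SBK], so
		 * report the transmitter as empty in that case too. */
		if (ctrl & UARTCTRL_SBK)
			return TIOCSER_TEMT;

		return (stat & UARTSTAT_TC) ? TIOCSER_TEMT : 0;
	}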
2023-03-29  serial: 8250: Prevent starting up DMA Rx on THRI interrupt  (Ilpo Järvinen)
Hans de Goede reported that Bluetooth adapters (HCIs) connected over a UART connection failed due to corrupted Rx payload. The problem was narrowed down to DMA Rx starting on a UART_IIR_THRI interrupt. The problem occurs even though LSR has the DR bit set, which is a precondition for attempting to start DMA Rx in the first place. From a debug patch:
  [x.807834] 8250irq: iir=cc lsr+saved=60 received=0/15 ier=0f dma_t/rx/err=0/0/0
  [x.808676] 8250irq: iir=c2 lsr+saved=61 received=0/0 ier=0f dma_t/rx/err=0/0/0
  [x.808776] 8250irq: iir=cc lsr+saved=60 received=1/12 ier=0d dma_t/rx/err=0/1/0
  [x.808870] Bluetooth: hci0: Frame reassembly failed (-84)
In the debug snippet, the received field indicates 1 byte was transferred over DMA and 12 bytes after that with non-DMA Rx. The sole byte DMA handled was corrupted (it gets zeroed), which leads to the HCI failure. This problem became apparent after commit e8ffbb71f783 ("serial: 8250: use THRE & __stop_tx also with DMA") changed the Tx stop behavior: Tx stop is now triggered from a THRI interrupt. Even though this problem looks like a HW bug, the fix does not add a UART_BUG_xx flag to the driver, because it seems useful in general to avoid starting DMA when there are only a few bytes to transfer; skipping DMA for small transfers avoids the extra overhead DMA incurs. Thus, don't set up DMA Rx on UART_IIR_THRI but leave it to a subsequent interrupt with an Rx-related IIR value. By returning false from handle_rx_dma(), the DMA vs non-DMA decision is postponed until either UART_IIR_RDI (a FIFO threshold worth of bytes awaiting) or UART_IIR_TIMEOUT (inter-character timeout) triggers later, which makes it easier to discern whether the number of bytes warrants starting DMA or not. Reported-by: Hans de Goede <hdegoede@redhat.com> Tested-by: Hans de Goede <hdegoede@redhat.com> Fixes: e8ffbb71f783 ("serial: 8250: use THRE & __stop_tx also with DMA") Cc: stable@vger.kernel.org Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Acked-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20230317103034.12881-1-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
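A simplified sketch of the approach; the real handle_rx_dma() in 8250_port.c has more cases, this only illustrates the THRI bail-out:

	static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir)
	{
		switch (iir & 0x3f) {
		case UART_IIR_THRI:
			/* Too early to judge how much Rx data is pending;
			 * fall back to PIO for now and let a later RDI or
			 * timeout interrupt make the DMA decision. */
			return false;
		}
		return up->dma->rx_dma(up);
	}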
2023-03-29  tty: serial: sh-sci: Fix transmit end interrupt handler  (Biju Das)
On the SCI port type, the fourth interrupt is the transmit end interrupt, whereas other port types have a break interrupt in that position. So, shuffle the interrupts to fix the transmit end interrupt handler. Fixes: e1d0be616186 ("sh-sci: Add h8300 SCI") Cc: stable <stable@kernel.org> Suggested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Link: https://lore.kernel.org/r/20230317150403.154094-1-biju.das.jz@bp.renesas.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-03-29  usb: xhci: tegra: fix sleep in atomic call  (Wayne Chang)
When we set the dual-role port to Host mode, we observed the following splat:
  [ 167.057718] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:229
  [ 167.057872] Workqueue: events tegra_xusb_usb_phy_work
  [ 167.057954] Call trace:
  [ 167.057962]  dump_backtrace+0x0/0x210
  [ 167.057996]  show_stack+0x30/0x50
  [ 167.058020]  dump_stack_lvl+0x64/0x84
  [ 167.058065]  dump_stack+0x14/0x34
  [ 167.058100]  __might_resched+0x144/0x180
  [ 167.058140]  __might_sleep+0x64/0xd0
  [ 167.058171]  slab_pre_alloc_hook.constprop.0+0xa8/0x110
  [ 167.058202]  __kmalloc_track_caller+0x74/0x2b0
  [ 167.058233]  kvasprintf+0xa4/0x190
  [ 167.058261]  kasprintf+0x58/0x90
  [ 167.058285]  tegra_xusb_find_port_node.isra.0+0x58/0xd0
  [ 167.058334]  tegra_xusb_find_port+0x38/0xa0
  [ 167.058380]  tegra_xusb_padctl_get_usb3_companion+0x38/0xd0
  [ 167.058430]  tegra_xhci_id_notify+0x8c/0x1e0
  [ 167.058473]  notifier_call_chain+0x88/0x100
  [ 167.058506]  atomic_notifier_call_chain+0x44/0x70
  [ 167.058537]  tegra_xusb_usb_phy_work+0x60/0xd0
  [ 167.058581]  process_one_work+0x1dc/0x4c0
  [ 167.058618]  worker_thread+0x54/0x410
  [ 167.058650]  kthread+0x188/0x1b0
  [ 167.058672]  ret_from_fork+0x10/0x20
The function tegra_xusb_padctl_get_usb3_companion eventually calls tegra_xusb_find_port and this in turn calls kasprintf which might sleep and so cannot be called from an atomic context. Fix this by moving the call to tegra_xusb_padctl_get_usb3_companion to the tegra_xhci_id_work function where it is really needed. Fixes: f836e7843036 ("usb: xhci-tegra: Add OTG support") Cc: stable@vger.kernel.org Signed-off-by: Wayne Chang <waynec@nvidia.com> Signed-off-by: Haotien Hsu <haotienh@nvidia.com> Link: https://lore.kernel.org/r/20230327095548.1599470-1-haotienh@nvidia.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-03-29  net: wwan: iosm: fixes 7560 modem crash  (M Chetan Kumar)
ModemManager/apps probing the wwan0xmmrpc0 port on the 7560 modem results in a modem crash. The 7560 modem FW uses the MBIM interface for control command communication, whereas the 7360 uses the Intel RPC interface, so disable the wwan0xmmrpc0 port for the 7560. Fixes: d08b0f8f46e4 ("net: wwan: iosm: add rpc interface for xmm modems") Reported-and-tested-by: Martin <mwolf@adiumentum.com> Link: https://bugzilla.kernel.org/show_bug.cgi?id=217200 Signed-off-by: M Chetan Kumar <m.chetan.kumar@linux.intel.com> Signed-off-by: Shane Parslow <shaneparslow808@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  net: hns3: support wake on lan configuration and query  (Hao Lan)
The HNS3 driver supports Wake-on-LAN, which can wake the server from the power-off state to the power-on state via a magic packet or magic security packet.
ChangeLog:
v1->v2: Deleted the debugfs function that overlaps with the ethtool function, as suggested by Andrew Lunn.
v2->v3: Return the wol configuration stored in the driver, suggested by Alexander H Duyck.
v3->v4: Add a helper to go from netdev to the local struct, suggested by Simon Horman and Jakub Kicinski.
Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: Hao Lan <lanhao@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  sfc: add offloading of 'foreign' TC (decap) rules  (Edward Cree)
A 'foreign' rule is one for which the net_dev is not the sfc netdevice or any of its representors. The driver registers indirect flow blocks for tunnel netdevs so that it can offload decap rules. For example:
  tc filter add dev vxlan0 parent ffff: protocol ipv4 flower \
    enc_src_ip 10.1.0.2 enc_dst_ip 10.1.0.1 \
    enc_key_id 1000 enc_dst_port 4789 \
    action tunnel_key unset \
    action mirred egress redirect dev $REPRESENTOR
When notified of a rule like this, register an encap match on the IP and dport tuple (creating an Outer Rule table entry) and insert an MAE action rule to perform the decapsulation and deliver to the representee. Moved efx_tc_delete_rule() below efx_tc_flower_release_encap_match() to avoid the need for a forward declaration. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  sfc: add code to register and unregister encap matches  (Edward Cree)
Add a hashtable to detect duplicate and conflicting matches. If the match is not a duplicate, call MAE functions to add/remove it from the OR table. The calling code is not added yet, so mark the new functions as unused. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  sfc: add functions to insert encap matches into the MAE  (Edward Cree)
An encap match corresponds to an entry in the exact-match Outer Rule table; the lookup response includes the encap type (protocol) allowing the hardware to continue parsing into the inner headers. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  sfc: handle enc keys in efx_tc_flower_parse_match()  (Edward Cree)
Translate the fields from flow dissector into struct efx_tc_match. In efx_tc_flower_replace(), reject filters that match on them, because only 'foreign' filters (i.e. those for which the ingress dev is not the sfc netdev or any of its representors, e.g. a tunnel netdev) can use them. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  sfc: add notion of match on enc keys to MAE machinery  (Edward Cree)
Extend the MAE caps check to validate that the hardware supports these outer-header matches where used by the driver. Extend efx_mae_populate_match_criteria() to fill in the outer rule ID and VNI match fields. Nothing yet populates these match fields, nor creates outer rules. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  sfc: document TC-to-EF100-MAE action translation concepts  (Edward Cree)
Includes an explanation of the lifetime of the 'cursor' action-set `act`. Signed-off-by: Edward Cree <ecree.xilinx@gmail.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  macvlan: Add netlink attribute for broadcast cutoff  (Herbert Xu)
Make the broadcast cutoff configurable through netlink. Note that macvlan is weird because there is no central device for us to configure (the lowerdev could be anything). So all the options are duplicated over what could be thousands of child devices. IFLA_MACVLAN_BC_QUEUE_LEN took the approach of taking the maximum of all child device settings. This is unnecessary as we could simply store the option in the port device and take the last child device that gets updated as the value to use. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  macvlan: Skip broadcast queue if multicast with single receiver  (Herbert Xu)
As it stands all broadcast and multicast packets are queued and processed in a work queue. This is so that we don't overwhelm the receive softirq path by generating thousands of packets or more (see commit 412ca1550cbe "macvlan: Move broadcasts into a work queue"). As such all multicast packets will be delayed, even if they will be received by a single macvlan device. As using a workqueue is not free in terms of latency, we should avoid this where possible. This patch adds a new filter to determine which addresses should be delayed and which ones won't. This is done using a crude counter of how many times an address has been added to the macvlan port (ha->synced). For now if an address has been added more than once, then it will be considered to be broadcast. This could be tuned further by making this threshold configurable. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-29  ipv6: Remove in6addr_any alternatives.  (Kuniyuki Iwashima)
Some code defines the IPv6 wildcard address as a local variable and uses it with memcmp() or ipv6_addr_equal(). Let's use in6addr_any and ipv6_addr_any() instead. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
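For illustration, the kind of conversion this enables (a generic example, not a specific call site from the patch; sk_addr and handle_wildcard() are made up):

	/* before: open-coded wildcard address */
	struct in6_addr zero = {};

	if (!memcmp(&sk_addr, &zero, sizeof(zero)))
		handle_wildcard();

	/* after: shared constant / helper */
	if (ipv6_addr_equal(&sk_addr, &in6addr_any))
		handle_wildcard();
	/* or simply */
	if (ipv6_addr_any(&sk_addr))
		handle_wildcard();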
2023-03-29  vsock: support sockmap  (Bobby Eshleman)
This patch adds sockmap support for vsock sockets. It is intended to be usable by all transports, but only the virtio and loopback transports are implemented. SOCK_STREAM, SOCK_DGRAM, and SOCK_SEQPACKET are all supported. Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-03-28  Merge tag 'mlx5-updates-2023-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux  (Jakub Kicinski)
Saeed Mahameed says:
====================
mlx5-updates-2023-03-20
mlx5 dynamic msix
This patch series adds support for dynamic msix vectors allocation in mlx5.
Eli Cohen says:
================
The following series of patches modifies mlx5_core to work with the dynamic MSIX API. Currently, mlx5_core allocates all the interrupt vectors it needs and distributes them amongst the consumers. With the introduction of dynamic MSIX support, which allows for allocation of interrupts more than once, we now allocate vectors as we need them. This allows other drivers running on top of mlx5_core to allocate interrupt vectors for their own use. An example for this is mlx5_vdpa, which uses these vectors to propagate interrupts directly from the hardware to the vCPU [1]. As a preparation for using this series, a use after free issue is fixed in lib/cpu_rmap.c and the allocator for rmap entries has been modified. A complementary API for irq_cpu_rmap_add() has also been introduced.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux.git/patch/?id=0f2bf1fcae96a83b8c5581854713c9fc3407556e
================
* tag 'mlx5-updates-2023-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: Provide external API for allocating vectors
  net/mlx5: Use one completion vector if eth is disabled
  net/mlx5: Refactor calculation of required completion vectors
  net/mlx5: Move devlink registration before mlx5_load
  net/mlx5: Use dynamic msix vectors allocation
  net/mlx5: Refactor completion irq request/release code
  net/mlx5: Improve naming of pci function vectors
  net/mlx5: Use newer affinity descriptor
  net/mlx5: Modify struct mlx5_irq to use struct msi_map
  net/mlx5: Fix wrong comment
  net/mlx5e: Coding style fix, add empty line
  lib: cpu_rmap: Add irq_cpu_rmap_remove to complement irq_cpu_rmap_add
  lib: cpu_rmap: Use allocator for rmap entries
  lib: cpu_rmap: Avoid use after free on rmap->obj array entries
====================
Link: https://lore.kernel.org/r/20230324231341.29808-1-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-03-28  net: ethernet: 8390: axnet_cs: remove unused xfer_count variable  (Tom Rix)
clang with W=1 reports:
  drivers/net/ethernet/8390/axnet_cs.c:653:9: error: variable 'xfer_count' set but not used [-Werror,-Wunused-but-set-variable]
      int xfer_count = count;
          ^
This variable is not used so remove it. Signed-off-by: Tom Rix <trix@redhat.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230327235423.1777590-1-trix@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-03-28  net: ethernet: mtk_eth_soc: fix tx throughput regression with direct 1G links  (Felix Fietkau)
Using the QDMA tx scheduler to throttle tx to line speed works fine for switch ports, but apparently caused a regression on non-switch ports. Based on a number of tests, it seems that this throttling can be safely dropped without re-introducing the issues on switch ports that the tx scheduling changes resolved. Link: https://lore.kernel.org/netdev/trinity-92c3826f-c2c8-40af-8339-bc6d0d3ffea4-1678213958520@3c-app-gmx-bs16/ Fixes: f63959c7eec3 ("net: ethernet: mtk_eth_soc: implement multi-queue support for per-port queues") Reported-by: Frank Wunderlich <frank-w@public-files.de> Reported-by: Daniel Golle <daniel@makrotopia.org> Tested-by: Daniel Golle <daniel@makrotopia.org> Signed-off-by: Felix Fietkau <nbd@nbd.name> Link: https://lore.kernel.org/r/20230324140404.95745-1-nbd@nbd.name Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-03-28  Revert "sh_eth: remove open coded netif_running()"  (Wolfram Sang)
This reverts commit ce1fdb065695f49ef6f126d35c1abbfe645d62d5. It turned out this actually introduces a race condition. netif_running() is not a suitable check for get_stats. Reported-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230327152112.15635-1-wsa+renesas@sang-engineering.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-03-28  net/mlx5e: Fix build break on 32bit  (Saeed Mahameed)
The cited commit caused the following build break in mlx5 due to a change in the size of MAX_SKB_FRAGS:
  error: format '%lu' expects argument of type 'long unsigned int', but argument 7 has type 'unsigned int' [-Werror=format=]
Fix this by explicit casting. Fixes: 3948b05950fd ("net: introduce a config option to tweak MAX_SKB_FRAGS") Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20230328200723.125122-1-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
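The warning class and the usual remedy, sketched on a made-up print (the actual call site is in the mlx5e driver):

	/* MAX_SKB_FRAGS is now config-derived, so its integer type can differ
	 * between builds; cast explicitly to match the format specifier. */
	netdev_warn(netdev, "max skb frags: %lu\n",
		    (unsigned long)MAX_SKB_FRAGS);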
2023-03-28  net/mlx5e: RX, Remove unnecessary recycle parameter and page_cache stats  (Dragos Tatulea)
The recycle parameter used during page release is no longer necessary: the page pool can detect when the page cannot be recycled to the cache or ring without any outside hint. The page pool will also take care of cleaning up after itself once all the inflight pages have been released. So no need to explicitly release pages to the system. Remove the internal page_cache stats as the mlx5e_page_cache struct no longer exists. Delete the documentation entries along with the stats. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Break the wqe bulk refill in smaller chunks  (Dragos Tatulea)
To avoid overflowing the page pool's cache, don't release the whole bulk, which is usually larger than the cache refill size. Instead, group release+alloc into cache refill units that allow releasing to the cache and then allocating from the cache. A refill_unit variable is added as an iteration unit over the wqe_bulk when doing release+alloc. For a single ring, single core, default MTU (1500) TCP stream test, the number of pages allocated from the cache directly (rx_pp_recycle_cached) increases from 0% to 52%:
  +--------------------------+---------+---------+
  | Page Pool stats (/sec)   |  Before |   After |
  +--------------------------+---------+---------+
  | rx_pp_alloc_fast         | 2145422 | 2193802 |
  | rx_pp_alloc_slow         |       2 |       0 |
  | rx_pp_alloc_empty        |       2 |       0 |
  | rx_pp_alloc_refill       |   34059 |   16634 |
  | rx_pp_alloc_waive        |       0 |       0 |
  | rx_pp_recycle_cached     |       0 | 1145818 |
  | rx_pp_recycle_cache_full |       0 |       0 |
  | rx_pp_recycle_ring       | 2179361 | 1064616 |
  | rx_pp_recycle_ring_full  |     121 |       0 |
  +--------------------------+---------+---------+
With this patch, the performance for legacy rq for the above test is back to baseline. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
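Roughly, the release+alloc interleaving looks like the loop below; the helper names are illustrative rather than the exact driver symbols:

	for (i = 0; i < wqe_bulk; i += refill_unit) {
		int refill = min_t(int, refill_unit, wqe_bulk - i);

		/* release first so the pages land in the page pool cache ... */
		mlx5e_release_rx_wqes(rq, ix + i, refill);
		/* ... then the allocation right after can be served from it */
		mlx5e_alloc_rx_wqes(rq, ix + i, refill);
	}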
2023-03-28  net/mlx5e: RX, Increase WQE bulk size for legacy rq  (Dragos Tatulea)
Deferred page release was added to legacy rq, but its desired effect (the driver releases the last fragment to the page pool cache) is not yet visible due to the WQE bulks being too small. This patch increases the WQE bulk size to span 512 KB and clips it to one quarter of the rx queue size. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
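In other words, the bulk size is chosen roughly as below (illustrative arithmetic with made-up variable names, not the driver's exact expression):

	/* span about 512 KB worth of WQEs, but never more than a quarter of the RQ */
	wqe_bulk = min_t(u32, (512 * 1024) / wqe_buffer_size, rq_size / 4);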
2023-03-28  net/mlx5e: RX, Split off release path for xsk buffers for legacy rq  (Dragos Tatulea)
Don't mix xsk buffer releases with page releases anymore. This is needed for handling of deferred page release. Add a new bulk free function for xsk buffers from wqe frags. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Defer page release in legacy rq for better recycling  (Dragos Tatulea)
Currently, fragmented pages from the page pool can be released in two ways:
1) In the mlx5e driver when trimming off the unused fragments AND the associated skb fragments have been released. This path allows recycling of pages to the page pool cache (allow_direct == true).
2) On the skb release path (last fragment release), which will always release pages to the page pool ring (allow_direct == false).
Whichever is releasing the last fragment will be decisive on where the page gets released: the cache or the ring. So we obviously want to maximize for doing the release from 1. This patch does that by deferring the release of page fragments right before requesting new ones from the page pool. A flag is added to make sure that there's no release before first alloc and that XDP_TX fragments are not released prematurely. This is a preparation patch that doesn't unlock the performance improvements yet. A followup patch will do that. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Change wqe last_in_page field from bool to bit flags  (Dragos Tatulea)
Change the bool flag to a bitfield as we'll use it in a downstream patch in the series to add signaling about skipping a fragment release. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
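The shape of the change, with placeholder struct and flag names standing in for the ones added in this series:

	/* before */
	struct frag_info {
		u32  offset;
		bool last_in_page;
	};

	/* after: a flags byte leaves room for more per-frag signals */
	#define FRAG_LAST_IN_PAGE	BIT(0)
	#define FRAG_SKIP_RELEASE	BIT(1)	/* used by a later patch */

	struct frag_info {
		u32 offset;
		u8  flags;
	};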
2023-03-28  net/mlx5e: RX, Defer page release in striding rq for better recycling  (Dragos Tatulea)
Currently, for striding RQ, fragmented pages from the page pool can get released in two ways:
1) In the mlx5e driver when trimming off the unused fragments AND the associated skb fragments have been released. This path allows recycling of pages to the page pool cache (allow_direct == true).
2) On the skb release path (last fragment release), which will always release pages to the page pool ring (allow_direct == false).
Whichever is releasing the last fragment will be decisive on where the page gets released: the cache or the ring. So we obviously want to maximize for doing the release from 1. This patch does that by deferring the release of page fragments right before requesting new ones from the page pool. Extra care needs to be taken for the corner cases:
* On first call, make sure that release is not called. The skip_release_bitmap is used for this purpose.
* On rq shutdown, make sure that all wqes that were not in the linked list are released.
For a single ring, single core, default MTU (1500) TCP stream test, the number of pages allocated from the cache directly (rx_pp_recycle_cached) increases from 31% to 98%:
  +--------------------------+---------+----------+
  | Page Pool stats (/sec)   |  Before |    After |
  +--------------------------+---------+----------+
  | rx_pp_alloc_fast         | 2137754 |  2261033 |
  | rx_pp_alloc_slow         |      47 |        9 |
  | rx_pp_alloc_empty        |      47 |        9 |
  | rx_pp_alloc_refill       |   23230 |      819 |
  | rx_pp_alloc_waive        |       0 |        0 |
  | rx_pp_recycle_cached     |  672182 |  2209015 |
  | rx_pp_recycle_cache_full |    1789 |        0 |
  | rx_pp_recycle_ring       | 1485848 |    52259 |
  | rx_pp_recycle_ring_full  |    3003 |      584 |
  +--------------------------+---------+----------+
With this patch, the performance in striding rq for the above test is back to baseline. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Rename xdp_xmit_bitmap to a more generic name  (Dragos Tatulea)
The xdp_xmit_bitmap currently serves only one purpose: to avoid releasing pages that are still in use due to XDP TX. A following patch will use this bitmap in a slightly different context but for the same purpose. So rename the bitmap to a more generic name that reflects the purpose not the context. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Enable skb page recycling through the page_pool  (Dragos Tatulea)
Start using the page_pool skb recycling api to recycle all pages back to the page pool and stop using atomic page reference counting. The mlx5e driver used to manage in-flight pages using page refcounting: for each fragment there were 2 atomic write operations happening (one for building the skb and one on skb release). The page_pool api introduced a method to track page fragments more optimally:
* The page's pp_fragment_count is set to a large bias on page alloc (1 x atomic write operation).
* The driver tracks the actual page fragments in a non atomic variable.
* When the skb is recycled, pp_fragment_count is decremented (atomic write operation).
* When the page is released in the driver, the unused number of fragments (relative to the bias) is deducted from pp_fragment_count (atomic write operation).
* The last page defragmentation will only be an atomic read.
So in total there are `number of fragments + 1` atomic write ops, as opposed to the previous `2 * frags` atomic write ops. Pages are wrapped in a mlx5e_frag_page structure which also contains the number of fragments. This makes it easy to count the fragments in the driver. This change brings performance improvements for the case when the old rx page_cache had low recycling rates due to head of queue blocking. For an iperf3 TCP test with a single stream, on a single core (iperf and receive queue running on the same core), the following improvements can be noticed:
* Striding rq:
  - before (net-next baseline): bitrate = 30.1 Gbits/sec
  - after: bitrate = 31.4 Gbits/sec (diff: 4.14 %)
* Legacy rq:
  - before (net-next baseline): bitrate = 30.2 Gbits/sec
  - after: bitrate = 33.0 Gbits/sec (diff: 8.48 %)
There are 2 temporary performance degradations introduced:
1) TCP streams that had a good recycling rate with the old page_cache have a degradation for both striding and linear rq. This is due to very low page pool cache recycling: the pages are released during skb recycle, which will release pages to the page pool ring for safety. The following patches in this series will tackle this problem by deferring the page release in the driver to increase the chance of having pages recycled to the cache.
2) XDP performance is now lower (4-5 %) due to the higher number of atomic operations used for fragment management. But this opens the door for supporting multiple packets per page in XDP, which will bring a big gain.
Otherwise, performance is similar to baseline. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
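The accounting scheme, roughly; the bias constant and the rq/frag_page variables paraphrase the driver's structures, while the page_pool calls are the generic fragment API:

	/* alloc: one atomic write sets a large bias on the page */
	page = page_pool_dev_alloc_pages(rq->page_pool);
	page_pool_fragment_page(page, PP_FRAG_BIAS);
	frag_page->frags = 0;			/* plain, non-atomic counter */

	/* per fragment handed to the stack: non-atomic bookkeeping only */
	frag_page->frags++;

	/* driver release: return the unused part of the bias in one go */
	page_pool_defrag_page(page, PP_FRAG_BIAS - frag_page->frags);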
2023-03-28  net/mlx5e: RX, Enable dma map and sync from page_pool allocator  (Dragos Tatulea)
Remove driver dma mapping and unmapping of pages. Let the page_pool api do it. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Remove internal page_cache  (Dragos Tatulea)
This patch removes the internal rx page_cache and uses the generic page_pool api only. It used to be that the page_pool couldn't handle all the mlx5 driver usecases, but with the introduction of skb recycling and page fragmentation in the page_pool, a full switch can now be made. Some benefits of this transition:
* Better page recycling in the cases when the page_cache was suffering from head of queue blocking. The page_pool doesn't have this issue.
* DMA mapping/unmapping can be managed by the page_pool.
* mlx5e_rq size reduced by more than 50% due to the page_cache array being deleted.
This patch only removes the page_cache. Downstream patches will enable the required page_pool features and will add further fine-tuning. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Store SHAMPO header pages in array  (Dragos Tatulea)
Save allocated SHAMPO header pages to an array to which the mlx5e_dma_info page will point. This change is a preparation for introducing the mlx5e_frag_page structure in a downstream patch. There's no new functionality introduced. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Remove alloc unit layout constraint for striding rq  (Dragos Tatulea)
This change removes the usage of the mlx5e_alloc_unit union for striding rq. The change is more straightforward than for legacy rq as the alloc units union is already in place. This patch only moves things around: instead of an array of unions, make it a union of arrays. This has the effect that each mlx5e_mpw_info will allocate the largest possible size of the array member. For xsk this means that the array of xdp_buff pointers for the wqe will still be contiguous, but there will be some extra unused bytes at the end of the array. A further patch in the series will add the mlx5e_frag_page struct, for which the described size constraint will no longer hold. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  net/mlx5e: RX, Remove alloc unit layout constraint for legacy rq  (Dragos Tatulea)
The mlx5e_alloc_unit union is conveniently used to store arrays of pointers to struct page or struct xdp_buff (for xsk). The union is currently expected to have the size of a pointer for xsk batch allocations to work. This is convenient for the current state of the code but makes it impossible to add a structure of a different size to the alloc unit. A further patch in the series will add the mlx5e_frag_page struct, for which the described size constraint will no longer hold. This change removes the usage of the mlx5e_alloc_unit union for legacy rq:
- A union of arrays is introduced (mlx5e_alloc_units) to replace the array of unions, allowing structures of different sizes.
- Each fragment has a pointer to a unit in the mlx5e_alloc_units array.
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
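The layout change reduced to its essentials; member and constant names here are illustrative, not the driver's exact ones:

	/* before: an array of unions; every slot must stay pointer-sized */
	union alloc_unit {
		struct page	*page;
		struct xdp_buff	*xsk;
	};
	union alloc_unit au[MAX_WQE_FRAGS];

	/* after: a union of arrays; each view stays contiguous, and a
	 * larger per-entry struct can be added without breaking xsk */
	union alloc_units {
		struct page	*pages[MAX_WQE_FRAGS];
		struct xdp_buff	*xsk_buffs[MAX_WQE_FRAGS];
	};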
2023-03-28  net/mlx5e: RX, Remove mlx5e_alloc_unit argument in page allocation  (Dragos Tatulea)
Change internal page cache and page pool api to use a struct page ** instead of a mlx5e_alloc_unit *. This is the first change in a series which is meant to remove the mlx5e_alloc_unit altogether. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2023-03-28  thermal: core: Drop excessive lockdep_assert_held() calls  (Rafael J. Wysocki)
The lockdep_assert_held() calls added to cooling_device_stats_setup() and cooling_device_stats_destroy() by commit 790930f44289 ("thermal: core: Introduce thermal_cooling_device_update()") trigger false-positive lockdep reports in code paths that are not subject to race conditions (before cooling device registration and after cooling device removal). For this reason, remove the lockdep_assert_held() calls from both cooling_device_stats_setup() and cooling_device_stats_destroy() and add one to thermal_cooling_device_stats_reinit() that has to be called under the cdev lock. Fixes: 790930f44289 ("thermal: core: Introduce thermal_cooling_device_update()") Link: https://lore.kernel.org/linux-acpi/ZCIDTLFt27Ei7+V6@ideak-desk.fi.intel.com Reported-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
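The resulting assertion placement, approximately (the function and helper names come from the commit text; the body is a sketch):

	void thermal_cooling_device_stats_reinit(struct thermal_cooling_device *cdev)
	{
		/* only this path can race with other cdev users, so assert here */
		lockdep_assert_held(&cdev->lock);

		cooling_device_stats_destroy(cdev);
		cooling_device_stats_setup(cdev);
	}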
2023-03-28  Merge tag 's390-6.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds)
Pull s390 fixes from Vasily Gorbik:
- Fix an error handling issue with PTRACE_GET_LAST_BREAK request so that -EFAULT is returned if put_user() fails, instead of ignoring it
- Fix a build race for the modules_prepare target when CONFIG_EXPOLINE_EXTERN is enabled by reintroducing the dependence on scripts
- Fix a memory leak in vfio_ap device driver
- Add missing earlyclobber annotations to __clear_user() inline assembly to prevent incorrect register allocation
* tag 's390-6.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/ptrace: fix PTRACE_GET_LAST_BREAK error handling
  s390: reintroduce expoline dependence to scripts
  s390/vfio-ap: fix memory leak in vfio_ap device driver
  s390/uaccess: add missing earlyclobber annotations to __clear_user()
2023-03-28  ice: fix invalid check for empty list in ice_sched_assoc_vsi_to_agg()  (Jakob Koschel)
The code implicitly assumes that the list iterator finds a correct handle. If 'vsi_handle' is not found, 'old_agg_vsi_info' ends up pointing to a bogus memory location. For safety, a separate list iterator variable should be used to make the != NULL check on 'old_agg_vsi_info' correct under any circumstances. Additionally, Linus proposed to avoid any use of the list iterator variable after the loop, in the attempt to move the list iterator variable declaration into the macro to avoid any potential misuse after the loop. Using it in a pointer comparison after the loop is undefined behavior and should be omitted if possible [1]. Fixes: 37c592062b16 ("ice: remove the VSI info from previous agg") Link: https://lore.kernel.org/all/CAHk-=wgRr_D8CB-D9Kg-c=EHreAsk5SqXPwr9Y7k9sA6cWXJ6w@mail.gmail.com/ [1] Signed-off-by: Jakob Koschel <jkl820.git@gmail.com> Tested-by: Arpana Arland <arpanax.arland@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
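The pattern being applied, in generic form (struct and member names are illustrative, not copied from the ice driver):

	struct agg_vsi_info *iter, *old_agg_vsi_info = NULL;

	list_for_each_entry(iter, &agg_info->agg_vsi_list, list_entry) {
		if (iter->vsi_handle == vsi_handle) {
			old_agg_vsi_info = iter;
			break;
		}
	}

	if (!old_agg_vsi_info)
		return;		/* handle not found; iter is never used here */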
2023-03-28  ice: add profile conflict check for AVF FDIR  (Junfeng Guo)
Add a profile conflict check when adding some FDIR rules to avoid unexpected flow behavior. Rules that may conflict include:
  IPv4 <---> {IPv4_UDP, IPv4_TCP, IPv4_SCTP}
  IPv6 <---> {IPv6_UDP, IPv6_TCP, IPv6_SCTP}
For example, when we create an FDIR rule for IPv4, this rule will work on packets including IPv4, IPv4_UDP, IPv4_TCP and IPv4_SCTP. But if we then create an FDIR rule for IPv4_UDP and destroy it, the first FDIR rule for IPv4 will no longer match IPv4_UDP packets. To prevent this unexpected behavior, add the necessary profile conflict check in software when creating FDIR rules. Fixes: 1f7ea1cd6a37 ("ice: Enable FDIR Configure for AVF") Signed-off-by: Junfeng Guo <junfeng.guo@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-03-28  ice: Fix ice_cfg_rdma_fltr() to only update relevant fields  (Brett Creeley)
The current implementation causes ice_vsi_update() to update all VSI fields based on the cached VSI context. This also assumes that the ICE_AQ_VSI_PROP_Q_OPT_VALID bit is set. This can cause problems if the VSI context is not correctly synced by the driver. Fix this by only updating the fields that correspond to ICE_AQ_VSI_PROP_Q_OPT_VALID. Also, make sure to save the updated result in the cached VSI context on success. Fixes: 348048e724a0 ("ice: Implement iidc operations") Co-developed-by: Robert Malz <robertx.malz@intel.com> Signed-off-by: Robert Malz <robertx.malz@intel.com> Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Piotr Raczynski <piotr.raczynski@intel.com> Tested-by: Jakub Andrysiak <jakub.andrysiak@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2023-03-28  ice: fix W=1 headers mismatch  (Jesse Brandeburg)
make modules W=1 returns:
  .../ice/ice_txrx_lib.c:448: warning: Function parameter or member 'first_idx' not described in 'ice_finalize_xdp_rx'
  .../ice/ice_txrx.c:948: warning: Function parameter or member 'ntc' not described in 'ice_get_rx_buf'
  .../ice/ice_txrx.c:1038: warning: Excess function parameter 'rx_buf' description in 'ice_construct_skb'
Fix these warnings by adding and deleting the deviant arguments. Fixes: 2fba7dc5157b ("ice: Add support for XDP multi-buffer on Rx side") Fixes: d7956d81f150 ("ice: Pull out next_to_clean bump out of ice_put_rx_buf()") CC: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Piotr Raczynski <piotr.raczynski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
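What the fix amounts to: keeping the kernel-doc parameter list in sync with the prototype, e.g. (descriptions here are illustrative):

	/**
	 * ice_get_rx_buf - get the Rx buffer for the descriptor being processed
	 * @rx_ring: Rx descriptor ring to fetch the buffer from
	 * @size: size of the buffer to add to the skb
	 * @ntc: index of the next-to-clean element
	 *
	 * Every parameter in the prototype needs a matching @name line, and
	 * stale @name lines for removed parameters must be dropped, otherwise
	 * W=1 kernel-doc checks warn.
	 */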
2023-03-28  iommu/exynos: Fix set_platform_dma_ops() callback  (Marek Szyprowski)
There are some subtle differences between release_device() and set_platform_dma_ops() callbacks, so separate those two callbacks. Device links should be removed only in release_device(), because they were created in probe_device() on purpose and they are needed for proper Exynos IOMMU driver operation. While fixing this, remove the conditional code as it is not really needed. Reported-by: Jason Gunthorpe <jgg@ziepe.ca> Fixes: 189d496b48b1 ("iommu/exynos: Add missing set_platform_dma_ops callback") Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Sam Protsenko <semen.protsenko@linaro.org> Link: https://lore.kernel.org/r/20230315232514.1046589-1-m.szyprowski@samsung.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-03-28  net: ethernet: ti: am65-cpsw: enable p0 host port rx_vlan_remap  (Grygorii Strashko)
By default, tagged ingress packets to the switch from the host port P0 get an internal switch priority assigned equal to the DMA CPPI channel number they came from, unless CPSW_P0_CONTROL_REG.RX_REMAP_VLAN is enabled. This causes issues with applying QoS policies and mapping packets on external port fifos, because the default configuration is vlan_aware and DMA CPPI channels are shared between all external ports. Hence enable CPSW_P0_CONTROL_REG.RX_REMAP_VLAN, so that packets preserve the internal switch priority assigned from the VLAN priority tag, no matter which DMA CPPI channel they enter the switch through. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Link: https://lore.kernel.org/r/20230327092103.3256118-1-s-vadapalli@ti.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-03-28  net: ethernet: ti: am65-cpsw: add .ndo to set dma per-queue rate  (Grygorii Strashko)
Enable rate limiting of TX DMA queues for the CPSW interface by configuring the rate in absolute Mb/s units per TX queue. Example:
  ethtool -L eth0 tx 4
  echo 100 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
  echo 200 > /sys/class/net/eth0/queues/tx-1/tx_maxrate
  echo 50 > /sys/class/net/eth0/queues/tx-2/tx_maxrate
  echo 30 > /sys/class/net/eth0/queues/tx-3/tx_maxrate
  # disable
  echo 0 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Link: https://lore.kernel.org/r/20230327085758.3237155-1-s-vadapalli@ti.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-03-28  pinctrl: amd: Disable and mask interrupts on resume  (Kornel Dulęba)
This fixes a similar problem to the one observed in: commit 4e5a04be88fe ("pinctrl: amd: disable and mask interrupts on probe"). On some systems, during suspend/resume cycle firmware leaves an interrupt enabled on a pin that is not used by the kernel. This confuses the AMD pinctrl driver and causes spurious interrupts. The driver already has logic to detect if a pin is used by the kernel. Leverage it to re-initialize interrupt fields of a pin only if it's not used by us. Cc: stable@vger.kernel.org Fixes: dbad75dd1f25 ("pinctrl: add AMD GPIO driver support.") Signed-off-by: Kornel Dulęba <korneld@chromium.org> Link: https://lore.kernel.org/r/20230320093259.845178-1-korneld@chromium.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2023-03-28  xen/netback: remove not needed test in xenvif_tx_build_gops()  (Juergen Gross)
The tests at the end of the main loop in xenvif_tx_build_gops() for the number of grant mapping or copy operations reaching the array size of the operations buffer aren't needed. The loop can handle at maximum MAX_PENDING_REQS transfer requests, as XEN_RING_NR_UNCONSUMED_REQUESTS() takes unsent responses into consideration, too. Remove the tests. Suggested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-03-28  xen/netback: don't do grant copy across page boundary  (Juergen Gross)
Fix xenvif_get_requests() not to do grant copy operations across local page boundaries. This requires doubling the maximum number of copy operations per queue, as each copy could now be split into 2. Make sure that struct xenvif_tx_cb doesn't grow too large. Cc: stable@vger.kernel.org Fixes: ad7f402ae4f4 ("xen/netback: Ensure protocol headers don't fall in the non-linear area") Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Paul Durrant <paul@xen.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-03-28  HID: intel-ish-hid: Fix kernel panic during warm reset  (Tanu Malhotra)
During warm reset, device->fw_client is set to NULL. If a bus driver is registered after this NULL setting and before new firmware clients are enumerated by ISHTP, a kernel panic will result in the function ishtp_cl_bus_match(). This is because of the reference to device->fw_client->props.protocol_name. After getting successfully loaded, the ISH firmware sends a warm reset notification to remove all clients from the bus and sets device->fw_client to NULL. Until kernel v5.15, all enabled ISHTP kernel module drivers were loaded right after the first ISHTP device was registered, regardless of whether it was a matched or an unmatched device. This resulted in all drivers getting registered much before the warm reset notification from ISH. Starting with kernel v5.16, this issue got exposed after the change was introduced to load only bus drivers for the respective matching devices. In this scenario, the cros_ec_ishtp device and cros_ec_ishtp driver are registered after the warm reset fw_client NULL setting. The cros_ec_ishtp driver_register() triggers the callback to ishtp_cl_bus_match() to match the ISHTP driver to the device and causes a kernel panic in guid_equal() when dereferencing the fw_client NULL pointer to get protocol_name. Fixes: f155dfeaa4ee ("platform/x86: isthp_eclite: only load for matching devices") Fixes: facfe0a4fdce ("platform/chrome: chros_ec_ishtp: only load for matching devices") Fixes: 0d0cccc0fd83 ("HID: intel-ish-hid: hid-client: only load for matching devices") Fixes: 44e2a58cb880 ("HID: intel-ish-hid: fw-loader: only load for matching devices") Cc: <stable@vger.kernel.org> # 5.16+ Signed-off-by: Tanu Malhotra <tanu.malhotra@intel.com> Tested-by: Shaunak Saha <shaunak.saha@intel.com> Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
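A sketch of the kind of guard that avoids the dereference (simplified; the GUID comparison details are paraphrased from the commit text, not copied from the driver):

	static int ishtp_cl_bus_match(struct device *dev, struct device_driver *drv)
	{
		struct ishtp_cl_device *device = to_ishtp_cl_device(dev);
		struct ishtp_cl_driver *driver = to_ishtp_cl_driver(drv);

		/* fw_client is NULL between warm reset and re-enumeration */
		if (!device->fw_client)
			return 0;

		return guid_equal(&driver->id[0].guid,
				  &device->fw_client->props.protocol_name);
	}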
2023-03-28  smsc911x: avoid PHY being resumed when interface is not up  (Wolfram Sang)
SMSC911x doesn't need mdiobus suspend/resume; that's why it sets 'mac_managed_pm'. However, setting it needs to be moved from init to probe, so the mdiobus PM functions will really never be called (e.g. when the interface is not up yet during suspend/resume). Fixes: 3ce9f2bef755 ("net: smsc911x: Stop and start PHY during suspend and resume") Suggested-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230327083138.6044-1-wsa+renesas@sang-engineering.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
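The gist, as a fragment of the probe path (placement is the point; the surrounding driver code is omitted):

	/* in probe, once the PHY device is attached and before any
	 * suspend/resume can run: */
	phydev->mac_managed_pm = true;	/* MAC driver handles PHY PM itself,
					 * keeping mdiobus suspend/resume out */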