path: root/drivers
2024-04-22  cxgb4: flower: validate control flags  (Asbjørn Sloth Tønnesen)
This driver currently doesn't support any control flags. Use flow_rule_match_has_control_flags() to check for control flags, such as can be set through `tc flower ... ip_flags frag`. In case any control flags are masked, flow_rule_match_has_control_flags() sets a NL extended error message, and we return -EOPNOTSUPP. Only compile-tested, no hardware available. Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240418161751.189226-1-ast@fiberby.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
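For illustration, the check typically slots into the driver's flower parsing path like this; a sketch only, with the surrounding variable names (cls, extack) assumed rather than taken from the actual cxgb4 code:

    struct flow_rule *rule = flow_cls_offload_flow_rule(cls);

    /* Reject any match on control flags (e.g. ip_flags frag) that the
     * hardware cannot honour; the helper also fills in the NL extended
     * ack with a human-readable reason.
     */
    if (flow_rule_match_has_control_flags(rule, extack))
            return -EOPNOTSUPP;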
2024-04-22  clk: scmi: Add support for get/set duty_cycle operations  (Cristian Marussi)
Provide the CLK framework callbacks related to get/set clock duty cycle if the related SCMI clock supports OEM extended configurations. CC: Michael Turquette <mturquette@baylibre.com> CC: Stephen Boyd <sboyd@kernel.org> CC: linux-clk@vger.kernel.org Signed-off-by: Cristian Marussi <cristian.marussi@arm.com> Link: https://lore.kernel.org/r/20240415163649.895268-6-cristian.marussi@arm.com Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2024-04-22  clk: scmi: Add support for re-parenting restricted clocks  (Cristian Marussi)
Some exposed SCMI Clocks could be marked as non-supporting re-parenting changes. Configure a clk_ops descriptor which does not provide the re-parenting callbacks for such clocks when registering with CLK framework. CC: Michael Turquette <mturquette@baylibre.com> CC: Stephen Boyd <sboyd@kernel.org> CC: linux-clk@vger.kernel.org Signed-off-by: Cristian Marussi <cristian.marussi@arm.com> Link: https://lore.kernel.org/r/20240415163649.895268-5-cristian.marussi@arm.com Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2024-04-22  clk: scmi: Add support for rate change restricted clocks  (Cristian Marussi)
Some exposed SCMI Clocks could be marked as non-supporting rate changes. Configure a clk_ops descriptor which does not provide the rate change callbacks for such clocks when registering with CLK framework. CC: Michael Turquette <mturquette@baylibre.com> CC: Stephen Boyd <sboyd@kernel.org> CC: linux-clk@vger.kernel.org Signed-off-by: Cristian Marussi <cristian.marussi@arm.com> Link: https://lore.kernel.org/r/20240415163649.895268-4-cristian.marussi@arm.com Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2024-04-22  clk: scmi: Add support for state control restricted clocks  (Cristian Marussi)
Some exposed SCMI Clocks could be marked as non-supporting state changes. Configure a clk_ops descriptor which does not provide the state change callbacks for such clocks when registering with CLK framework. CC: Michael Turquette <mturquette@baylibre.com> CC: Stephen Boyd <sboyd@kernel.org> CC: linux-clk@vger.kernel.org Signed-off-by: Cristian Marussi <cristian.marussi@arm.com> Link: https://lore.kernel.org/r/20240415163649.895268-3-cristian.marussi@arm.com Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2024-04-22  clk: scmi: Allocate CLK operations dynamically  (Cristian Marussi)
SCMI Clocks descriptors expose an increasing number of properties, which, in turn, leads to a varying set of supported CLK operations to be associated with each clock. Providing statically pre-defined CLK operations structs for all the possible combinations of allowed clock features is becoming cumbersome and error-prone. Allocate the per-clock operations descriptors dynamically and populate them with the strictly needed set of operations depending on the advertised clock properties: one descriptor is created for each distinct combination of clock operations, thus minimizing the number of clk_ops structures to the minimum needed. CC: Michael Turquette <mturquette@baylibre.com> CC: Stephen Boyd <sboyd@kernel.org> CC: linux-clk@vger.kernel.org Signed-off-by: Cristian Marussi <cristian.marussi@arm.com> Link: https://lore.kernel.org/r/20240415163649.895268-2-cristian.marussi@arm.com Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org>
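A minimal sketch of the allocation approach, with invented parameters and callback names (the real driver derives these from the SCMI clock attributes):

    static const struct clk_ops *
    scmi_clk_ops_alloc(struct device *dev, bool rate_ctrl, bool parent_ctrl)
    {
            struct clk_ops *ops;

            ops = devm_kzalloc(dev, sizeof(*ops), GFP_KERNEL);
            if (!ops)
                    return NULL;

            /* operations every SCMI clock supports */
            ops->recalc_rate = scmi_clk_recalc_rate;
            ops->round_rate = scmi_clk_round_rate;

            /* optional operations, keyed on the advertised properties */
            if (rate_ctrl)
                    ops->set_rate = scmi_clk_set_rate;
            if (parent_ctrl) {
                    ops->set_parent = scmi_clk_set_parent;
                    ops->get_parent = scmi_clk_get_parent;
            }

            return ops;
    }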
2024-04-22  Merge branch 'for-uring-ubufops' into HEAD  (Jakub Kicinski)
Pavel Begunkov says: ==================== implement io_uring notification (ubuf_info) stacking (net part) To have per request buffer notifications each zerocopy io_uring send request allocates a new ubuf_info. However, as an skb can carry only one uarg, it may force the stack to create many small skbs hurting performance in many ways. The patchset implements notification, i.e. an io_uring's ubuf_info extension, stacking. It attempts to link ubuf_info's into a list, allowing multiple of them per skb. liburing/examples/send-zerocopy shows up to a 6x performance improvement for TCP with 4KB bytes per send, and levels it with MSG_ZEROCOPY. Without the patchset, much larger sends are required to utilise the full potential.
bytes | before | after (Kqps)
1200  | 195    | 1023
4000  | 193    | 1386
8000  | 154    | 1058
==================== Link: https://lore.kernel.org/all/cover.1713369317.git.asml.silence@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-04-22  net: extend ubuf_info callback to ops structure  (Pavel Begunkov)
We'll need to associate additional callbacks with ubuf_info, so introduce a structure holding ubuf_info callbacks. Apart from the smarter io_uring notification management introduced in the next patches, it can be used to generalise msg_zerocopy_put_abort() and also store ->sg_from_iter, which is currently passed in struct msghdr. Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/all/a62015541de49c0e2a8a0377a1d5d0a5aeb07016.1713369317.git.asml.silence@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
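A sketch of the shape of the change; the complete() signature mirrors the existing msg_zerocopy callback, while the exact field layout here is illustrative:

    struct ubuf_info_ops {
            void (*complete)(struct sk_buff *skb, struct ubuf_info *uarg,
                             bool zerocopy_success);
    };

    struct ubuf_info {
            const struct ubuf_info_ops *ops; /* replaces the old callback pointer */
            refcount_t refcnt;
            u8 flags;
    };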
2024-04-22  vfio/pci: fix potential memory leak in vfio_intx_enable()  (Ye Bin)
If vfio_irq_ctx_alloc() fails, the 'name' allocation is leaked. Fixes: 18c198c96a81 ("vfio/pci: Create persistent INTx handler") Signed-off-by: Ye Bin <yebin10@huawei.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/r/20240415015029.3699844-1-yebin10@huawei.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
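In outline, the fix frees the interrupt name on the allocation-failure path; a sketch based on the commit description, so exact identifiers may differ from the driver:

    name = kasprintf(GFP_KERNEL, "vfio-intx(%s)", pci_name(pdev));
    if (!name)
            return -ENOMEM;

    ctx = vfio_irq_ctx_alloc(vdev, 0);
    if (!ctx) {
            kfree(name);    /* previously leaked on this path */
            return -ENOMEM;
    }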
2024-04-22  vfio/pci: Pass eventfd context object through irqfd  (Alex Williamson)
Further avoid lookup of the context object by passing it through the irqfd data field. Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240401195406.3720453-3-alex.williamson@redhat.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2024-04-22  vfio/pci: Pass eventfd context to IRQ handler  (Alex Williamson)
Create a link back to the vfio_pci_core_device on the eventfd context object to avoid lookups in the interrupt path. The context is known valid in the interrupt handler. Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240401195406.3720453-2-alex.williamson@redhat.com Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2024-04-23  irqchip/stm32-exti: Mark events reserved with RIF configuration check  (Antonio Borneo)
EXTI events availability depends on Resource Isolation Framework (RIF) configuration. RIF grants access to buses with Compartment ID (CID) filtering, secure and privilege level. It also assigns EXTI events to one or several processors (CID, Secure, Privilege). EXTI events used by Linux must be CID-filtered (EnCIDCFGR.CFEN=1) and statically assigned to CID1 (EnCIDCFGR.CID=CID1). EXTI events not fulfilling these conditions are marked as reserved and can't be used by Linux. Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240415134926.1254428-7-antonio.borneo@foss.st.com
2024-04-23  irqchip/stm32-exti: Skip secure events  (Antonio Borneo)
Secure OS can reserve some EXTI events, marking them as "secure" by setting the corresponding bit in register SECCFGR (aka TZENR). These events cannot be used by Linux. Read the list of reserved events and check it during interrupt domain allocation. Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240415134926.1254428-6-antonio.borneo@foss.st.com
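Conceptually the driver snapshots SECCFGR once and rejects secure events at interrupt-domain allocation time; a sketch with assumed register and field names:

    /* events whose SECCFGR (aka TZENR) bit is set belong to the secure OS */
    host_data->secure_mask = readl_relaxed(base + EXTI_SECCFGR);

    /* later, in the irq_domain .alloc() path */
    if (host_data->secure_mask & BIT(hwirq))
            return -EPERM;  /* event reserved by the secure world */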
2024-04-23  irqchip/stm32-exti: Convert driver to standard PM  (Antonio Borneo)
All of the driver's suspend/resume dependencies have been fixed long ago. There are no more reasons to use syscore PM for the part of this driver related to the Cortex-A MPU. Switch to standard PM using NOIRQ_SYSTEM_SLEEP_PM_OPS, so all the registers of the interrupt controller get resumed before any irq gets enabled. A side effect of this change is to drop the only global variable, 'stm32_host_data', used to keep the driver's data for syscore_ops. This makes the driver ready to support multiple EXTI instances. Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240415134926.1254428-5-antonio.borneo@foss.st.com
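The conversion boils down to registering noirq sleep callbacks through standard PM ops instead of syscore ops; the suspend/resume handler names here are assumptions:

    static const struct dev_pm_ops stm32_exti_dev_pm_ops = {
            NOIRQ_SYSTEM_SLEEP_PM_OPS(stm32_exti_suspend, stm32_exti_resume)
    };

    static struct platform_driver stm32_exti_driver = {
            .driver = {
                    .name = "stm32_exti",
                    .pm   = &stm32_exti_dev_pm_ops,
            },
    };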
2024-04-23  irqchip/stm32-exti: Map interrupts through interrupts-extended  (Antonio Borneo)
The mapping of EXTI events to its parent interrupt controller is both SoC and instance dependent. The current implementation requires adding a new mapping table to the driver's code and a new compatible for each new EXTI instance. Check for the presence of the optional interrupts-extended property and use it to map EXTI events to the parent's interrupts. For old device trees without the optional interrupts-extended property, the driver's behavior is unchanged, thus keeping backward compatibility. Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240415134926.1254428-4-antonio.borneo@foss.st.com
2024-04-23  irqchip/stm32-exti: Fix minor indentation issue  (Antonio Borneo)
Commit 046a6ee2343b ("irqchip: Bulk conversion to generic_handle_domain_irq()") incorrectly added a leading space character in the line indentation. Use only TAB for indentation, removing the leading space. Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240415134926.1254428-2-antonio.borneo@foss.st.com
2024-04-23  irqchip/mxs: Declare icoll_handle_irq() as static  (Stefan Wahren)
After commit 5bb578a0c1b8 ("ARM: 9298/1: Drop custom mdesc->handle_irq()") the function icoll_handle_irq() is only used within irq-mxs.c. So declare it as static to fix the warning about a missing prototype when building with W=1. Signed-off-by: Stefan Wahren <wahrenst@gmx.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2024-04-23  irqchip/loongson-pch-pic: Update interrupt registration policy  (Baoqi Zhang)
The current code is using a fixed mapping between the LS7A interrupt source and the HT interrupt vector. This prevents the utilization of the full interrupt vector space and therefore limits the number of interrupt sources in a system. Replace the fixed mapping with a dynamic mapping which allocates a vector when an interrupt source is set up. This prevents unused sources from keeping vectors from being used for other devices. Introduce a mapping table in struct pch_pic, where each interrupt source is allocated an index from the table, in order of request, to serve as its 'hwirq' number, and the table entry is set to the interrupt source number. This hwirq number is then configured as the vector in the HT interrupt controller. The hwirq obtained for an interrupt source remains valid until system reset. Co-developed-by: Biao Dong <dongbiao@loongson.cn> Signed-off-by: Biao Dong <dongbiao@loongson.cn> Co-developed-by: Tianyang Zhang <zhangtianyang@loongson.cn> Signed-off-by: Tianyang Zhang <zhangtianyang@loongson.cn> Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240422093830.27212-1-zhangtianyang@loongson.cn
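The mapping table works roughly like this; a sketch, with the table and field names assumed:

    /* allocate the next free vector slot for an interrupt source and
     * remember which source owns it; the slot index becomes the hwirq
     */
    static int pch_pic_alloc_hwirq(struct pch_pic *priv, u32 source)
    {
            int hwirq;

            hwirq = find_first_zero_bit(priv->hwirq_map, PIC_COUNT);
            if (hwirq == PIC_COUNT)
                    return -ENOSPC;

            __set_bit(hwirq, priv->hwirq_map);
            priv->table[hwirq] = source;    /* hwirq -> interrupt source */

            return hwirq;
    }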
2024-04-22  net/mlx5e: Implement ethtool callbacks for supporting per-queue coalescing  (Rahul Rameshbabu)
Use mlx5 on-the-fly coalescing configuration support to enable individual channel configuration. Co-developed-by: Nabil S. Alramli <dev@nalramli.com> Signed-off-by: Nabil S. Alramli <dev@nalramli.com> Co-developed-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com> Reviewed-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240419080445.417574-6-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
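The ethtool hooks involved are the per-queue coalesce callbacks; the ethtool_ops fields are real, while the mlx5e handler names below are assumptions:

    static const struct ethtool_ops mlx5e_ethtool_ops = {
            /* existing callbacks elided */
            .get_per_queue_coalesce = mlx5e_get_per_queue_coalesce,
            .set_per_queue_coalesce = mlx5e_set_per_queue_coalesce,
    };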
2024-04-22  net/mlx5e: Support updating coalescing configuration without resetting channels  (Rahul Rameshbabu)
When CQE mode or DIM state is changed, gracefully reconfigure channels to handle the new configuration. Previously, the driver would create new channels reflecting the changes rather than update the original channels. Co-developed-by: Nabil S. Alramli <dev@nalramli.com> Signed-off-by: Nabil S. Alramli <dev@nalramli.com> Co-developed-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240419080445.417574-5-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-04-22  net/mlx5e: Dynamically allocate DIM structure for SQs/RQs  (Rahul Rameshbabu)
Make it possible for the DIM structure to be torn down while an SQ or RQ is still active. Changing the CQ period mode is an example where the previous sampling done with the DIM structure would need to be invalidated. Co-developed-by: Nabil S. Alramli <dev@nalramli.com> Signed-off-by: Nabil S. Alramli <dev@nalramli.com> Co-developed-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240419080445.417574-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-04-22  net/mlx5e: Use DIM constants for CQ period mode parameter  (Rahul Rameshbabu)
Use core DIM CQ period mode enum values for the CQ parameter for the period mode. Translate the value to the specific mlx5 device constant for the selected period mode when creating a CQ. Avoid needing to translate mlx5 device constants to DIM constants for core DIM functionality. Co-developed-by: Nabil S. Alramli <dev@nalramli.com> Signed-off-by: Nabil S. Alramli <dev@nalramli.com> Co-developed-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240419080445.417574-3-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
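The per-mode translation then happens once, at CQ creation; both enums exist in mainline headers (<linux/dim.h> and <linux/mlx5/cq.h>), while the wrapper function itself is illustrative:

    static u8 mlx5e_cq_period_mode(enum dim_cq_period_mode mode)
    {
            switch (mode) {
            case DIM_CQ_PERIOD_MODE_START_FROM_CQE:
                    return MLX5_CQ_PERIOD_MODE_START_FROM_CQE;
            case DIM_CQ_PERIOD_MODE_START_FROM_EQE:
            default:
                    return MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
            }
    }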
2024-04-22  net/mlx5e: Move DIM function declarations to en/dim.h  (Rahul Rameshbabu)
Create a header specifically for DIM-related declarations. Move existing DIM-specific functionality from en.h. Future DIM-related functionality will be declared in en/dim.h in subsequent patches. Co-developed-by: Nabil S. Alramli <dev@nalramli.com> Signed-off-by: Nabil S. Alramli <dev@nalramli.com> Co-developed-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Joe Damato <jdamato@fastly.com> Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240419080445.417574-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-04-22  net: dsa: vsc73xx: add structure descriptions  (Pawel Dembicki)
This commit updates the documentation describing the structures used in vsc73xx. This will help prevent kdoc-related issues in the future. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com> Link: https://lore.kernel.org/r/20240417205048.3542839-6-paweldembicki@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-04-22  net: dsa: vsc73xx: Add define for max num of ports  (Pawel Dembicki)
This patch introduces a new define: VSC73XX_MAX_NUM_PORTS, which can be used in the future instead of a hardcoded value. Currently, the only hardcoded value is vsc->ds->num_ports. It is being replaced with the new define. Suggested-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/20240417205048.3542839-5-paweldembicki@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
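The change itself is small; the value 8 below is inferred from the existing ds->num_ports assignment, so treat it as an assumption:

    #define VSC73XX_MAX_NUM_PORTS   8

    /* was: a hardcoded count when registering the switch */
    vsc->ds->num_ports = VSC73XX_MAX_NUM_PORTS;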
2024-04-22  net: dsa: vsc73xx: use macros for rgmii recognition  (Pawel Dembicki)
This is preparation for future use. At this moment, the RGMII port is used only for a connection to the MAC interface, but in the future someone could connect a PHY to it. Using the "phy_interface_mode_is_rgmii" macro allows for the proper recognition of all RGMII modes. Suggested-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://lore.kernel.org/r/20240417205048.3542839-4-paweldembicki@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
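With the macro (from <linux/phy.h>), one test covers RGMII, RGMII_ID, RGMII_RXID and RGMII_TXID; the helper called in this sketch is illustrative:

    if (phy_interface_mode_is_rgmii(state->interface))
            vsc73xx_configure_rgmii(vsc, port);     /* assumed helper */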
2024-04-22  net: dsa: vsc73xx: convert to PHYLINK  (Pawel Dembicki)
This patch replaces the adjust_link API with the phylink APIs that provide equivalent functionality. The remaining functionality from adjust_link is now covered by the mac_link_* and mac_config callbacks of the phylink_mac_ops structure. Removes .adjust_link and adds the phylink_mac_ops callbacks .mac_config, .mac_link_up and .mac_link_down. Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com> Link: https://lore.kernel.org/r/20240417205048.3542839-3-paweldembicki@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
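The resulting wiring, sketched; the three handlers are the driver's new callbacks, and the struct is registered through DSA's phylink_mac_ops hook:

    static const struct phylink_mac_ops vsc73xx_phylink_mac_ops = {
            .mac_config    = vsc73xx_mac_config,
            .mac_link_down = vsc73xx_mac_link_down,
            .mac_link_up   = vsc73xx_mac_link_up,
    };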
2024-04-22  net: dsa: vsc73xx: use read_poll_timeout instead of delay loop  (Pawel Dembicki)
Switch the delay loop during the Arbiter empty check from vsc73xx_adjust_link() to use read_poll_timeout(). Functionally, one msleep() call is eliminated at the end of the loop in the timeout case. As Russell King suggested: "This [change] avoids the issue that on the last iteration, the code reads the register, tests it, finds the condition that's being waiting for is false, _then_ waits and end up printing the error message - that last wait is rather useless, and as the arbiter state isn't checked after waiting, it could be that we had success during the last wait." Suggested-by: Russell King <linux@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com> Link: https://lore.kernel.org/r/20240417205048.3542839-2-paweldembicki@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
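The converted poll looks roughly like this; read_poll_timeout() comes from <linux/iopoll.h>, while the register helper and bit name are assumptions:

    u32 val;
    int ret;

    /* re-read the arbiter state every 1 ms, give up after 10 ms; the
     * condition is evaluated after each read, so success during the
     * final wait is not missed
     */
    ret = read_poll_timeout(vsc73xx_arbiter_state, val,
                            val == ARBITER_EMPTY, 1000, 10000, false, vsc);
    if (ret)
            dev_err(vsc->dev, "arbiter did not become empty\n");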
2024-04-22  NFC: trf7970a: disable all regulators on removal  (Paul Geurts)
During module probe, the 'vin' and 'vdd-io' regulators are acquired and enabled, but the vdd-io regulator overwrites the 'vin' regulator pointer. During remove, only vdd-io is disabled, as the vin regulator pointer is no longer available. When regulator_put() is called during resource cleanup a kernel warning is given, as the regulator is still enabled. Store the two regulators in separate pointers and disable both regulators on module remove. Fixes: 49d22c70aaf0 ("NFC: trf7970a: Add device tree option of 1.8 Volt IO voltage") Signed-off-by: Paul Geurts <paul_geurts@live.nl> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/DB7PR09MB26847A4EBF88D9EDFEB1DA0F950E2@DB7PR09MB2684.eurprd09.prod.outlook.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
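In outline (the field names are assumptions), probe keeps distinct handles and remove disables both:

    /* probe: separate pointers instead of reusing one */
    trf->vin_regulator = devm_regulator_get(dev, "vin");
    if (IS_ERR(trf->vin_regulator))
            return PTR_ERR(trf->vin_regulator);

    trf->vddio_regulator = devm_regulator_get(dev, "vdd-io");
    if (IS_ERR(trf->vddio_regulator))
            return PTR_ERR(trf->vddio_regulator);

    /* remove: disable both so the later regulator_put() is clean */
    regulator_disable(trf->vddio_regulator);
    regulator_disable(trf->vin_regulator);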
2024-04-22  vDPA: code clean for vhost_vdpa uapi  (Zhu Lingshan)
This commit cleans up the uapi for vhost_vdpa by better naming some of the enums which report blk information to user space, and they are not in any official releases yet. Fixes: 1ac61ddfee93 ("vDPA: report virtio-blk flush info to user space") Fixes: ae1374b7f72c ("vDPA: report virtio-block read-only info to user space") Fixes: 330b8aea6924 ("vDPA: report virtio-block max segment size to user space") Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com> Message-Id: <20240415111047.1047774-1-lingshan.zhu@intel.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-04-22  net: dsa: mv88e6xx: fix supported_interfaces setup in mv88e6250_phylink_get_caps()  (Matthias Schiffer)
With the recent PHYLINK changes requiring supported_interfaces to be set, MV88E6250 family switches like the 88E6020 fail to probe - cmode is never initialized on these devices, so mv88e6250_phylink_get_caps() does not set any supported_interfaces flags. Instead of a cmode, on 88E6250 we have a read-only port mode value that encodes similar information. There is no reason to bother mapping port mode to the cmodes of other switch models; instead we introduce a mv88e6250_setup_supported_interfaces() that is called directly from mv88e6250_phylink_get_caps(). Fixes: de5c9bf40c45 ("net: phylink: require supported_interfaces to be filled") Signed-off-by: Matthias Schiffer <matthias.schiffer@ew.tq-group.com> Link: https://lore.kernel.org/r/20240417103737.166651-1-matthias.schiffer@ew.tq-group.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-04-22  IB/mlx5: Use __iowrite64_copy() for write combining stores  (Jason Gunthorpe)
mlx5 has a built-in self-test at driver startup to evaluate if the platform supports write combining to generate a 64 byte PCIe TLP or not. This has proven necessary because a lot of common scenarios end up with broken write combining (especially inside virtual machines) and there is no other way to learn this information. This self-test has been consistently failing on new ARM64 CPU designs (specifically with NVIDIA Grace's implementation of Neoverse V2). The C loop around writeq() generates some pretty terrible ARM64 assembly, but historically this has worked on a lot of existing ARM64 CPUs till now. We see it succeed about 1 time in 10,000 on the worst affected systems. The CPU architects speculate that the load instructions interspersed with the stores make the WC buffers statistically flush too often, and thus the generation of large TLPs becomes infrequent. This makes the boot-up test unreliable in that it indicates no write-combining; however, userspace would be fine since it uses an ST4 instruction. Further, S390 has similar issues where only the special zpci_memcpy_toio() will actually generate large TLPs, and the open-coded loop does not trigger it at all. Fix both ARM64 and S390 by switching to __iowrite64_copy() which now provides architecture specific variants that have a high chance of generating a large TLP with write combining. x86 continues to use a similar writeq loop in the generic __iowrite64_copy(). Fixes: 11f552e21755 ("IB/mlx5: Test write combining support") Link: https://lore.kernel.org/r/6-v3-1893cd8b9369+1925-mlx5_arm_wc_jgg@nvidia.com Tested-by: Niklas Schnelle <schnelle@linux.ibm.com> Acked-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
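Usage is a drop-in for the open-coded loop; __iowrite64_copy() takes its count in 64-bit words, so a 64-byte doorbell block is written as:

    /* eight 8-byte stores; on ARM64 this now uses ST4, on s390 it goes
     * through zpci_memcpy_toio(), both of which reliably fill a write
     * combining buffer for a single 64-byte TLP
     */
    __iowrite64_copy(wc_dest, src, 64 / 8);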
2024-04-22  net: hns3: Remove io_stop_wc() calls after __iowrite64_copy()  (Jason Gunthorpe)
Now that the ARM64 arch implementation does the DGH as part of __iowrite64_copy(), there is no reason to open code this in drivers. Link: https://lore.kernel.org/r/5-v3-1893cd8b9369+1925-mlx5_arm_wc_jgg@nvidia.com Reviewed-by: Jijie Shao <shaojijie@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  ice: Add tx_scheduling_layers devlink param  (Lukasz Czapnik)
It was observed that Tx performance was inconsistent across all queues and/or VSIs and that it was directly connected to the existing 9-layer topology of the Tx scheduler. Introduce a new private devlink param - tx_scheduling_layers. This parameter gives the user flexibility to choose the 5-layer transmit scheduler topology, which helps to smooth out the transmit performance. Allowed parameter values are 5 and 9. Example usage:
Show:
devlink dev param show pci/0000:4b:00.0 name tx_scheduling_layers
pci/0000:4b:00.0:
  name tx_scheduling_layers type driver-specific
    values:
      cmode permanent value 9
Set:
devlink dev param set pci/0000:4b:00.0 name tx_scheduling_layers value 5 cmode permanent
devlink dev param set pci/0000:4b:00.0 name tx_scheduling_layers value 9 cmode permanent
Signed-off-by: Lukasz Czapnik <lukasz.czapnik@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Co-developed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-04-22  ice: Enable switching default Tx scheduler topology  (Michal Wilczynski)
Introduce support for Tx scheduler topology change, based on user selection, from the default 9-layer to 5-layer. The change requires NVM (version 3.20 or newer) and DDP package (OS Package 1.3.30 or newer - available for over a year in linux-firmware, since commit aed71f296637 in linux-firmware ("ice: Update package to 1.3.30.0")) https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/commit/?id=aed71f296637 Enable the 5-layer topology switch in the init path of the driver. To accomplish that, the upload of the DDP package needs to be delayed until the change in Tx topology is finished. To trigger the Tx change, the user selection should be changed in the NVM using devlink. Then the platform should be rebooted. Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com> Co-developed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-04-22  ice: Adjust the VSI/Aggregator layers  (Raj Victor)
Adjust the VSI/Aggregator layers based on the number of logical layers supported by the FW. Currently the VSI and Aggregator layers are fixed based on the 9 layer scheduler tree layout. Due to performance reasons the number of layers of the scheduler tree is changing from 9 to 5. It requires a readjustment of these VSI/Aggregator layer values. Signed-off-by: Raj Victor <victor.raj@intel.com> Co-developed-by: Michal Wilczynski <michal.wilczynski@intel.com> Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-04-22  ice: Support 5 layer topology  (Raj Victor)
There is a performance issue when the number of VSIs is not a multiple of 8. This is caused by the max children limitation per node (8) in the 9-layer topology. The BW credits are shared evenly among the children by default. Assume one node has 8 children and the other has 1. The parent of these nodes shares the BW credits equally among them. Apparently this causes a problem for the first node which has 8 children. The 9th VM gets more BW credits than the first 8 VMs. Example:
1) With 8 VMs:
for x in 0 1 2 3 4 5 6 7; do taskset -c ${x} netperf -P0 -H 172.68.169.125 & sleep .1 ; done
tx_queue_0_packets: 23283027
tx_queue_1_packets: 23292289
tx_queue_2_packets: 23276136
tx_queue_3_packets: 23279828
tx_queue_4_packets: 23279828
tx_queue_5_packets: 23279333
tx_queue_6_packets: 23277745
tx_queue_7_packets: 23279950
tx_queue_8_packets: 0
2) With 9 VMs:
for x in 0 1 2 3 4 5 6 7 8; do taskset -c ${x} netperf -P0 -H 172.68.169.125 & sleep .1 ; done
tx_queue_0_packets: 24163396
tx_queue_1_packets: 24164623
tx_queue_2_packets: 24163188
tx_queue_3_packets: 24163701
tx_queue_4_packets: 24163683
tx_queue_5_packets: 24164668
tx_queue_6_packets: 23327200
tx_queue_7_packets: 24163853
tx_queue_8_packets: 91101417
So on average queue 8 statistics show that 3.7 times more packets were sent there than to the other queues. The FW, starting with version 3.20, has increased the max number of children per node by reducing the number of layers from 9 to 5. Reflect this on the driver side. Signed-off-by: Raj Victor <victor.raj@intel.com> Co-developed-by: Michal Wilczynski <michal.wilczynski@intel.com> Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com> Co-developed-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-04-22  devlink: extend devlink_param *set pointer  (Mateusz Polchlopek)
Extend devlink_param *set function pointer to take extack as a param. Sometimes it is needed to pass information to the end user from set function. It is more proper to use for that netlink instead of passing message to dmesg. Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
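The new prototype, as described by the commit (see include/net/devlink.h for the authoritative form):

    /* in struct devlink_param */
    int (*set)(struct devlink *devlink, u32 id,
               struct devlink_param_gset_ctx *ctx,
               struct netlink_ext_ack *extack);

    /* drivers can now report failures to the user with, e.g.,
     * NL_SET_ERR_MSG_MOD(extack, "reason");
     */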
2024-04-22  RDMA/rxe: Let destroy qp succeed with stuck packet  (Bob Pearson)
In some situations a sent packet may get queued in the NIC longer than the timeout of a ULP. Currently if this happens the ULP may try to reset the link by destroying the qp and setting up an alternate connection, but will fail because the rxe driver is waiting for the packet to finish getting sent and be returned to the skb destructor function where the qp reference holding things up will be dropped. This patch modifies the way that the qp is passed to the destructor to pass the qp index and not a qp pointer. The destructor will then attempt to look up the qp from its index and, if the lookup fails, exit early. This requires taking a reference on the struct sock rather than the qp, allowing the qp to be destroyed while the sk is still around waiting for the packet to finish. Link: https://lore.kernel.org/r/20240329145513.35381-15-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Get rid of pkt resend on err  (Bob Pearson)
Currently the rxe driver detects packet drops by ip_local_out(), which occur before the packet is sent on the wire, and attempts to resend them. This is redundant with the usual retry mechanism, which covers packets that get dropped in transit to or from the remote node. The way this is implemented is not robust, since it sets need_req_skb and waits for the number of local skbs outstanding for this qp to drop below a low water mark. This is racy since the skb may be sent to the destructor before the requester can set the need_req_skb flag. This will cause a deadlock in the send path for that qp. This patch removes this mechanism since the normal retry path will correct the error and resend the packet, and it makes no difference if the packet is dropped locally or later. Link: https://lore.kernel.org/r/20240329145513.35381-14-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Make rxe_loopback match rxe_send behavior  (Bob Pearson)
The rxe send path currently counts the number of skbs outstanding between the rxe driver and the ethernet driver to prevent too many packets from accumulating while waiting to be sent. This patch makes the local loopback path behave the same way. The loopback path forwards the packets to the receive path which will eventually call kfree_skb on all packets and drop the qp references. This makes the loopback path more useful for software testing. Link: https://lore.kernel.org/r/20240329145513.35381-13-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Fix incorrect rxe_put in error path  (Bob Pearson)
In rxe_send() a ref is taken on the qp to keep it alive until the kfree_skb() has a chance to call the skb destructor rxe_skb_tx_dtor() which drops the reference. If the packet has an incorrect protocol the error path just calls kfree_skb() which will call the destructor which will drop the ref. Currently the driver also calls rxe_put() which is incorrect. Additionally since the packets sent to rxe_send() are under the control of the driver and it only ever produces IPV4 or IPV6 packets the simplest fix is to remove all the code in this block. Link: https://lore.kernel.org/r/20240329145513.35381-12-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Fixes: 9eb7f8e44d13 ("IB/rxe: Move refcounting earlier in rxe_send()") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Don't call direct between tasks  (Bob Pearson)
Replace calls to rxe_run_task() with rxe_sched_task(). This prevents the tasks from all running on the same cpu. This change slightly reduces performance for single qp send and write benchmarks in loopback mode but greatly improves the performance with multiple qps, because if rxe_run_task() is used all the work tends to be performed on one cpu. For actual on-the-wire benchmarks there is no noticeable performance change. Link: https://lore.kernel.org/r/20240329145513.35381-11-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Don't call rxe_requester from rxe_completer  (Bob Pearson)
Instead of rescheduling rxe_requester from rxe_completer() just extend the duration of rxe_sender() by one pass. Setting run_requester_again forces rxe_completer() to return 0 which will cause rxe_sender() to be called at least one more time. Link: https://lore.kernel.org/r/20240329145513.35381-10-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Don't schedule rxe_completer from rxe_requester  (Bob Pearson)
Now that rxe_completer() is always called serially after rxe_requester() there is no reason to schedule rxe_completer() from rxe_requester(). Link: https://lore.kernel.org/r/20240329145513.35381-9-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Remove save/rollback_state in rxe_requester  (Bob Pearson)
Now that req.task and comp.task are merged it is no longer necessary to call save_state() before calling rxe_xmit_pkt() and rollback_state() if rxe_xmit_pkt() fails. This was done originally to prevent races between rxe_completer() and rxe_requester() which now cannot happen. Link: https://lore.kernel.org/r/20240329145513.35381-8-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Merge request and complete tasks  (Bob Pearson)
Currently the rxe driver has three work queue tasks per qp. These are the req.task, comp.task and resp.task which call rxe_requester(), rxe_completer() and rxe_responder() respectively directly or on work queues. Each of these subroutines checks to see if there is work to be performed on the send queue or on the response packet queue or the request packet queue and will run until there is no work remaining or yield the cpu and reschedule itself until there is no work remaining. This commit combines the req.task and comp.task into a single send.task and renames the resp.task to the recv.task. The combined send.task calls rxe_requester() and rxe_completer() serially and continues until all work on both the send queue and the response packet queue are done. In various benchmarks the performance is either improved or left the same. At high scale there is a significant reduction in the load on the cpu. This is the first step in combining these two tasks. Once they are serialized cross rescheduling of req.task and comp.task can be more efficiently handled by just letting the send.task continue to run. This will be done in the next several patches. Link: https://lore.kernel.org/r/20240329145513.35381-7-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
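The combined task, in outline; this sketch assumes the rxe task-loop convention where returning 0 means "call me again":

    static int rxe_sender(struct rxe_qp *qp)
    {
            int req_ret;
            int comp_ret;

            /* process the send queue */
            req_ret = rxe_requester(qp);

            /* process the response packet queue */
            comp_ret = rxe_completer(qp);

            /* stop only once both report no work left */
            return (req_ret && comp_ret) ? -EAGAIN : 0;
    }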
2024-04-22  RDMA/rxe: Remove redundant scheduling of rxe_completer  (Bob Pearson)
In rxe_post_send_kernel() if the qp is in the error state after posting the work requests the rxe_completer() task is scheduled. But, the only way to move the qp into the error state is to call rxe_qp_error() which also schedules the rxe_completer() task to drain the queues. Calling it a second time has no effect. This commit removes the redundant call. Link: https://lore.kernel.org/r/20240329145513.35381-6-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Allow good work requests to be executed  (Bob Pearson)
A previous commit incorrectly added an 'if (!err)' before scheduling the requester task in rxe_post_send_kernel(). But if some send wrs were successfully added to the send queue before a bad wr, they might never get executed. This commit fixes this by scheduling the requester task if any wqes were successfully posted in rxe_post_send_kernel() in rxe_verbs.c. Link: https://lore.kernel.org/r/20240329145513.35381-5-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Fixes: 5bf944f24129 ("RDMA/rxe: Add error messages") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-04-22  RDMA/rxe: Fix seg fault in rxe_comp_queue_pkt  (Bob Pearson)
In rxe_comp_queue_pkt() an incoming response packet skb is enqueued to the resp_pkts queue, and then a decision is made whether to run the completer task inline or schedule it. Finally the skb is dereferenced to bump a 'hw' performance counter. This is wrong because if the completer task is already running in a separate thread it may have already processed the skb and freed it, which can cause a seg fault. This has been observed infrequently in testing at high scale. This patch fixes this by delaying the enqueuing of the packet until after the counter is accessed. Link: https://lore.kernel.org/r/20240329145513.35381-4-rpearsonhpe@gmail.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Fixes: 0b1e5b99a48b ("IB/rxe: Add port protocol stats") Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
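The fix, in outline (identifiers assumed): everything that dereferences the skb happens before the enqueue, because queueing publishes the skb to the completer running on another cpu:

    int must_sched = skb_queue_len(&qp->resp_pkts) > 0;

    if (must_sched)
            rxe_counter_inc(SKB_TO_PKT(skb)->rxe,
                            RXE_CNT_COMPLETER_SCHED);

    /* only after this point may the completer free the skb */
    skb_queue_tail(&qp->resp_pkts, skb);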