Age  Commit message  Author
2025-05-13  amd-xgbe: add support for new pci device id 0x1641  (Raju Rangoju)
Add support for new pci device id 0x1641 to register Renoir device with PCIe. Signed-off-by: Raju Rangoju <Raju.Rangoju@amd.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250509155325.720499-6-Raju.Rangoju@amd.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  amd-xgbe: Add XGBE_XPCS_ACCESS_V3 support to xgbe_pci_probe()  (Raju Rangoju)
A new version of the XPCS access routines has been introduced; add support to xgbe_pci_probe() to use these routines. Signed-off-by: Raju Rangoju <Raju.Rangoju@amd.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250509155325.720499-5-Raju.Rangoju@amd.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  amd-xgbe: add support for new XPCS routines  (Raju Rangoju)
Add the necessary support to enable the Renoir ethernet device. Since the BAR1 address cannot be used to access the XPCS registers on Renoir, use the smn functions. Some of the ethernet add-in cards have dual PHYs but share a single MDIO line (between the ports). In such cases, link inconsistencies are noticed during heavy traffic and during reboot stress tests. Using smn calls helps avoid such race conditions. Suggested-by: Sudheesh Mavila <sudheesh.mavila@amd.com> Signed-off-by: Raju Rangoju <Raju.Rangoju@amd.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250509155325.720499-4-Raju.Rangoju@amd.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  amd-xgbe: reorganize the xgbe_pci_probe() code path  (Raju Rangoju)
Reorganize the xgbe_pci_probe() code path to convert the if/else statements into a switch statement. This makes it easier to add future code and keeps the code cleaner. Signed-off-by: Raju Rangoju <Raju.Rangoju@amd.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250509155325.720499-3-Raju.Rangoju@amd.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
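A minimal sketch of the kind of conversion described above; the setup helpers are placeholders and this is not the actual amd-xgbe code:

    /* before: chained if/else on the XPCS access version */
    if (pdata->vdata->xpcs_access == XGBE_XPCS_ACCESS_V1)
            xgbe_setup_xpcs_v1(pdata);              /* hypothetical helper */
    else
            xgbe_setup_xpcs_v2(pdata);              /* hypothetical helper */

    /* after: a switch that a new access version extends with one more case */
    switch (pdata->vdata->xpcs_access) {
    case XGBE_XPCS_ACCESS_V1:
            xgbe_setup_xpcs_v1(pdata);
            break;
    case XGBE_XPCS_ACCESS_V2:
            xgbe_setup_xpcs_v2(pdata);
            break;
    default:
            return -EINVAL;
    }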
2025-05-13  amd-xgbe: reorganize the code of XPCS access  (Raju Rangoju)
The xgbe_{read/write}_mmd_regs_v* functions have common code which can be moved to helper functions. Add new helper functions to calculate the mmd_address for v1/v2 of xpcs access. Signed-off-by: Raju Rangoju <Raju.Rangoju@amd.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250509155325.720499-2-Raju.Rangoju@amd.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
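A sketch of what such a helper could look like, assuming the mmd_address computation follows the usual clause-45 pattern in this driver; the helper name is made up and field/macro names may differ:

    /* hypothetical helper shared by the v1/v2 read and write paths */
    static unsigned int xgbe_get_mmd_address(struct xgbe_prv_data *pdata,
                                             int mmd_reg)
    {
            if (mmd_reg & XGBE_ADDR_C45)
                    return mmd_reg & ~XGBE_ADDR_C45;

            return (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
    }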
2025-05-13  Merge branch 'tools-ynl-gen-support-sub-types-for-binary-attributes'  (Paolo Abeni)
Jakub Kicinski says: ==================== tools: ynl-gen: support sub-types for binary attributes Binary attributes have sub-type annotations which either indicate that the binary object should be interpreted as a raw / C array of a simple type (e.g. u32), or that it's a struct. Use this information in the C codegen instead of outputting void * for all binary attrs. It doesn't make a huge difference in the genl families, but in classic Netlink there are a lot more structs. v1: https://lore.kernel.org/20250508022839.1256059-1-kuba@kernel.org ==================== Link: https://patch.msgid.link/20250509154213.1747885-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  tools: ynl-gen: support struct for binary attributes  (Jakub Kicinski)
Support using a struct pointer for binary attrs. The len field is maintained because the structs may grow with newer kernel versions - or, which matters more, may be shorter if the binary is built against a newer uAPI than the kernel it is executed against. Since we are storing a pointer to a struct type, always allocate at least the amount of memory needed by the struct per current uAPI headers (unused memory is zeroed). Technically users should check the length field, but with modern ASAN checks, storing a short object under a struct pointer seems like a bad idea. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20250509154213.1747885-4-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
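Roughly what the generated user-space code might look like for a binary attribute with a struct sub-type; this is illustrative only, and the exact names ynl-gen emits may differ:

    /* illustrative ynl-gen output; 'some_uapi_struct' is a placeholder */
    struct family_op_rsp {
            struct {
                    __u32 info;     /* bytes actually received from the kernel */
            } _len;
            struct some_uapi_struct *info;  /* allocated at least sizeof(*info);
                                             * tail is zeroed if the kernel sent
                                             * a shorter object */
    };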
2025-05-13  tools: ynl-gen: auto-indent else  (Jakub Kicinski)
We auto-indent if statements (increase the indent of the subsequent line by 1); do the same thing for else branches without a block. There haven't been any else branches before, but we're about to add one. Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Link: https://patch.msgid.link/20250509154213.1747885-3-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  tools: ynl-gen: support sub-type for binary attributes  (Jakub Kicinski)
Sub-type annotation on binary attributes may indicate that the attribute carries an array of simple types (also referred to as "C array" in docs). Support rendering them as such in the C user code. For example for u32, instead of:

    struct {
            u32 arr;
    } _len;
    void *arr;

render:

    struct {
            u32 arr;
    } _count;
    __u32 *arr;

Note that count is the number of elements while len was the length in bytes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Link: https://patch.msgid.link/20250509154213.1747885-2-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  gpio: virtuser: fix potential out-of-bound write  (Markus Burri)
If the caller writes more characters than fit, count is truncated to the maximum available space by simple_write_to_buffer(). Check that the input size does not exceed the buffer size, and write a zero termination afterwards. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202505091754.285hHbr2-lkp@intel.com/ Signed-off-by: Markus Burri <markus.burri@mt.com> Link: https://lore.kernel.org/r/20250509150459.115489-1-markus.burri@mt.com Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
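A minimal sketch of the pattern described above, assuming a small fixed-size buffer; the names are placeholders, not the exact gpio-virtuser code:

    static ssize_t example_write(struct file *file, const char __user *user_buf,
                                 size_t count, loff_t *ppos)
    {
            char buf[32];
            ssize_t ret;

            /* reject input that cannot fit together with the terminator */
            if (count >= sizeof(buf))
                    return -EINVAL;

            ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, user_buf, count);
            if (ret < 0)
                    return ret;

            buf[ret] = '\0';        /* terminate after however much was copied */

            return ret;
    }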
2025-05-13  gpio: pca953x: fix IRQ storm on system wake up  (Emanuele Ghidoli)
If an input changes state during wake-up and is used as an interrupt source, the IRQ handler reads the volatile input register to clear the interrupt mask and deassert the IRQ line. However, the IRQ handler is triggered before access to the register is granted, causing the read operation to fail. As a result, the IRQ handler enters a loop, repeatedly printing the "failed reading register" message, until `pca953x_resume()` is eventually called, which restores the driver context and enables access to registers. Fix by disabling the IRQ line before entering suspend mode, and re-enabling it after the driver context is restored in `pca953x_resume()`. An IRQ can be disabled with disable_irq() and still wake the system as long as the IRQ has wake enabled, so the wake-up functionality is preserved. Fixes: b76574300504 ("gpio: pca953x: Restore registers after suspend/resume cycle") Cc: stable@vger.kernel.org Signed-off-by: Emanuele Ghidoli <emanuele.ghidoli@toradex.com> Signed-off-by: Francesco Dolcini <francesco.dolcini@toradex.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20250512095441.31645-1-francesco@dolcini.it Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
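A condensed sketch of the suspend/resume ordering described above; the existing context save/restore is elided and the structure layout is an assumption here:

    static int pca953x_suspend(struct device *dev)
    {
            struct pca953x_chip *chip = dev_get_drvdata(dev);

            /* keep the line quiet while registers are inaccessible;
             * a wake-enabled IRQ still wakes the system while disabled
             */
            if (chip->client->irq > 0)
                    disable_irq(chip->client->irq);

            /* ... existing regcache/regulator handling ... */
            return 0;
    }

    static int pca953x_resume(struct device *dev)
    {
            struct pca953x_chip *chip = dev_get_drvdata(dev);

            /* ... existing context restore ... */

            if (chip->client->irq > 0)
                    enable_irq(chip->client->irq);

            return 0;
    }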
2025-05-13  wifi: iwlwifi: cfg: reduce configuration struct size  (Johannes Berg)
We don't need the CORES() match nor the jacket one (which really doesn't even make sense to match to the RF anyway), and since the subdevice masks we care about are contiguous, we can encode them as highest and lowest bit set (automatically). By encoding whether to match or not as separate flags and taking advantage of the limited range of the RF type, step and ID, we can reduce the amount of memory needed for the table, while also making the logic (apart perhaps from the subdevice mask) easier to understand. This reduces the size of the module by about 1.5KiB on x86-64. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com> Link: https://patch.msgid.link/20250511195137.38a805a7c96f.Ieece00476cea6054b0827cd075eb8ba5943373df@changeid
2025-05-13  Merge branch 'device-memory-tcp-tx'  (Paolo Abeni)
Mina Almasry says:

====================
Device memory TCP TX

The TX path had been dropped from the Device Memory TCP patch series post RFCv1 [1], to make that series slightly easier to review. This series rebases the implementation of the TX path on top of the net_iov/netmem framework agreed upon and merged.

The motivation for the feature is thoroughly described in the docs & cover letter of the original proposal, so I don't repeat the lengthy descriptions here, but they are available in [1].

A full outline on usage of the TX path is detailed in the documentation included with this series. A test example is available via the kselftest included in the series as well.

The series is relatively small, as the TX path for this feature largely piggybacks on the existing MSG_ZEROCOPY implementation.

Patch Overview:
---------------

1. Documentation & tests to give a high level overview of the feature being added.
2. Add netmem refcounting needed for the TX path.
3. Devmem TX netlink API.
4. Devmem TX net stack implementation.
5. Make dma-buf unbinding scheduled work to handle TX cases where it gets freed from contexts where we can't sleep.
6. Add devmem TX documentation.
7. Add scaffolding enabling driver support for netmem_tx. Add helpers, driver feature flag, and docs to enable drivers to declare netmem_tx support.
8. Guard netmem_tx against being enabled against drivers that don't support it.
9. Add devmem_tx selftests. Add TX path to ncdevmem and add a test to devmem.py.

Testing:
--------

Testing is very similar to the devmem TCP RX path. The ncdevmem test used for the RX path is now augmented with client functionality to test the TX path.

* Test Setup:
  - Kernel: net-next with this RFC and the memory provider API cherry-picked locally.
  - Hardware: Google Cloud A3 VMs.
  - NIC: GVE with header split & RSS & flow steering support.

Performance results are not included with this version, unfortunately. I'm having issues running the dma-buf exporter driver against the upstream kernel on my test setup. The issues are specific to that dma-buf exporter and do not affect this patch series. I plan to follow up this series with perf fixes if the tests point to issues once they're up and running.

Special thanks to Stan who took a stab at rebasing the TX implementation on top of the netmem/net_iov framework merged. Parts of his proposal [2] that are reused as-is are forked off into their own patches to give full credit.

[1] https://lore.kernel.org/netdev/20240909054318.1809580-1-almasrymina@google.com/
[2] https://lore.kernel.org/netdev/20240913150913.1280238-2-sdf@fomichev.me/T/#m066dd407fbed108828e2c40ae50e3f4376ef57fd

Cc: sdf@fomichev.me
Cc: asml.silence@gmail.com
Cc: dw@davidwei.uk
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Victor Nogueira <victor@mojatatu.com>
Cc: Pedro Tammela <pctammela@mojatatu.com>
Cc: Samiullah Khawaja <skhawaja@google.com>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>

v14: https://lore.kernel.org/netdev/20250429032645.363766-1-almasrymina@google.com/
v13: https://lore.kernel.org/netdev/20250425204743.617260-1-almasrymina@google.com/
v12: https://lore.kernel.org/netdev/20250423031117.907681-1-almasrymina@google.com/
v11: https://lore.kernel.org/netdev/20250423031117.907681-1-almasrymina@google.com/
v10: https://lore.kernel.org/netdev/20250417231540.2780723-1-almasrymina@google.com/
v9: https://lore.kernel.org/netdev/20250415224756.152002-1-almasrymina@google.com/
v8: https://lore.kernel.org/netdev/20250308214045.1160445-1-almasrymina@google.com/
v7: https://lore.kernel.org/netdev/20250227041209.2031104-1-almasrymina@google.com/
v6: https://lore.kernel.org/netdev/20250222191517.743530-1-almasrymina@google.com/
v5: https://lore.kernel.org/netdev/20250220020914.895431-1-almasrymina@google.com/
v4: https://lore.kernel.org/netdev/20250203223916.1064540-1-almasrymina@google.com/
v3: https://patchwork.kernel.org/project/netdevbpf/list/?series=929401&state=*
RFC v2: https://patchwork.kernel.org/project/netdevbpf/list/?series=920056&state=*
====================

Link: https://patch.msgid.link/20250508004830.4100853-1-almasrymina@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  selftests: ncdevmem: Implement devmem TCP TX  (Mina Almasry)
Add support for devmem TX in ncdevmem. This is a combination of the ncdevmem changes from the devmem TCP series RFCv1, which included the TX path, and Stan's work to add the netlink API, refactored on top of his generic memory_provider support. Signed-off-by: Mina Almasry <almasrymina@google.com> Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250508004830.4100853-10-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: check for driver support in netmem TX  (Mina Almasry)
We should not enable netmem TX for drivers that don't declare support. Check for driver netmem TX support during devmem TX binding and fail if the driver does not have the functionality. Check for driver support in validate_xmit_skb as well. Signed-off-by: Mina Almasry <almasrymina@google.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250508004830.4100853-9-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  gve: add netmem TX support to GVE DQO-RDA mode  (Mina Almasry)
Use netmem_dma_*() helpers in gve_tx_dqo.c DQO-RDA paths to enable netmem TX support in that mode. Declare support for netmem TX in GVE DQO-RDA mode. Signed-off-by: Mina Almasry <almasrymina@google.com> Acked-by: Harshitha Ramamurthy <hramamurthy@google.com> Link: https://patch.msgid.link/20250508004830.4100853-8-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: enable driver support for netmem TX  (Mina Almasry)
Drivers need to make sure not to pass netmem dma-addrs to the dma-mapping API in order to support netmem TX. Add netmem_dma_*() helpers that enable special handling of netmem dma-addrs, which drivers can use. Document in netmem.rst what drivers need to do to support netmem TX. Signed-off-by: Mina Almasry <almasrymina@google.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250508004830.4100853-7-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: add devmem TCP TX documentation  (Mina Almasry)
Add documentation outlining the usage and details of the devmem TCP TX API. Signed-off-by: Mina Almasry <almasrymina@google.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Link: https://patch.msgid.link/20250508004830.4100853-6-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: devmem: Implement TX path  (Mina Almasry)
Augment dmabuf binding to be able to handle TX. In addition to all the RX binding, we also create the tx_vec needed for the TX path.

Provide an API for sendmsg to be able to send dmabufs bound to this device:

- Provide a new dmabuf_tx_cmsg which includes the dmabuf to send from.
- MSG_ZEROCOPY with SCM_DEVMEM_DMABUF cmsg indicates send from dma-buf.

Devmem is uncopyable, so piggyback off the existing MSG_ZEROCOPY implementation, while disabling instances where MSG_ZEROCOPY falls back to copying.

We additionally pipe the binding down to the new zerocopy_fill_skb_from_devmem, which fills a TX skb with net_iov netmems instead of the traditional page netmems. We also special-case skb_frag_dma_map to return the dma-address of these dmabuf net_iovs instead of attempting to map pages.

The TX path may release the dmabuf in a context where we cannot wait. This happens when the user unbinds a TX dmabuf while there are still references to its netmems in the TX path. In that case, the netmems will be put_netmem'd from a context where we can't unmap the dmabuf. Resolve this by making __net_devmem_dmabuf_binding_free schedule_work'd.

Based on work by Stanislav Fomichev <sdf@fomichev.me>. A lot of the meat of the implementation came from devmem TCP RFC v1[1], which included the TX path, but Stan did all the rebasing on top of netmem/net_iov.

Cc: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
Signed-off-by: Mina Almasry <almasrymina@google.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250508004830.4100853-5-almasrymina@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
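A rough user-space sketch of how a send from a bound dma-buf might look with this API; the dmabuf_tx_cmsg name comes from the commit message, while the field name and the use of iov_base as an offset into the dma-buf are assumptions here:

    struct dmabuf_tx_cmsg ddmabuf = { .dmabuf_id = tx_dmabuf_id };  /* assumed layout */
    struct iovec iov = { .iov_base = (void *)offset_in_dmabuf, .iov_len = len };
    char ctrl[CMSG_SPACE(sizeof(ddmabuf))] = {};
    struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_DEVMEM_DMABUF;      /* send from the bound dma-buf */
    cm->cmsg_len = CMSG_LEN(sizeof(ddmabuf));
    memcpy(CMSG_DATA(cm), &ddmabuf, sizeof(ddmabuf));

    sendmsg(sock_fd, &msg, MSG_ZEROCOPY);   /* devmem TX piggybacks on MSG_ZEROCOPY */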
2025-05-13  net: devmem: TCP tx netlink api  (Stanislav Fomichev)
Add bind-tx netlink call to attach dmabuf for TX; queue is not required, only ifindex and dmabuf fd for attachment. Signed-off-by: Stanislav Fomichev <sdf@fomichev.me> Signed-off-by: Mina Almasry <almasrymina@google.com> Link: https://patch.msgid.link/20250508004830.4100853-4-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: add get_netmem/put_netmem support  (Mina Almasry)
Currently net_iovs support only pp ref counts, and do not support a page ref equivalent. This is fine for the RX path as net_iovs are used exclusively with the pp and only pp refcounting is needed there. The TX path however does not use pp ref counts; thus, support for a get_page/put_page equivalent is needed for netmem.

Support get_netmem/put_netmem. Check the type of the netmem before passing it to page or net_iov specific code to obtain a page ref equivalent.

For dmabuf net_iovs, we obtain a ref on the underlying binding. This ensures the entire binding doesn't disappear until all the net_iovs have been put_netmem'ed. We do not need to track the refcount of individual dmabuf net_iovs as we don't allocate/free them from a pool similar to what the buddy allocator does for pages.

This code is written to be extensible by other net_iov implementers. get_netmem/put_netmem will check the type of the netmem and route it to the correct helper:

    pages -> [get|put]_page()
    dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
    new net_iovs -> new helpers

Signed-off-by: Mina Almasry <almasrymina@google.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250508004830.4100853-3-almasrymina@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
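A condensed sketch of that routing; this follows the description above and is not necessarily the exact upstream implementation:

    void get_netmem(netmem_ref netmem)
    {
            if (netmem_is_net_iov(netmem)) {
                    /* dmabuf net_iovs take a ref on the underlying binding */
                    net_devmem_get_net_iov(netmem_to_net_iov(netmem));
                    return;
            }
            get_page(netmem_to_page(netmem));
    }

    void put_netmem(netmem_ref netmem)
    {
            if (netmem_is_net_iov(netmem)) {
                    net_devmem_put_net_iov(netmem_to_net_iov(netmem));
                    return;
            }
            put_page(netmem_to_page(netmem));
    }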
2025-05-13  netmem: add niov->type attribute to distinguish different net_iov types  (Mina Almasry)
Later patches in the series add TX net_iovs where there is no pp associated, so we can't rely on niov->pp->mp_ops to tell the type of the net_iov. Add a type enum to the net_iov which tells us the net_iov type. Signed-off-by: Mina Almasry <almasrymina@google.com> Link: https://patch.msgid.link/20250508004830.4100853-2-almasrymina@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  ALSA: sh: SND_AICA should depend on SH_DMA_API  (Geert Uytterhoeven)
If CONFIG_SH_DMA_API=n:

    WARNING: unmet direct dependencies detected for G2_DMA
      Depends on [n]: SH_DREAMCAST [=y] && SH_DMA_API [=n]
      Selected by [y]:
      - SND_AICA [=y] && SOUND [=y] && SND [=y] && SND_SUPERH [=y] && SH_DREAMCAST [=y]

SND_AICA selects G2_DMA. As the latter depends on SH_DMA_API, the former should depend on SH_DMA_API, too.

Fixes: f477a538c14d07f8 ("sh: dma: fix kconfig dependency for G2_DMA")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202505131320.PzgTtl9H-lkp@intel.com/
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://patch.msgid.link/b90625f8a9078d0d304bafe862cbe3a3fab40082.1747121335.git.geert+renesas@glider.be
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2025-05-13  ALSA: usb-audio: Add sample rate quirk for Audioengine D1  (Christian Heusel)
A user reported on the Arch Linux Forums that their device emits the following message in the kernel journal, which is fixed by adding the quirk as submitted in this patch:

    kernel: usb 1-2: current rate 8436480 is different from the runtime rate 48000

There is also an entry for this product line that was added a long time ago. Their specific device has the following ID:

    $ lsusb | grep Audio
    Bus 001 Device 002: ID 1101:0003 EasyPass Industrial Co., Ltd Audioengine D1

Link: https://bbs.archlinux.org/viewtopic.php?id=305494
Fixes: 93f9d1a4ac593 ("ALSA: usb-audio: Apply sample rate quirk for Audioengine D1")
Cc: stable@vger.kernel.org
Signed-off-by: Christian Heusel <christian@heusel.eu>
Link: https://patch.msgid.link/20250512-audioengine-quirk-addition-v1-1-4c370af6eff7@heusel.eu
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2025-05-13  Merge branch 'address-eee-regressions-on-ksz-switches-since-v6-9-v6-14'  (Paolo Abeni)
Oleksij Rempel says: ==================== address EEE regressions on KSZ switches since v6.9 (v6.14+) This patch series addresses a regression in Energy Efficient Ethernet (EEE) handling for KSZ switches with integrated PHYs, introduced in kernel v6.9 by commit fe0d4fd9285e ("net: phy: Keep track of EEE configuration"). The first patch updates the DSA driver to allow phylink to properly manage PHY EEE configuration. Since integrated PHYs handle LPI internally and ports without integrated PHYs do not document MAC-level LPI support, dummy MAC LPI callbacks are provided. The second patch removes outdated EEE workarounds from the micrel PHY driver, as they are no longer needed with correct phylink handling. This series addresses the regression for mainline and kernels starting from v6.14. It is not easily possible to fully fix older kernels due to missing infrastructure changes. Tested on KSZ9893 hardware. ==================== Link: https://patch.msgid.link/20250504081434.424489-1-o.rempel@pengutronix.de Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: phy: micrel: remove KSZ9477 EEE quirks now handled by phylink  (Oleksij Rempel)
The KSZ9477 PHY driver contained workarounds for broken EEE capability advertisements by manually masking supported EEE modes and forcibly disabling EEE if MICREL_NO_EEE was set. With proper MAC-side EEE handling implemented via phylink, these quirks are no longer necessary. Remove MICREL_NO_EEE handling and the use of ksz9477_get_features(). This simplifies the PHY driver and avoids duplicated EEE management logic. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Cc: stable@vger.kernel.org # v6.14+ Link: https://patch.msgid.link/20250504081434.424489-3-o.rempel@pengutronix.de Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  net: dsa: microchip: let phylink manage PHY EEE configuration on KSZ switches  (Oleksij Rempel)
Phylink expects MAC drivers to provide LPI callbacks to properly manage Energy Efficient Ethernet (EEE) configuration. On KSZ switches with integrated PHYs, LPI is internally handled by hardware, while ports without integrated PHYs have no documented MAC-level LPI support. Provide dummy mac_disable_tx_lpi() and mac_enable_tx_lpi() callbacks to satisfy phylink requirements. Also, set default EEE capabilities during phylink initialization where applicable. Since phylink can now gracefully handle optional EEE configuration, remove the need for the MICREL_NO_EEE PHY flag. This change addresses issues caused by incomplete EEE refactoring introduced in commit fe0d4fd9285e ("net: phy: Keep track of EEE configuration"). It is not easily possible to fix all older kernels, but this patch ensures proper behavior on latest kernels and can be considered for backporting to stable kernels starting from v6.14. Fixes: fe0d4fd9285e ("net: phy: Keep track of EEE configuration") Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Cc: stable@vger.kernel.org # v6.14+ Link: https://patch.msgid.link/20250504081434.424489-2-o.rempel@pengutronix.de Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2025-05-13  nvmet: pci-epf: remove NVMET_PCI_EPF_Q_IS_SQ  (Damien Le Moal)
The flag NVMET_PCI_EPF_Q_IS_SQ is set but never used. Remove it. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2025-05-13  nvmet: pci-epf: improve debug message  (Damien Le Moal)
Improve the debug message of nvmet_pci_epf_create_cq() to indicate if a completion queue IRQ is disabled. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Niklas Cassel <cassel@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2025-05-13  nvmet: pci-epf: cleanup nvmet_pci_epf_raise_irq()  (Damien Le Moal)
There is no point in taking the controller irq_lock and calling nvmet_pci_epf_should_raise_irq() for a completion queue which does not have IRQ enabled (NVMET_PCI_EPF_Q_IRQ_ENABLED flag is not set). Move the test for the NVMET_PCI_EPF_Q_IRQ_ENABLED flag out of nvmet_pci_epf_should_raise_irq() to the top of nvmet_pci_epf_raise_irq() to return early when no IRQ should be raised. Also, use dev_err_ratelimited() to avoid a message storm under load when raising IRQs is failing. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Niklas Cassel <cassel@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
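A condensed sketch of the reworked flow; the error-path helper name is made up and the surrounding types and lock type are simplified:

    static void nvmet_pci_epf_raise_irq(struct nvmet_pci_epf_ctrl *ctrl,
                                        struct nvmet_pci_epf_queue *cq, bool force)
    {
            int ret;

            /* nothing to do for completion queues created with IRQ disabled */
            if (!test_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags))
                    return;

            mutex_lock(&ctrl->irq_lock);

            if (!nvmet_pci_epf_should_raise_irq(ctrl, cq, force))
                    goto unlock;

            ret = nvmet_pci_epf_do_raise_irq(ctrl, cq);     /* hypothetical helper */
            if (ret)
                    /* rate-limited to avoid a message storm under load */
                    dev_err_ratelimited(ctrl->dev, "Failed to raise IRQ (err %d)\n", ret);

    unlock:
            mutex_unlock(&ctrl->irq_lock);
    }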
2025-05-13  nvmet: pci-epf: do not fall back to using INTX if not supported  (Damien Le Moal)
Some endpoint PCIe controllers do not support raising legacy INTX interrupts. This support is indicated by the intx_capable field of struct pci_epc_features. Modify nvmet_pci_epf_raise_irq() to not automatically fallback to trying raising an INTX interrupt after an MSI or MSI-X error if the controller does not support INTX. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2025-05-13  nvmet: pci-epf: clear completion queue IRQ flag on delete  (Damien Le Moal)
The function nvmet_pci_epf_delete_cq() unconditionally calls nvmet_pci_epf_remove_irq_vector() even for completion queues that do not have interrupts enabled. Furthermore, for completion queues that do have IRQ enabled, deleting and re-creating the completion queue leaves the flag NVMET_PCI_EPF_Q_IRQ_ENABLED set, even if the completion queue is being re-created with IRQ disabled. Fix these issues by calling nvmet_pci_epf_remove_irq_vector() only if NVMET_PCI_EPF_Q_IRQ_ENABLED is set and make sure to always clear that flag. Fixes: 0faa0fe6f90e ("nvmet: New NVMe PCI endpoint function target driver") Cc: stable@vger.kernel.org Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2025-05-13  nvme-pci: acquire cq_poll_lock in nvme_poll_irqdisable  (Keith Busch)
We need to lock this queue for that condition because the timeout work executes per-namespace and can poll the poll CQ. Reported-by: Hannes Reinecke <hare@kernel.org> Closes: https://lore.kernel.org/all/20240902130728.1999-1-hare@kernel.org/ Fixes: a0fa9647a54e ("NVMe: add blk polling support") Signed-off-by: Keith Busch <kbusch@kernel.org> Signed-off-by: Daniel Wagner <wagi@kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
2025-05-13  nvme-pci: make nvme_pci_npages_prp() __always_inline  (Kees Cook)
The only reason nvme_pci_npages_prp() could be used as a compile-time known result in BUILD_BUG_ON() is because the compiler was always choosing to inline the function. Under special circumstances (sanitizer coverage functions disabled for __init functions on ARCH=um), the compiler decided to stop inlining it:

    drivers/nvme/host/pci.c: In function 'nvme_init':
    include/linux/compiler_types.h:557:45: error: call to '__compiletime_assert_678' declared with attribute error: BUILD_BUG_ON failed: nvme_pci_npages_prp() > NVME_MAX_NR_ALLOCATIONS
      557 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
          |                                     ^
    include/linux/compiler_types.h:538:25: note: in definition of macro '__compiletime_assert'
      538 | prefix ## suffix();                             \
          |         ^~~~~~
    include/linux/compiler_types.h:557:9: note: in expansion of macro '_compiletime_assert'
      557 | _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
          | ^~~~~~~~~~~~~~~~~~~
    include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
       39 | #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
          |                                     ^~~~~~~~~~~~~~~~~~
    include/linux/build_bug.h:50:9: note: in expansion of macro 'BUILD_BUG_ON_MSG'
       50 | BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
          | ^~~~~~~~~~~~~~~~
    drivers/nvme/host/pci.c:3804:9: note: in expansion of macro 'BUILD_BUG_ON'
     3804 | BUILD_BUG_ON(nvme_pci_npages_prp() > NVME_MAX_NR_ALLOCATIONS);
          | ^~~~~~~~~~~~

Force it to be __always_inline to make sure it is always available for use with BUILD_BUG_ON().

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202505061846.12FMyRjj-lkp@intel.com/
Fixes: c372cdd1efdf ("nvme-pci: iod npages fits in s8")
Signed-off-by: Kees Cook <kees@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
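A generic illustration of the pattern, with placeholder names and numbers rather than the nvme code: a helper used inside BUILD_BUG_ON() must fold to a compile-time constant, which __always_inline guarantees even when the compiler's heuristics would otherwise skip inlining:

    static __always_inline int max_prp_allocations(void)   /* hypothetical helper */
    {
            return DIV_ROUND_UP(8 * 128, 4096 - 8);         /* placeholder math */
    }

    static int __init example_init(void)
    {
            BUILD_BUG_ON(max_prp_allocations() > 8);        /* placeholder limit */
            return 0;
    }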
2025-05-12  scsi: sd_zbc: block: Respect bio vector limits for REPORT ZONES buffer  (Steve Siwinski)
The REPORT ZONES buffer size is currently limited by the HBA's maximum segment count to ensure the buffer can be mapped. However, the block layer further limits the number of iovec entries to 1024 when allocating a bio. To avoid allocation of buffers too large to be mapped, further restrict the maximum buffer size to BIO_MAX_INLINE_VECS. Replace the UIO_MAXIOV symbolic name with the more contextually appropriate BIO_MAX_INLINE_VECS. Fixes: b091ac616846 ("sd_zbc: Fix report zones buffer allocation") Cc: stable@vger.kernel.org Signed-off-by: Steve Siwinski <ssiwinski@atto.com> Link: https://lore.kernel.org/r/20250508200122.243129-1-ssiwinski@atto.com Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2025-05-12  net: dsa: b53: implement setting ageing time  (Jonas Gorski)
Switches supported by b53 allow configuring an ageing time between 1 and 1,048,575 seconds, so add an appropriate setter. This allows b53 to pass the FDB learning test for both vlan-aware and vlan-unaware bridges. Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://patch.msgid.link/20250510092211.276541-1-jonas.gorski@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: mlx4: add SOF_TIMESTAMPING_TX_SOFTWARE flag when getting ts info  (Jason Xing)
As mlx4 has implemented skb_tx_timestamp() in mlx4_en_xmit(), the SOF_TIMESTAMPING_TX_SOFTWARE flag needs to be reported when users query timestamping information. Signed-off-by: Jason Xing <kernelxing@tencent.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20250510093442.79711-1-kerneljasonxing@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
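A condensed sketch of the capability set an ethtool get_ts_info() callback would report after such a change; the exact combination of hardware flags is an assumption here:

    info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE |  /* newly advertised */
                            SOF_TIMESTAMPING_RX_SOFTWARE |
                            SOF_TIMESTAMPING_SOFTWARE |
                            SOF_TIMESTAMPING_TX_HARDWARE |
                            SOF_TIMESTAMPING_RX_HARDWARE |
                            SOF_TIMESTAMPING_RAW_HARDWARE;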
2025-05-12  netlink: fix policy dump for int with validation callback  (Jakub Kicinski)
Recent devlink change added validation of an integer value via NLA_POLICY_VALIDATE_FN, for sparse enums. Handle this in policy dump. We can't extract any info out of the callback, so report only the type. Fixes: 429ac6211494 ("devlink: define enum for attr types of dynamic attributes") Reported-by: syzbot+01eb26848144516e7f0a@syzkaller.appspotmail.com Link: https://patch.msgid.link/20250509212751.1905149-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux  (Jakub Kicinski)
Tony Nguyen says:

====================
Prepare for Intel IPU E2000 (GEN3)

This is the first part in introducing RDMA support for idpf.

----------------------------------------------------------------
Tatyana Nikolova says:

To align with review comments, the patch series introducing RDMA RoCEv2 support for the Intel Infrastructure Processing Unit (IPU) E2000 line of products is going to be submitted in three parts:

1. Modify ice to use specific and common IIDC definitions and pass a core device info to irdma.
2. Add RDMA support to idpf and modify idpf to use specific and common IIDC definitions and pass a core device info to irdma.
3. Add RDMA RoCEv2 support for the E2000 products, referred to as GEN3, to irdma.

This first part is a 5 patch series based on the original "iidc/ice/irdma: Update IDC to support multiple consumers" patch to allow for multiple CORE PCI drivers, using the auxbus.

Patches:
1) Move header file to new name for clarity and replace ice specific DSCP define with a kernel equivalent one in irdma
2) Unify naming convention
3) Separate header file into common and driver specific info
4) Replace ice specific DSCP define with a kernel equivalent one in ice
5) Implement core device info struct and update drivers to use it
----------------------------------------------------------------

v1: https://lore.kernel.org/20250505212037.2092288-1-anthony.l.nguyen@intel.com

IWL reviews:
[v5] https://lore.kernel.org/20250416021549.606-1-tatyana.e.nikolova@intel.com
[v4] https://lore.kernel.org/20250225050428.2166-1-tatyana.e.nikolova@intel.com
[v3] https://lore.kernel.org/20250207194931.1569-1-tatyana.e.nikolova@intel.com
[v2] https://lore.kernel.org/20240824031924.421-1-tatyana.e.nikolova@intel.com
[v1] https://lore.kernel.org/20240724233917.704-1-tatyana.e.nikolova@intel.com

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux:
  iidc/ice/irdma: Update IDC to support multiple consumers
  ice: Replace ice specific DSCP mapping num with a kernel define
  iidc/ice/irdma: Break iidc.h into two headers
  iidc/ice/irdma: Rename to iidc_* convention
  iidc/ice/irdma: Rename IDC header file
====================

Link: https://patch.msgid.link/20250509200712.2911060-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  Merge branch 'net-vertexcom-mse102x-improve-rx-handling'  (Jakub Kicinski)
Stefan Wahren says: ==================== net: vertexcom: mse102x: Improve RX handling This series is the second part of two series for the Vertexcom driver. It contains some improvements for the RX handling of the Vertexcom MSE102x. ==================== Link: https://patch.msgid.link/20250509120435.43646-1-wahrenst@gmx.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: vertexcom: mse102x: Simplify mse102x_rx_pkt_spi  (Stefan Wahren)
The function mse102x_rx_pkt_spi is used in two cases:

* initial polling to re-arm the RX interrupt
* level-based RX interrupt handler

Neither of them needs an open-coded retry mechanism. In the first case the function can be called again if the return code is IRQ_NONE. This keeps the error behavior during netdev open. In the second case the proper retry is handled implicitly by the SPI interrupt. So drop the retry code and simplify the receive path.

Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Link: https://patch.msgid.link/20250509120435.43646-7-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: vertexcom: mse102x: Return code for mse102x_rx_pkt_spi  (Stefan Wahren)
The MSE102x doesn't provide any interrupt register, so the only way to handle the level interrupt is to fetch the whole packet from the MSE102x internal buffer via SPI. So in case the interrupt handler fails to do this, it should return IRQ_NONE. This allows the core to disable the interrupt in case the issue persists and prevent an interrupt storm. Signed-off-by: Stefan Wahren <wahrenst@gmx.net> Link: https://patch.msgid.link/20250509120435.43646-6-wahrenst@gmx.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
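A condensed sketch of the handler shape this implies, assuming mse102x_rx_pkt_spi() now returns an irqreturn_t:

    static irqreturn_t mse102x_irq(int irq, void *_mse)
    {
            struct mse102x_net *mse = _mse;

            /* fetching the packet is the only way to deassert the level IRQ;
             * a failed fetch is reported as IRQ_NONE so the core can disable
             * a stuck line instead of looping forever
             */
            return mse102x_rx_pkt_spi(mse);
    }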
2025-05-12  net: vertexcom: mse102x: Implement flag for valid CMD  (Stefan Wahren)
After removal of the invalid command counter, only a relevant debug message is left, which can be cumbersome to rely on. So add a new flag to debugfs which indicates whether the driver has ever received a valid CMD. This helps to differentiate between general and temporary receive issues. Signed-off-by: Stefan Wahren <wahrenst@gmx.net> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20250509120435.43646-5-wahrenst@gmx.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: vertexcom: mse102x: Drop invalid cmd stats  (Stefan Wahren)
There are several reasons for an invalid command response by the MSE102x:

* SPI line interference
* MSE102x is in reset or has no firmware
* MSE102x is busy
* no packet in the MSE102x receive buffer

So the counter for invalid commands isn't very helpful without further context. Drop the confusing statistics counter, but keep the debug messages about "unexpected response" in order to debug possible hardware issues.

Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250509120435.43646-4-wahrenst@gmx.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: vertexcom: mse102x: Add warning about IRQ trigger type  (Stefan Wahren)
The example in the initial DT binding of the Vertexcom MSE102x suggested an IRQ_TYPE_EDGE_RISING trigger, which is wrong. So warn everyone to fix their device tree to use a level-based IRQ. Signed-off-by: Stefan Wahren <wahrenst@gmx.net> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20250509120435.43646-3-wahrenst@gmx.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  dt-bindings: vertexcom-mse102x: Fix IRQ type in example  (Stefan Wahren)
According to the MSE102x documentation the trigger type is a high level. Signed-off-by: Stefan Wahren <wahrenst@gmx.net> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Conor Dooley <conor.dooley@microchip.com> Link: https://patch.msgid.link/20250509120435.43646-2-wahrenst@gmx.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: dsa: sja1105: discard incoming frames in BR_STATE_LISTENING  (Vladimir Oltean)
It has been reported that when under a bridge with stp_state=1, the logs get spammed with this message: [ 251.734607] fsl_dpaa2_eth dpni.5 eth0: Couldn't decode source port Further debugging shows the following info associated with packets: source_port=-1, switch_id=-1, vid=-1, vbid=1 In other words, they are data plane packets which are supposed to be decoded by dsa_tag_8021q_find_port_by_vbid(), but the latter (correctly) refuses to do so, because no switch port is currently in BR_STATE_LEARNING or BR_STATE_FORWARDING - so the packet is effectively unexpected. The error goes away after the port progresses to BR_STATE_LEARNING in 15 seconds (the default forward_time of the bridge), because then, dsa_tag_8021q_find_port_by_vbid() can correctly associate the data plane packets with a plausible bridge port in a plausible STP state. Re-reading IEEE 802.1D-1990, I see the following: "4.4.2 Learning: (...) The Forwarding Process shall discard received frames." IEEE 802.1D-2004 further clarifies: "DISABLED, BLOCKING, LISTENING, and BROKEN all correspond to the DISCARDING port state. While those dot1dStpPortStates serve to distinguish reasons for discarding frames, the operation of the Forwarding and Learning processes is the same for all of them. (...) LISTENING represents a port that the spanning tree algorithm has selected to be part of the active topology (computing a Root Port or Designated Port role) but is temporarily discarding frames to guard against loops or incorrect learning." Well, this is not what the driver does - instead it sets mac[port].ingress = true. To get rid of the log spam, prevent unexpected data plane packets to be received by software by discarding them on ingress in the LISTENING state. In terms of blame attribution: the prints only date back to commit d7f9787a763f ("net: dsa: tag_8021q: add support for imprecise RX based on the VBID"). However, the settings would permit a LISTENING port to forward to a FORWARDING port, and the standard suggests that's not OK. Fixes: 640f763f98c2 ("net: dsa: sja1105: Add support for Spanning Tree Protocol") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20250509113816.2221992-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
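A condensed sketch of the per-state port settings after the fix, with LISTENING grouped with the other DISCARDING states; the field names follow the driver's static config, but the surrounding code is omitted and simplified:

    switch (state) {
    case BR_STATE_DISABLED:
    case BR_STATE_BLOCKING:
    case BR_STATE_LISTENING:
            mac[port].ingress   = false;    /* discard received frames */
            mac[port].egress    = false;
            mac[port].dyn_learn = false;
            break;
    case BR_STATE_LEARNING:
            mac[port].ingress   = true;
            mac[port].egress    = false;
            mac[port].dyn_learn = true;
            break;
    case BR_STATE_FORWARDING:
            mac[port].ingress   = true;
            mac[port].egress    = true;
            mac[port].dyn_learn = true;
            break;
    }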
2025-05-12  net: Lock lower level devices when updating features  (Cosmin Ratiu)
__netdev_update_features() expects the netdevice to be ops-locked, but it gets called recursively on the lower-level netdevices to sync their features, and nothing locks those. This commit fixes that, with the assumption that it shouldn't be possible for both higher-level and lower-level netdevices to require the instance lock, because that would lead to lock dependency warnings.

Without this, playing with higher-level (e.g. vxlan) netdevices on top of netdevices with instance locking enabled can run into issues:

    WARNING: CPU: 59 PID: 206496 at ./include/net/netdev_lock.h:17 netif_napi_add_weight_locked+0x753/0xa60
    [...]
    Call Trace:
    <TASK>
    mlx5e_open_channel+0xc09/0x3740 [mlx5_core]
    mlx5e_open_channels+0x1f0/0x770 [mlx5_core]
    mlx5e_safe_switch_params+0x1b5/0x2e0 [mlx5_core]
    set_feature_lro+0x1c2/0x330 [mlx5_core]
    mlx5e_handle_feature+0xc8/0x140 [mlx5_core]
    mlx5e_set_features+0x233/0x2e0 [mlx5_core]
    __netdev_update_features+0x5be/0x1670
    __netdev_update_features+0x71f/0x1670
    dev_ethtool+0x21c5/0x4aa0
    dev_ioctl+0x438/0xae0
    sock_ioctl+0x2ba/0x690
    __x64_sys_ioctl+0xa78/0x1700
    do_syscall_64+0x6d/0x140
    entry_SYSCALL_64_after_hwframe+0x4b/0x53
    </TASK>

Fixes: 7e4d784f5810 ("net: hold netdev instance lock during rtnetlink operations")
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250509072850.2002821-1-cratiu@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: phy: dp83867: use 2ns delay if not specified in DTB  (Matthias Schiffer)
Most PHY drivers default to a 2ns delay if internal delay is requested and no value is specified. Having a default value makes sense, as it allows a Device Tree to only care about board design (whether there are delays on the PCB or not), and not whether the delay is added on the MAC or the PHY side when needed. Whether the delays are actually applied is controlled by the DP83867_RGMII_*_CLK_DELAY_EN flags, so the behavior is only changed in configurations that would previously be rejected with -EINVAL. Suggested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Matthias Schiffer <matthias.schiffer@ew.tq-group.com> Link: https://patch.msgid.link/e2509b248a11ee29ea408a50c231da4c1fa0863b.1746612711.git.matthias.schiffer@ew.tq-group.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-05-12  net: phy: dp83867: remove check of delay strap configuration  (Matthias Schiffer)
The check that intended to handle "rgmii" PHY mode differently from the RGMII modes with internal delay never worked as intended:

- added in commit 2a10154abcb7 ("net: phy: dp83867: Add TI dp83867 phy"): a logic error caused the condition to always evaluate to true
- changed in commit a46fa260f6f5 ("net: phy: dp83867: Fix warning check for setting the internal delay"): now the condition incorrectly evaluates to false for rgmii-txid
- removed in commit 2b892649254f ("net: phy: dp83867: Set up RGMII TX delay")

Around the time of the removal, commit c11669a2757e ("net: phy: dp83867: Rework delay rgmii delay handling") started clearing the delay enable flags in RGMIICTL. The change attempted to preserve the historical behavior of not disabling internal delays with "rgmii" PHY mode and also documented this in a comment, but due to a conflict between "Set up RGMII TX delay" and "Rework delay rgmii delay handling", the behavior dp83867_verify_rgmii_cfg() warned about (and that was also described in a comment in dp83867_config_init()) disappeared in the following merge of net into net-next in commit b4b12b0d2f02 ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net").

While it doesn't appear that this breaking change was intentional, it has been like this since 2019, and the new behavior of disabling the delays with "rgmii" PHY mode is generally desirable - in particular with MAC drivers that have to fix up the delay mode, resulting in the PHY driver not even seeing the same mode that was specified in the Device Tree. Remove the obsolete check and comment.

Signed-off-by: Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
Link: https://patch.msgid.link/8a286207cd11b460bb0dbd27931de3626b9d7575.1746612711.git.matthias.schiffer@ew.tq-group.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>