2025-06-16libeth: xdp, xsk: access adjacent u32s as u64 where applicableAlexander Lobakin
On 64-bit systems, writing/reading one u64 is faster than two u32s even when they are adjacent in a struct. Compilers won't guarantee they will combine those: I observed both successful and unsuccessful attempts with both GCC and Clang, and it's not easy to say what it depends on. There are a few places in libeth_xdp that gain up to several percent from combined access (in both performance and object code size, especially when unrolling). Add __LIBETH_WORD_ACCESS and use it there on LE. Drivers are free to optimize HW-specific callbacks under the same definition. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
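For illustration, a minimal sketch of the kind of combined access this enables; the struct and helper below are hypothetical, not the actual libeth_xdp layout:

/* Fill two adjacent u32 fields with a single u64 store on 64-bit
 * little-endian builds; names are illustrative only.
 */
struct buf_meta {
	u32	offset;
	u32	len;
} __aligned(sizeof(u64));

static inline void buf_meta_set(struct buf_meta *m, u32 offset, u32 len)
{
#ifdef __LIBETH_WORD_ACCESS
	/* LE: the first field occupies the low 32 bits of the word */
	*(u64 *)m = ((u64)len << 32) | offset;
#else
	m->offset = offset;
	m->len = len;
#endif
}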
2025-06-16libeth: xsk: add XSkFQ refill and XSk wakeup helpersAlexander Lobakin
XSkFQ refill is pretty generic across the drivers minus FQ descriptor filling and can easily be unified with one inline callback. XSk wakeup usually is not, but here, instead of the commonly used "SW interrupts", I picked firing an IPI. In most tests, it showed better performance; it also gives userspace better control over which CPU will handle the xmit, as SW interrupts honor IRQ affinity no matter which core produces XSk xmit descs (while XDPSQs are associated 1:1 with cores having the same ID). Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth: xsk: add XSk Rx processing supportAlexander Lobakin
Add XSk counterparts for preparing an XSk &libeth_xdp_buff (adding the head and frags), running the program, and handling the verdict, including XDP_PASS. Shortcuts in comparison with regular Rx: frags and all verdicts except XDP_REDIRECT are under unlikely() and out of line; there are no checks for XDP program presence, as one is always attached for XSk. Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # optimizations Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth: xsk: add XSk xmit functionsAlexander Lobakin
Reuse the core sending functions to send XSk xmit frames. Both metadata and no-metadata pools/drivers are supported. libeth_xdp also provides generic XSk metadata ops, currently with checksum offload only and for cases when the HW doesn't require supplying L3/L4 checksum offsets. Drivers are free to pass their own ops. &libeth_xdp_tx_bulk is not used here as it would be redundant; pool->tx_descs are accessed directly. A fake "libeth_xsktmo" is needed to hide implementation details from the drivers when they want to use the generic ops: the original struct is defined in the same file where dev->xsk_tx_metadata_ops gets set, to avoid duplicating the slowpath; at the same time, XSk xmit functions use a local "fast" copy to inline the XMO callbacks. The Tx descriptor filling loop is unrolled by 8. Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # optimizations Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
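A rough sketch of the xmit-path shape using the generic XSk pool helpers; my_fill_desc() is a hypothetical placeholder for the HW-specific descriptor filling, and the unrolling and XMO wiring of the real code are omitted:

/* Assumes <net/xdp_sock_drv.h> and <net/xsk_buff_pool.h> */
void my_fill_desc(dma_addr_t dma, u32 len);	/* placeholder */

static u32 my_xsk_xmit(struct xsk_buff_pool *pool, u32 budget)
{
	u32 i, n;

	n = xsk_tx_peek_release_desc_batch(pool, budget);
	for (i = 0; i < n; i++) {
		const struct xdp_desc *xd = &pool->tx_descs[i];
		dma_addr_t dma = xsk_buff_raw_get_dma(pool, xd->addr);

		xsk_buff_raw_dma_sync_for_device(pool, dma, xd->len);
		my_fill_desc(dma, xd->len);
	}

	return n;	/* caller kicks the HW if n != 0 */
}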
2025-06-16libeth: xsk: add XSk XDP_TX sending helpersAlexander Lobakin
Add XSk counterparts for XDP_TX buffer sending and completion. The same base structures and functions from the libeth_xdp core are used, adjusted for the fact that XSk Rx always operates on &xdp_buff_xsk for both head and frags. Unlike regular Rx, unlikely() is used for frags here, as header split gives no benefit for XSk Rx, at least for now. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth: xdp: add RSS hash hint and XDP features setup helpersAlexander Lobakin
End the XDP section by adding helpers to set up XDP features, flip .ndo_xdp_xmit() support at runtime (in case it's not always on), and calculate the queue clean/refill threshold. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
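For context, the generic feature-flag API such helpers typically wrap; this is a sketch, not the libeth code, and the flag combination is illustrative:

static void my_set_xdp_features(struct net_device *dev, bool have_xdpsqs)
{
	xdp_set_features_flag(dev, NETDEV_XDP_ACT_BASIC |
				   NETDEV_XDP_ACT_REDIRECT |
				   NETDEV_XDP_ACT_RX_SG);

	/* advertise/withdraw .ndo_xdp_xmit() (redirect target) at runtime */
	if (have_xdpsqs)
		xdp_features_set_redirect_target(dev, true);
	else
		xdp_features_clear_redirect_target(dev);
}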
2025-06-16libeth: xdp: add templates for building driver-side callbacksAlexander Lobakin
Defining driver-specific functions to pass to libeth_xdp functions can produce boilerplate and/or look a bit cryptic with all those layers of indirection. On the other hand, this indirection is needed to allow compilers to uninline big functions even when passed to __always_inline helpers (too much inlining also hurts performance in some cases), plus to reuse some XDP helpers in XSk code. Add macros to quickly build them, with detailed kdoc. They take the names of the actual callbacks for filling a Tx descriptor and other purely HW-specific things and wrap them appropriately. LIBETH_XDP_DEFINE_{BEGIN,END}() is unfortunately needed for GCC 8+ to let the drivers control which functions will be static and which global without hitting `-Wold-style-declaration`. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth: xdp: add XDP prog run and verdict result handlingAlexander Lobakin
Running a prog and handling the verdicts, up to napi_gro_receive(), is also pretty generic code that doesn't really differ between vendors (except for Tx descriptor filling and Rx descriptor parsing). Define a couple of inlines to do that. The inline callbacks a driver needs to pass are: Tx descriptor filling for XDP_TX, populating the skb with the descriptor data for XDP_PASS, and finalizing XDPSQs after the polling loop for XDP_TX (kicking the HW to start sending). The populate callback receives only &libeth_xdp_buff, assuming the buff::desc pointer is enough; plus, you can always get the corresponding Rx queue structure via container_of(buff::rxq). If not, a driver can extend the buff with more fields directly on the stack without touching libeth_xdp definitions. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
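A sketch of the generic verdict handling such inlines wrap; my_xdp_tx() stands in for the driver's XDP_TX callback and is hypothetical:

bool my_xdp_tx(struct net_device *dev, struct xdp_buff *xdp);	/* placeholder */

static u32 my_run_xdp(struct net_device *dev, struct bpf_prog *prog,
		      struct xdp_buff *xdp)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
		break;			/* caller builds an skb from the buff */
	case XDP_TX:
		if (unlikely(!my_xdp_tx(dev, xdp)))
			goto drop;
		break;
	case XDP_REDIRECT:
		if (unlikely(xdp_do_redirect(dev, xdp, prog)))
			goto drop;
		break;
	default:
		bpf_warn_invalid_xdp_action(dev, prog, act);
		fallthrough;
	case XDP_ABORTED:
drop:
		trace_xdp_exception(dev, prog, act);
		fallthrough;
	case XDP_DROP:
		xdp_return_buff(xdp);
		break;
	}

	return act;
}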
2025-06-16libeth: xdp: add helpers for preparing/processing &libeth_xdp_buffAlexander Lobakin
Add convenience helpers to build an &xdp_buff. This means: general initialization before the NAPI loop, adding the head, adding frags, etc. libeth_xdp_process_buff() is the same as what everybody has in their drivers: dma_sync_for_cpu(); if (!frag) { add_head(); prefetch(); } else { add_frag(); } Note that I don't use net_prefetch(), sticking to the original prefetch(). In none of my tests did prefetching 128 bytes yield better perf than 64 bytes. That might differ if the headers are huge enough, but then additional tunneling etc. overhead takes place and you won't win a lot either way. &libeth_xdp_stash is for cases when you exit the polling loop without finishing building the buff. If that happens, you need to store the buffer in the queue structure until the next loop and then restore it. It makes no sense to place a full &xdp_buff there. Define a minimal structure which stores only the fields essential to restore it. I was able to pack it into 16 bytes, which is only 8 bytes bigger than `struct sk_buff *skb` on x86_64. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
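An expanded sketch of the add_head/add_frag pattern quoted above, using the generic xdp_buff helpers; DMA syncing and the driver's buffer bookkeeping are omitted, and xdp_init_buff() is assumed to have been done once before the NAPI loop:

static void my_add_buff(struct xdp_buff *xdp, struct page *page,
			u32 offset, u32 len)
{
	void *data = page_address(page) + offset;

	if (!xdp->data) {				/* head */
		xdp_prepare_buff(xdp, data - XDP_PACKET_HEADROOM,
				 XDP_PACKET_HEADROOM, len, true);
		prefetch(data);
	} else {					/* frag */
		struct skb_shared_info *sinfo;

		sinfo = xdp_get_shared_info_from_buff(xdp);
		if (!xdp_buff_has_frags(xdp)) {
			sinfo->nr_frags = 0;
			sinfo->xdp_frags_size = 0;
			xdp_buff_set_frags_flag(xdp);
		}

		__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++,
					   page, offset, len);
		sinfo->xdp_frags_size += len;
	}
}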
2025-06-16libeth: xdp: add XDPSQ cleanup timersAlexander Lobakin
When XDP Tx queues are not interrupt-driven but use lazy cleaning, i.e. clean only when there are fewer than `threshold` free descriptors left, we also need cleanup timers to avoid &xdp_buff and &xdp_frame stalling for too long, especially with Page Pool (it warns about inflight pages every 60 seconds). Let's say we sent 256 frames and don't need to send more, but we clean only when the number of pending items >= 384. In that case, those 256 will stall until 128 more are sent. For this, add simple helpers to run a timer which will clean the queue regardless, 1 second after the last send. The timer is armed when finalizing the queue. As long as there is regular active traffic, the timer doesn't fire. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
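A minimal sketch of such a watchdog using the standard timer API; the queue structure and my_xdpsq_clean() are placeholders, not libeth's:

#include <linux/timer.h>

struct my_xdpsq {
	struct timer_list	cleanup_timer;
	/* descriptors, counters, ... */
};

void my_xdpsq_clean(struct my_xdpsq *sq);	/* driver-specific completion */

static void my_xdpsq_cleanup_timer(struct timer_list *t)
{
	struct my_xdpsq *sq = container_of(t, struct my_xdpsq, cleanup_timer);

	my_xdpsq_clean(sq);
}

static void my_xdpsq_init(struct my_xdpsq *sq)
{
	timer_setup(&sq->cleanup_timer, my_xdpsq_cleanup_timer, 0);
}

static void my_xdpsq_finalize(struct my_xdpsq *sq)
{
	/* kick the HW here, then (re)arm the watchdog for 1 second */
	mod_timer(&sq->cleanup_timer, jiffies + HZ);
}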
2025-06-16libeth: xdp: add XDPSQ locking helpersAlexander Lobakin
Unfortunately, it's not always possible to allocate max(num_rxqs, nr_cpu_ids) XDPSQs even on high-end NICs. To mitigate this, add simple locking helpers to libeth_xdp. As long as XDPSQs are not shared, the whole functionality is gated behind a static key. Otherwise, each bulk flush locks the queue for the time of cleaning and filling the descriptors. As long as a particular queue is not used by more than 1 CPU, the impact is minimal (a runtime boolean check twice per 16+ descriptors). Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # static key Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
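A sketch of the "lock only when shared" idea: a static key gates the whole path, and a per-queue bool plus spinlock protect queues that really are shared. Names are illustrative, not libeth's:

DEFINE_STATIC_KEY_FALSE(my_xdpsq_share);

struct my_xdpsq_lock {
	spinlock_t	lock;
	bool		share;
};

static inline void my_xdpsq_acquire(struct my_xdpsq_lock *l)
{
	if (static_branch_unlikely(&my_xdpsq_share) && l->share)
		spin_lock(&l->lock);
}

static inline void my_xdpsq_release(struct my_xdpsq_lock *l)
{
	if (static_branch_unlikely(&my_xdpsq_share) && l->share)
		spin_unlock(&l->lock);
}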
2025-06-16libeth: xdp: add XDPSQE completion helpersAlexander Lobakin
Similarly to libeth_tx_complete(), add libeth_xdp_complete_tx() to handle XDP_TX and xmit buffers. Both use bulk return under the hood. Also add an out-of-line libeth_tx_complete_any(), which handles both regular and XDP frames (if libeth_xdp is loaded), for example to call on queue destroy, where we need convenience rather than inlining. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth: xdp: add .ndo_xdp_xmit() helpersAlexander Lobakin
Add helpers for implementing .ndo_xdp_xmit(). Same as for XDP_TX, accumulate up to 16 DMA-mapped frames on the stack, then flush. If DMA mapping fails for some reason, don't try mapping further frames, but still flush what was already prepared. The DMA address of a head frame is stored in its headroom, assuming there is enough of it for an 8-byte (or 4-byte) value. In addition to the @prep and @xmit driver callbacks used for XDP_TX, xmit also needs @finalize to kick the XDPSQ after filling. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth: xdp: add XDP_TX buffers sendingAlexander Lobakin
Start adding XDP-specific code to libeth, namely handling XDP_TX buffers (only sending). The idea is that we accumulate up to 16 buffers on the stack, then, if either the limit is reached or the polling is finished, flush them at once with only one XDPSQ cleaning (if needed). The main sending function is aware of the sending budget and already has all the info to send the buffers, so it can't fail. Drivers need to provide 2 inline callbacks to the main sending function: one for cleaning an XDPSQ and one for filling descriptors; the library code takes care of the rest. Note that unlike the generic code, multi-buffer support here is not wrapped with unlikely() so as not to hurt header split setups. &libeth_xdp_buff is a simple extension over &xdp_buff which has a direct pointer to the corresponding Rx descriptor (and, luckily, is exactly 1 cacheline in size with 16-byte alignment on x86_64). Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # xmit logic Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
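The on-stack bulking idea in its simplest form, as a sketch with illustrative structures (the real libeth_xdp_tx_bulk carries more state and the flush callbacks):

#define MY_XDP_TX_BULK	16

struct my_xdp_tx_bulk {
	struct net_device	*dev;
	u32			count;
	struct xdp_buff		*bufs[MY_XDP_TX_BULK];
};

static void my_xdp_tx_flush(struct my_xdp_tx_bulk *bq)
{
	/* clean the XDPSQ if needed, then fill one descriptor per buff via
	 * the driver's inline callback; can't fail at this point
	 */
	bq->count = 0;
}

static void my_xdp_tx_queue(struct my_xdp_tx_bulk *bq, struct xdp_buff *xdp)
{
	bq->bufs[bq->count++] = xdp;
	if (bq->count == MY_XDP_TX_BULK)
		my_xdp_tx_flush(bq);
}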
2025-06-16libeth: support native XDP and register memory modelAlexander Lobakin
Expand libeth's Page Pool functionality by adding native XDP support. This means picking the appropriate headroom and DMA direction. Also, register all the created &page_pools as XDP memory models. A driver can then call xdp_rxq_info_attach_page_pool() when registering its RxQ info. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
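A sketch of what the driver-facing result looks like with the generic Page Pool and XDP RxQ-info APIs; struct my_rxq and the sizing values are illustrative:

static int my_rxq_init_pool(struct my_rxq *rxq, struct net_device *netdev,
			    struct device *dev, unsigned int napi_id)
{
	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= 2048,			/* illustrative */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_BIDIRECTIONAL,	/* XDP_TX needs both */
		.offset		= XDP_PACKET_HEADROOM,
		.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
	};
	int err;

	rxq->pool = page_pool_create(&pp);
	if (IS_ERR(rxq->pool))
		return PTR_ERR(rxq->pool);

	err = xdp_rxq_info_reg(&rxq->xdp_rxq, netdev, rxq->idx, napi_id);
	if (err)
		return err;

	xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, rxq->pool);
	return 0;
}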
2025-06-16libeth: convert to netmemAlexander Lobakin
Back when the libeth Rx core was initially written, devmem was a draft and netmem_ref didn't exist in the mainline. Now that it's here, make libeth MP-agnostic before introducing any new code or any new library users. When it's known that the created PP/FQ is for header buffers, use faster "unsafe" underscored netmem <--> virt accessors as netmem_is_net_iov() is always false in that case, but consumes some cycles (bit test + true branch). Reviewed-by: Mina Almasry <almasrymina@google.com> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2025-06-16libeth, libie: clean symbol exports up a littleAlexander Lobakin
Change EXPORT_SYMBOL_NS_GPL(x, "LIBETH") to EXPORT_SYMBOL_GPL(x) + DEFAULT_SYMBOL_NAMESPACE "LIBETH" to make the code more compact. Also, explicitly include <linux/export.h> to satisfy new requirements from scripts/misc-check. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
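The resulting pattern looks roughly like this (the exported function is a placeholder):

/* top of the .c file */
#define DEFAULT_SYMBOL_NAMESPACE	"LIBETH"

#include <linux/export.h>

void libeth_example(void)	/* hypothetical symbol */
{
}
EXPORT_SYMBOL_GPL(libeth_example);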
2025-06-14net: stmmac: qcom-ethqos: add ethqos_pcs_set_inband()Russell King (Oracle)
Add ethqos_pcs_set_inband() to improve readability, and to allow future changes when phylink PCS support is properly merged. Reviewed-by: Andrew Halaney <ahalaney@redhat.com> Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sa8775p-ride-r3 Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/E1uPkbO-004EyA-EU@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-14net: sysfs: Implement is_visible for phys_(port_id, port_name, switch_id)Yajun Deng
phys_port_id_show, phys_port_name_show and phys_switch_id_show would return -EOPNOTSUPP if the netdev didn't implement the corresponding method. There is no point in creating these files if they are unsupported. Put these attributes in netdev_phys_group and implement the is_visible() method, making phys_(port_id, port_name, switch_id) invisible if the netdev doesn't implement the corresponding method. Signed-off-by: Yajun Deng <yajun.deng@linux.dev> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250612142707.4644-1-yajun.deng@linux.dev Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-14net: ti: icssg-prueth: Read firmware-names from device treeMD Danish Anwar
Refactor the way firmware names are handled for the ICSSG PRUETH driver. Instead of using hardcoded firmware name arrays for different modes (EMAC, SWITCH, HSR), the driver now reads the firmware names from the device tree property "firmware-name". Only the EMAC firmware names are specified in the device tree property. The firmware names for all other supported modes are generated dynamically based on the EMAC firmware names by replacing substrings (e.g., "eth" with "sw" or "hsr") as appropriate. Example: below are the firmware images currently used for the PRU0 core: EMAC: ti-pruss/am65x-sr2-pru0-prueth-fw.elf SW : ti-pruss/am65x-sr2-pru0-prusw-fw.elf HSR : ti-pruss/am65x-sr2-pru0-pruhsr-fw.elf All three firmware names are the same except for the operating mode. In general, for the PRU0 core, the firmware name is ti-pruss/am65x-sr2-pru0-pru<mode>-fw.elf. Since the EMAC firmware names are defined in DT, the driver will read those directly and, for the other modes, swap the mode name, i.e. eth -> sw or eth -> hsr. This preserves backwards compatibility, as the ICSSG driver is supported only on AM65x and AM64x, and both of these have the "firmware-name" property populated in their device tree. Signed-off-by: MD Danish Anwar <danishanwar@ti.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250613064547.44394-1-danishanwar@ti.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
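A sketch of the name derivation described above; everything except of_property_read_string_index() and the standard string helpers is a hypothetical placeholder, not the driver's actual code:

static int my_derive_fw_names(struct device_node *np, int idx,
			      char *sw_name, char *hsr_name, size_t len)
{
	const char *eth_name, *p;
	int ret;

	ret = of_property_read_string_index(np, "firmware-name", idx,
					    &eth_name);
	if (ret)
		return ret;

	p = strstr(eth_name, "eth");
	if (!p)
		return -EINVAL;

	/* e.g. am65x-sr2-pru0-prueth-fw.elf -> am65x-sr2-pru0-prusw-fw.elf */
	snprintf(sw_name, len, "%.*ssw%s",
		 (int)(p - eth_name), eth_name, p + strlen("eth"));
	snprintf(hsr_name, len, "%.*shsr%s",
		 (int)(p - eth_name), eth_name, p + strlen("eth"));

	return 0;
}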
2025-06-14net: amt: convert to use secs_to_jiffiesYuesong Li
Since secs_to_jiffies() has been introduced (commit b35108a51cf7), we can use it to avoid scaling the time to msecs first. Signed-off-by: Yuesong Li <liyuesong@vivo.com> Reviewed-by: Joe Damato <joe@dama.to> Reviewed-by: Taehee Yoo <ap420073@gmail.com> Link: https://patch.msgid.link/20250613102014.3070898-1-liyuesong@vivo.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
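The conversion in general form; the workqueue, work item and 10-second delay below are illustrative, not taken from the amt driver:

static void my_rearm(struct workqueue_struct *wq, struct delayed_work *dw)
{
	/* was: mod_delayed_work(wq, dw, msecs_to_jiffies(10 * 1000)); */
	mod_delayed_work(wq, dw, secs_to_jiffies(10));
}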
2025-06-13Merge branch 'net-stmmac-rk-much-needed-cleanups'Jakub Kicinski
Russell King says: ==================== net: stmmac: rk: much needed cleanups This series starts attacking the reams of fairly identical duplicated code in dwmac-rk. Every new SoC that comes along seems to need more code added to this file because e.g. the way the clock is controlled is different in every SoC. The first thing to realise is that the driver only supports RMII and RGMII interface modes. So, the first patch adds a .get_interfaces() implementation which reports this for phylink's usage, thus ensuring that we error out during initialisation should something that isn't supported be specified. Note that there is one case where there is a pair of interfaces: one supports only RMII, the other supports RMII and RGMII, but we report both anyway - something that the existing driver allows. A future patch may attempt to fix this. Rather than writing code, let's realise that there are two major implementations here: 1. a struct clk that needs to be set; 2. writing a register with settings for RGMII and RMII speeds. Provide implementations for these. Also realise that, as a result of doing this, we can kill off the .set_rgmii_speed() and .set_rmii_speed() methods by combining them together - indeed, this is what later SoCs already do by pointing both these methods at the same function. Overall, this patch series shrinks the file LOC by almost 8.7%, removing 175 lines from over 2000. Apart from the error reporting changing and restricting interface modes to those that the driver supports, no functional change is anticipated with this series. However, I have no hardware to test this. ==================== Link: https://patch.msgid.link/aEr1BhIoC6-UM2XV@shell.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: remove obsolete .set_*_speed() methodsRussell King (Oracle)
Now that no SoC implements the .set_*_speed() methods, we can get rid of these methods and the now unused code in rk_set_clk_tx_rate(). Arrange for the function to return an error when the .set_speed() method is not implemented. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk3O-004CFx-Ir@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: convert px30_set_rmii_speed() to .set_speed()Russell King (Oracle)
Convert px30_set_rmii_speed() to use the common .set_speed() method, which eliminates another user of the older .set_*_speed() methods. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk3J-004CFr-FE@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: simplify px30_set_rmii_speed()Russell King (Oracle)
px30_set_rmii_speed() doesn't need to be as verbose as it is - it merely needs the values for the register and clock rate which depend on the speed, and then call the appropriate functions. Rewrite the function to make it so. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk3E-004CFl-BZ@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: combine .set_*_speed() methodsRussell King (Oracle)
As a result of the previous patches, many of the .set_rgmii_speed() and .set_rmii_speed() implementations are identical apart from the interface mode. Add a new .set_speed() function which takes the interface mode in addition to the speed, and use it to combine the separate implementations, calling the common rk_set_reg_speed() function. Also convert rk_set_clk_mac_speed() to be called by this new method pointer, rather than having these implementations called from both .set_*_speed() methods. Remove all the error messages from the .set_speed() methods, as these return an error code which is propagated up to stmmac_mac_link_up() which will print the error. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk39-004CFf-7a@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: combine clk_mac_speed rate setting functionsRussell King (Oracle)
rk3568_set_gmac_speed() and rv1126_set_clk_mac_speed() are now identical. Combine these so we have a single copy of this code. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk34-004CFZ-3y@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: combine rv1126 set_*_speed() methodsRussell King (Oracle)
Just like rk3568, there is no need to have separate RGMII and RMII methods to set clk_mac_speed() as rgmii_clock() can be used to return the clock rate for both RGMII and RMII interface modes. Combine these two methods. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk2z-004CFT-0e@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: add struct for programming register based speedsRussell King (Oracle)
There is a common pattern in the driver where many SoCs need to write a single register with a value dependent on the interface mode and speed. Rather than having a lot of repeated code, add some common functions and a struct to contain the values to be written to a register to select the RGMII and RMII speeds. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk2t-004CFN-Td@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
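A sketch of the table-driven approach described above; the struct fields and function names are illustrative, not the driver's actual identifiers:

struct my_reg_speed_data {
	unsigned int rgmii_10;
	unsigned int rgmii_100;
	unsigned int rgmii_1000;
	unsigned int rmii_10;
	unsigned int rmii_100;
};

static int my_set_reg_speed(struct regmap *grf, unsigned int reg,
			    const struct my_reg_speed_data *d,
			    phy_interface_t interface, int speed)
{
	unsigned int val;

	if (phy_interface_mode_is_rgmii(interface)) {
		val = speed == SPEED_1000 ? d->rgmii_1000 :
		      speed == SPEED_100 ? d->rgmii_100 : d->rgmii_10;
	} else {	/* RMII */
		if (speed == SPEED_1000)
			return -EINVAL;
		val = speed == SPEED_100 ? d->rmii_100 : d->rmii_10;
	}

	return regmap_write(grf, reg, val);
}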
2025-06-13net: stmmac: rk: simplify set_*_speed()Russell King (Oracle)
Rather than having lots of regmap_write()s to the same register but with different values depending on the speed, reorganise the functions to use a local variable for the value, and then have one regmap_write() call to write it to the register. This reduces the amount of code and is a step towards further reducing the code size. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk2o-004CFH-Q4@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: rk: add get_interfaces() implementationRussell King (Oracle)
RK platforms support RGMII and/or RMII depending on the SoC. Detect whether support for a SoC exists by whether the interface specific set_to functions have been populated, and set the appropriate bits in phylink's bitmap of interfaces. This assumes all dwmac interfaces on a SoC have identical support, but it should be noted that this is not true for RK3528 which only supports RGMII on GMAC1. However, the existing code structure permits RGMII to be configured on GMAC0 without complaint, so preserve this behaviour even though it is incorrect to avoid functional change. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPk2j-004CF6-Mf@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13Merge branch 'dpll-add-all-inputs-phase-offset-monitor'Jakub Kicinski
Arkadiusz Kubalewski says: ==================== dpll: add all inputs phase offset monitor Add dpll device level feature: phase offset monitor. Phase offset measurement is typically performed against the current active source. However, some DPLL (Digital Phase-Locked Loop) devices may offer the capability to monitor phase offsets across all available inputs. The attribute and current feature state shall be included in the response message of the ``DPLL_CMD_DEVICE_GET`` command for supported DPLL devices. In such cases, users can also control the feature using the ``DPLL_CMD_DEVICE_SET`` command by setting the ``enum dpll_feature_state`` values for the attribute. Once enabled the phase offset measurements for the input shall be returned in the ``DPLL_A_PIN_PHASE_OFFSET`` attribute. Implement feature support in ice driver for dpll-enabled devices. Verify capability: $ ./tools/net/ynl/pyynl/cli.py \ --spec Documentation/netlink/specs/dpll.yaml \ --dump device-get [{'clock-id': 4658613174691613800, 'id': 0, 'lock-status': 'locked-ho-acq', 'mode': 'automatic', 'mode-supported': ['automatic'], 'module-name': 'ice', 'type': 'eec'}, {'clock-id': 4658613174691613800, 'id': 1, 'lock-status': 'locked-ho-acq', 'mode': 'automatic', 'mode-supported': ['automatic'], 'module-name': 'ice', 'phase-offset-monitor': 'disable', 'type': 'pps'}] Enable the feature: $ ./tools/net/ynl/pyynl/cli.py \ --spec Documentation/netlink/specs/dpll.yaml \ --do device-set --json '{"id":1, "phase-offset-monitor":"enable"}' Verify feature is enabled: $ ./tools/net/ynl/pyynl/cli.py \ --spec Documentation/netlink/specs/dpll.yaml \ --dump device-get [ [...] {'capabilities': {'all-inputs-phase-offset-monitor'}, 'clock-id': 4658613174691613800, 'id': 1, [...] 'phase-offset-monitor': 'enable', [...]] v6: - rebase. ==================== Link: https://patch.msgid.link/20250612152835.1703397-1-arkadiusz.kubalewski@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13ice: add phase offset monitor for all PPS dpll inputsArkadiusz Kubalewski
Implement a new admin command and helper function to handle and obtain CGU measurements for input pins. Add new callback operations to control the dpll device-level feature "phase offset monitor," allowing it to be enabled or disabled. If the feature is enabled, provide users with measured phase offsets and notifications. Initialize PPS DPLL with new callback operations if the feature is supported by the firmware. Reviewed-by: Milena Olech <milena.olech@intel.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Acked-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Link: https://patch.msgid.link/20250612152835.1703397-4-arkadiusz.kubalewski@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13dpll: add phase_offset_monitor_get/set callback opsArkadiusz Kubalewski
Add new callback operations for a dpll device: - phase_offset_monitor_get(..) - to obtain current state of phase offset monitor feature from dpll device, - phase_offset_monitor_set(..) - to allow feature configuration. Obtain the feature state value using the get callback and provide it to the user if the device driver implements callbacks. Execute the set callback upon user requests. Reviewed-by: Milena Olech <milena.olech@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Acked-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Link: https://patch.msgid.link/20250612152835.1703397-3-arkadiusz.kubalewski@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13dpll: add phase-offset-monitor feature to netlink specArkadiusz Kubalewski
Add enum dpll_feature_state for control over features. Add a dpll device-level attribute, DPLL_A_PHASE_OFFSET_MONITOR, to allow control over the phase offset monitor feature. The attribute is present and shall return the current state of the feature (enum dpll_feature_state) if the device driver provides such capability; otherwise the attribute shall not be present. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Milena Olech <milena.olech@intel.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Acked-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Link: https://patch.msgid.link/20250612152835.1703397-2-arkadiusz.kubalewski@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: stmmac: improve .set_clk_tx_rate() method error messageRussell King (Oracle)
Improve the .set_clk_tx_rate() method error message to include the PHY interface mode along with the speed, which will be helpful to the RK implementations. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://patch.msgid.link/E1uPjjx-0049r5-NN@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: improve rgmii_clock() documentationRussell King (Oracle)
Improve the rgmii_clock() documentation to indicate that it can also be used for MII, GMII and RMII modes as well as RGMII as the required clock rates are identical, but note that it won't error out for 1G speeds for MII and RMII. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/E1uPjjk-0049pI-MD@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: pfcp: fix typo in message_priority field nameRubenKelevra
The field is spelled "message_priprity" in the big-endian bit-field definition. Nothing in-tree currently references the member, so the typo does not break kernel builds, but it is clearly incorrect. Signed-off-by: RubenKelevra <rubenkelevra@gmail.com> Link: https://patch.msgid.link/20250612145012.185321-1-rubenkelevra@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13Merge branch 'dp83tg720-reduce-link-recovery'Jakub Kicinski
Oleksij Rempel says: ==================== dp83tg720: Reduce link recovery This patch series improves the link recovery behavior of the TI DP83TG720 PHY driver. Previously, we introduced randomized reset delay logic to avoid reset collisions in multi-PHY setups. While this approach was functional, it had notable drawbacks: unpredictable behavior, longer and more variable link recovery times, and overall higher complexity in link handling. With this new approach, we replace the randomized delay with deterministic, role-specific delays in the PHY reset logic. This enables us to: - Remove the redundant empirical 600 ms delay in read_status() - Drop the random polling interval logic - Introduce a clean, adaptive polling strategy with consistent behavior and improved responsiveness As a result, the PHY is now able to recover link reliably in under 1000 ms. ==================== Link: https://patch.msgid.link/20250612104157.2262058-1-o.rempel@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: dp83tg720: switch to adaptive polling and remove random delaysDavid Jander
Now that the PHY reset logic includes a role-specific asymmetric delay to avoid synchronized reset deadlocks, the previously used randomized polling intervals are no longer necessary. This patch removes the get_random_u32_below()-based logic and introduces an adaptive polling strategy: - Fast polling for a short time after link-down - Slow polling if the link remains down - Slower polling when the link is up This balances CPU usage and responsiveness while avoiding reset collisions. Additionally, the driver still relies on polling for all link state changes, as interrupt support is not implemented, and link-up events are not reliably signaled by the PHY. The polling parameters are now documented in the updated top-of-file comment. Co-developed-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: David Jander <david@protonic.nl> Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20250612104157.2262058-4-o.rempel@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: dp83tg720: remove redundant 600ms post-reset delayDavid Jander
Now that dp83tg720_soft_reset() introduces role-specific delays to avoid reset synchronization deadlocks, the fixed 600ms post-reset delay in dp83tg720_read_status() is no longer needed. The new logic provides both the required MDC timing and link stabilization, making the old empirical delay redundant and unnecessarily long. Co-developed-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: David Jander <david@protonic.nl> Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20250612104157.2262058-3-o.rempel@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: dp83tg720: implement soft reset with asymmetric delayDavid Jander
Add a .soft_reset callback for the DP83TG720 PHY that issues a hardware reset followed by an asymmetric post-reset delay. The delay differs based on the PHY's master/slave role to avoid synchronized reset deadlocks, which are known to occur when both link partners use identical reset intervals. The delay includes: - a fixed 1ms wait to satisfy MDC access timing per datasheet, and - an empirically chosen extra delay (97ms for master, 149ms for slave). Co-developed-by: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: David Jander <david@protonic.nl> Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20250612104157.2262058-2-o.rempel@pengutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: arp: use kfree_skb_reason() in arp_rcv()Qiu Yutan
Replace kfree_skb() with kfree_skb_reason() in arp_rcv(). Signed-off-by: Qiu Yutan <qiu.yutan@zte.com.cn> Signed-off-by: Jiang Kun <jiang.kun2@zte.com.cn> Link: https://patch.msgid.link/20250612110259698Q2KNNOPQhnIApRskKN3Hi@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
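The general shape of such a conversion (the specific drop reasons chosen by the actual patch may differ):

static void my_drop(struct sk_buff *skb)
{
	/* was: kfree_skb(skb); */
	kfree_skb_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
}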
2025-06-13Merge branch 'net-phy-improve-mdio-boardinfo-handling'Jakub Kicinski
Heiner Kallweit says: ==================== net: phy: improve mdio-boardinfo handling This series includes smaller improvements to mdio-boardinfo handling. ==================== Link: https://patch.msgid.link/6ae7bda0-c093-468a-8ac0-50a2afa73c45@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: directly copy struct mdio_board_info in mdiobus_register_board_infoHeiner Kallweit
Using a direct assignment instead of memcpy reduces the text segment size from 0x273 bytes to 0x19b bytes in my case. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/af371f2a-42f3-4d94-80b9-3420380a3f6f@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
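The change in general form: a plain struct assignment lets the compiler pick the cheapest copy and keeps full type checking (the wrapper function is illustrative):

static void my_copy_bi(struct mdio_board_info *dst,
		       const struct mdio_board_info *src)
{
	/* was: memcpy(dst, src, sizeof(*src)); */
	*dst = *src;
}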
2025-06-13net: phy: improve mdio-boardinfo.hHeiner Kallweit
There's no need to include phy.h and mutex.h in mdio-boardinfo.h. However mdio-boardinfo.c included phy.h indirectly this way so far, include it explicitly instead. Whilst at it, sort the included headers properly. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/86b7a1d6-9f9c-4d22-b3d8-5abdef0bb39a@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: move definition of struct mdio_board_entry to mdio-boardinfo.cHeiner Kallweit
Struct mdio_board_entry isn't used outside mdio-boardinfo.c, so remove the definition from the header file. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/0afe52d0-6fe6-434a-9881-3979661ff7b0@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13net: phy: simplify mdiobus_setup_mdiodev_from_board_infoHeiner Kallweit
- Move the declaration of variable bi into list_for_each_entry_safe(). - The return value of cb() effectively isn't used; this allows simplifying the code. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/f6bbe242-b43d-4c2b-8c51-2cb2cefbaf59@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-06-13Merge branch 'ionic-cleanups' into mainDavid S. Miller
Shannon Nelson says: ==================== ionic: three little changes These are three little changes for the code from inspection and testing. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2025-06-13ionic: cancel delayed work earlier in removeShannon Nelson
Cancel any entries on the delayed work queue before starting to tear down the lif to be sure there is no race with any other events. Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Joe Damato <joe@dama.to> Signed-off-by: David S. Miller <davem@davemloft.net>