path: root/drivers
Age  Commit message  Author
2022-12-08  net/mlx5: Refactor and expand rep vport stat group  (Or Har-Toov)
Expand representor vport stat group to support all counters from the vport stat group, to count all the traffic passing through the vport. Fix current implementation where fill_stats and update_stats use different structs. Signed-off-by: Or Har-Toov <ohartoov@nvidia.com> Reviewed-by: Maor Gottlieb <maorg@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5e: multipath, support routes with more than 2 nexthops  (Maor Dickman)
Today multipath offload is only supported when the number of nexthops is 2, which blocks its use on systems with 2 NICs. This patch solves it by enabling multipath offload per NIC if 2 nexthops of the route are its uplinks. Signed-off-by: Maor Dickman <maord@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5e: TC, add support for meter mtu offload  (Oz Shlomo)
Initialize the meter object with the TC police mtu parameter. Use the hardware range destination to compare the pkt len to the mtu setting. Assign the range destination hit/miss ft to the police conform/exceed attributes. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5e: meter, add mtu post meter tables  (Oz Shlomo)
TC police action may configure the maximum packet size to be handled by the policer, in addition to byte/packet rate. MTU check is realized in hardware using the range destination, specifying a hit ft, if packet len is in the range, or miss ft otherwise. Instantiate mtu green/red flow tables with a single match-all rule. Add the green/red actions to the hit/miss table accordingly. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5e: meter, refactor to allow multiple post meter tables  (Oz Shlomo)
TC police action may configure the maximum packet size to be handled by the policer, in addition to byte/packet rate. Currently the post meter table steers the packet according to the meter aso output. Refactor the code to allow both metering and range post actions as a pre-step for adding police mtu offload support. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Add support for range match action  (Yevgeny Kliteynik)
Add support for matching on range. The supported type of range is L2 frame size. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Add function that tells if STE miss addr has been initialized  (Yevgeny Kliteynik)
Up until now miss address in all the STEs was used to connect miss lists and to link the last STE in the list to end anchor. Match range STE will require special handling because its miss address is part of the 'action'. That is, range action has hit and miss addresses. Since the range action is always the last action, need to make sure that its miss address isn't overwritten by the end anchor. Adding new function mlx5dr_ste_is_miss_addr_set() to answer the question whether the STE's miss address has already been set as part of STE initialization. Use a callback that always returns false right now. Once match range is added, a different callback will be used for that STE type. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Some refactoring of miss address handling  (Yevgeny Kliteynik)
In preparation for MATCH RANGE STE support, create a function to set the miss address of an STE. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Erez Shitrit <erezsh@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Manage definers with refcounts  (Yevgeny Kliteynik)
In many cases different actions will ask for the same definer format. Instead of allocating new definer general object and running out of definers, have an xarray of allocated definers and keep track of their usage with refcounts: allocate a new definer only when there isn't one with the same format already created, and destroy definer only when its refcount runs down to zero. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
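As a rough illustration of the reuse scheme described above, the sketch below shows a lookup-or-create cache with reference counting in plain C. The names (definer_get, definer_put, the fixed-size cache array) are invented for the example; the actual driver keys an xarray by definer format and uses kernel refcounts.

#include <stdlib.h>
#include <string.h>

#define FORMAT_LEN   16
#define MAX_DEFINERS 8

struct definer {
    unsigned char format[FORMAT_LEN]; /* key: the requested definer format */
    int refcount;                     /* number of actions sharing this definer */
    int slot;                         /* stands in for the FW object handle */
};

static struct definer *cache[MAX_DEFINERS];

/* Return an existing definer with the same format, or create a new one. */
static struct definer *definer_get(const unsigned char *format)
{
    int i, empty = -1;

    for (i = 0; i < MAX_DEFINERS; i++) {
        if (!cache[i]) {
            if (empty < 0)
                empty = i;
        } else if (!memcmp(cache[i]->format, format, FORMAT_LEN)) {
            cache[i]->refcount++;         /* reuse: just take a reference */
            return cache[i];
        }
    }
    if (empty < 0)
        return NULL;                      /* out of definers */

    cache[empty] = calloc(1, sizeof(*cache[empty]));
    if (!cache[empty])
        return NULL;
    memcpy(cache[empty]->format, format, FORMAT_LEN);
    cache[empty]->refcount = 1;           /* first user */
    cache[empty]->slot = empty;           /* pretend FW object creation */
    return cache[empty];
}

/* Drop a reference; destroy the definer only when nobody uses it anymore. */
static void definer_put(struct definer *d)
{
    if (d && --d->refcount == 0) {
        cache[d->slot] = NULL;
        free(d);
    }
}

int main(void)
{
    unsigned char fmt[FORMAT_LEN] = { 1, 2, 3 };
    struct definer *a = definer_get(fmt);
    struct definer *b = definer_get(fmt); /* same format => same object, refcount 2 */

    definer_put(a);
    definer_put(b);                       /* last put destroys the object */
    return 0;
}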
2022-12-08  net/mlx5: DR, Handle FT action in a separate function  (Yevgeny Kliteynik)
As preparation for range action support, moving the handling of final ICM address for flow table action to a separate function. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Rework is_fw_table function  (Yevgeny Kliteynik)
This patch handles the following two changes w.r.t. the is_fw_table function: 1. When SW steering is asked to create/destroy a FW table, we allow for creation/destruction of only termination tables. Rename mlx5_dr_is_fw_table both to comply with the static function naming and to reflect that we're actually checking for a FW termination table. 2. When the action 'go to flow table' is created, the destination flow table can be any FW table, not only a termination table. Add a function to check if the dest table is a FW table. This function will also be used by the later creation of the range match action, so it is placed in the header file. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: DR, Add functions to create/destroy MATCH_DEFINER general object  (Yevgeny Kliteynik)
SW steering is able to match only on the exact values of the packet fields, as requested by the user: the user provides a mask for the fields that are of interest, and the exact values to be matched on when the traffic is handled. Match Definer is a general FW object that defines which fields in the packet will be referenced by the mask and tag of each STE. Match definer ID is part of STE fields, and it defines how the HW needs to interpret the STE's mask/tag values. Until now SW steering used the definers that were managed by FW and implemented the STE layout as described by the HW spec. Now that we're adding a new type of STE, SW steering needs to define for the HW how it should interpret this new STE's layout. This is done with a programmable match definer. The programmable definer allows selecting which fields will be included in the definer, and their layout: it has up to 9 DW selectors and 8 Byte selectors. Each selector indicates a DW/Byte worth of fields out of the table that is defined by the HW spec by referencing the offset of the required DW/Byte. This patch adds a dr_cmd function to create and destroy the MATCH_DEFINER general object. Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
2022-12-08  net/mlx5: fs, add match on ranges API  (Yevgeny Kliteynik)
Range is a new flow destination type which allows matching on a range of values instead of matching on a specific value. Range flow destination has the following fields: - hit_ft: flow table to forward the traffic in case of hit - miss_ft: flow table to forward the traffic in case of miss - field: which packet characteristic to match on - min: minimal value for the selected field - max: maximal value for the selected field Note: - In order to match, the value in the packet should meet the following criteria: min <= value < max - Currently, the only supported field type is L2 packet length Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
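To make the shape of the new destination concrete, here is a small, purely illustrative C rendering of such a range destination and its half-open check. The hit_ft/miss_ft/field/min/max fields come from the description above, but the struct and helper names are placeholders, not the mlx5 fs API.

#include <stdbool.h>
#include <stdint.h>

struct ft;                           /* opaque stand-in for a flow table */

enum range_match_field {
    RANGE_FIELD_L2_PKT_LEN,          /* only supported field type today */
};

struct range_dest {
    struct ft *hit_ft;               /* forward here when the value is in range */
    struct ft *miss_ft;              /* forward here when it is not */
    enum range_match_field field;
    uint32_t min;                    /* inclusive lower bound */
    uint32_t max;                    /* exclusive upper bound */
};

/* A packet "hits" when min <= value < max. */
static inline struct ft *range_dest_select(const struct range_dest *d,
                                           uint32_t value)
{
    bool hit = value >= d->min && value < d->max;

    return hit ? d->hit_ft : d->miss_ft;
}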
2022-12-08  Merge patch series "RISC-V interrupt controller select cleanup"  (Palmer Dabbelt)
Conor Dooley <conor@kernel.org> says: From: Conor Dooley <conor.dooley@microchip.com> Submitted a patch yesterday defaulting the SiFive PLIC driver to enabled [0], and in the ensuing conversation Marc suggested just doing a select at the arch level and dropping the user selectability completely. * b4-shazam-merge: RISC-V: stop selecting SIFIVE_PLIC at the SoC level irqchip/riscv-intc: remove user selectability of RISCV_INTC irqchip/sifive-plic: remove user selectability of SIFIVE_PLIC Link: https://lore.kernel.org/r/20221118104300.85016-1-conor@kernel.org Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/all/87zgceszp8.wl-maz@kernel.org/ Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-12-08  irqchip/riscv-intc: remove user selectability of RISCV_INTC  (Conor Dooley)
Since commit e71ee06e3ca3 ("RISC-V: Force select RISCV_INTC for CONFIG_RISCV") the driver has been enabled at the arch level - and is mandatory anyway. There's no point exposing this as a choice to users, so stop bothering. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221118104300.85016-3-conor@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-12-08  irqchip/sifive-plic: remove user selectability of SIFIVE_PLIC  (Conor Dooley)
The SiFive PLIC driver is used by all current implementations, including those that do not have a SiFive PLIC. The current driver supports more than just SiFive PLICs at present and, where possible, future PLIC implementations will also use this driver. As every supported RISC-V SoC selects the driver directly in Kconfig.socs there's no point in exposing this kconfig option to users. The Kconfig help text, in its current form, is misleading. There's no point doing anything about that though, as it will no longer be user selectable. Remove it. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221118104300.85016-2-conor@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-12-08  Merge tag 'block-6.1-2022-12-08' of git://git.kernel.dk/linux  (Linus Torvalds)
Pull block fix from Jens Axboe: "A small fix for initializing the NVMe quirks before initializing the subsystem" * tag 'block-6.1-2022-12-08' of git://git.kernel.dk/linux: nvme initialize core quirks before calling nvme_init_subsystem
2022-12-08  Merge tag 'net-6.1-rc9' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from Jakub Kicinski: "Including fixes from bluetooth, can and netfilter. Current release - new code bugs: - bonding: ipv6: correct address used in Neighbour Advertisement parsing (src vs dst typo) - fec: properly scope IRQ coalesce setup during link up to supported chips only Previous releases - regressions: - Bluetooth fixes for fake CSR clones (knockoffs): - re-add ERR_DATA_REPORTING quirk - fix crash when device is replugged - Bluetooth: - silence a user-triggerable dmesg error message - L2CAP: fix u8 overflow, oob access - correct vendor codec definition - fix support for Read Local Supported Codecs V2 - ti: am65-cpsw: fix RGMII configuration at SPEED_10 - mana: fix race on per-CQ variable NAPI work_done Previous releases - always broken: - af_unix: diag: fetch user_ns from in_skb in unix_diag_get_exact(), avoid null-deref - af_can: fix NULL pointer dereference in can_rcv_filter - can: slcan: fix UAF with a freed work - can: can327: flush TX_work on ldisc .close() - macsec: add missing attribute validation for offload - ipv6: avoid use-after-free in ip6_fragment() - nft_set_pipapo: actually validate intervals in fields after the first one - mvneta: prevent oob access in mvneta_config_rss() - ipv4: fix incorrect route flushing when table ID 0 is used, or when source address is deleted - phy: mxl-gpy: add workaround for IRQ bug on GPY215B and GPY215C" * tag 'net-6.1-rc9' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (77 commits) net: dsa: sja1105: avoid out of bounds access in sja1105_init_l2_policing() s390/qeth: fix use-after-free in hsci macsec: add missing attribute validation for offload net: mvneta: Fix an out of bounds check net: thunderbolt: fix memory leak in tbnet_open() ipv6: avoid use-after-free in ip6_fragment() net: plip: don't call kfree_skb/dev_kfree_skb() under spin_lock_irq() net: phy: mxl-gpy: add MDINT workaround net: dsa: mv88e6xxx: accept phy-mode = "internal" for internal PHY ports xen/netback: don't call kfree_skb() under spin_lock_irqsave() dpaa2-switch: Fix memory leak in dpaa2_switch_acl_entry_add() and dpaa2_switch_acl_entry_remove() ethernet: aeroflex: fix potential skb leak in greth_init_rings() tipc: call tipc_lxc_xmit without holding node_read_lock can: esd_usb: Allow REC and TEC to return to zero can: can327: flush TX_work on ldisc .close() can: slcan: fix freed work crash can: af_can: fix NULL pointer dereference in can_rcv_filter net: dsa: sja1105: fix memory leak in sja1105_setup_devlink_regions() ipv4: Fix incorrect route flushing when table ID 0 is used ipv4: Fix incorrect route flushing when source address is deleted ...
2022-12-08  net/mlx4: small optimization in mlx4_en_xmit()  (Eric Dumazet)
Test against MLX4_MAX_DESC_TXBBS only matters if the TX bounce buffer is going to be used. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Wei Wang <weiwan@google.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx4: MLX4_TX_BOUNCE_BUFFER_SIZE depends on MAX_SKB_FRAGS  (Eric Dumazet)
Google production kernel has increased MAX_SKB_FRAGS to 45 for BIG-TCP rollout. Unfortunately mlx4 TX bounce buffer is not big enough whenever an skb has up to 45 page fragments. This can happen often with TCP TX zero copy, as one frag usually holds 4096 bytes of payload (order-0 page). Tested: Kernel built with MAX_SKB_FRAGS=45 ip link set dev eth0 gso_max_size 185000 netperf -t TCP_SENDFILE I made sure that "ethtool -G eth0 tx 64" was properly working, ring->full_size being set to 15. Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Wei Wang <weiwan@google.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx4: rename two constants  (Eric Dumazet)
MAX_DESC_SIZE is really the size of the bounce buffer used when reaching the right side of the TX ring buffer. MAX_DESC_TXBBS gets an MLX4_ prefix. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  ice: reschedule ice_ptp_wait_for_offset_valid during reset  (Jacob Keller)
If the ice_ptp_wait_for_offset_valid function is scheduled to run while the driver is resetting, it will exit without completing calibration. The work function gets scheduled by ice_ptp_port_phy_restart, which will be called as part of the reset recovery process. It is possible for the first execution to occur before the driver has completely cleared its resetting flags. Ensure calibration completes by rescheduling the task until reset is fully completed. Reported-by: Siddaraju DH <siddaraju.dh@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: make Tx and Rx vernier offset calibration independent  (Siddaraju DH)
The Tx and Rx calibration and timestamp generation blocks are independent. However, the ice driver waits until both blocks are ready before configuring either block. This can result in delay of configuring one block because we have not yet received a packet in the other block. There is no reason to wait to finish programming Tx just because we haven't received a packet. Similarly there is no reason to wait to program Rx just because we haven't transmitted a packet. Instead of checking both offset status before programming either block, refactor the ice_phy_cfg_tx_offset_e822 and ice_phy_cfg_rx_offset_e822 functions so that they perform their own offset status checks. Additionally, make them also check the offset ready bit to determine if the offset values have already been programmed. Call the individual configure functions directly in ice_ptp_wait_for_offset_valid. The functions will now correctly check status, and program the offsets if ready. Once the offset is programmed, the functions will exit quickly after just checking the offset ready register. Remove the ice_phy_calc_vernier_e822 in ice_ptp_hw.c, as well as the offset valid check functions in ice_ptp.c entirely as they are no longer necessary. With this change, the Tx and Rx blocks will each be enabled as soon as possible without waiting for the other block to complete calibration. This can enable timestamps faster in setups which have a low rate of transmitted or received packets. In particular, it can stop a situation where one port never receives traffic, and thus never finishes calibration of the Tx block, resulting in continuous faults reported by the ptp4l daemon application. Signed-off-by: Siddaraju DH <siddaraju.dh@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: only check set bits in ice_ptp_flush_tx_tracker  (Jacob Keller)
The ice_ptp_flush_tx_tracker function is called to clear all outstanding Tx timestamp requests when the port is being brought down. This function iterates over the entire list, but this is unnecessary. We only need to check the bits which are actually set in the ready bitmap. Replace this logic with for_each_set_bit, and follow a similar flow as in ice_ptp_tx_tstamp_cleanup. Note that it is safe to call dev_kfree_skb_any on a NULL pointer as it will perform a no-op so we do not need to verify that the skb is actually NULL. The new implementation also avoids clearing (and thus reading!) the PHY timestamp unless the index is marked as having a valid timestamp in the timestamp status bitmap. This ensures that we properly clear the status registers as appropriate. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
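A userspace approximation of that change is sketched below: instead of scanning every slot, walk only the set bits of the pending bitmap. The driver uses the kernel's for_each_set_bit helper over its in-use bitmap; the 64-bit word, the names, and the printf stand-in for the real cleanup are all invented for the example.

#include <stdint.h>
#include <stdio.h>

static uint64_t in_use;   /* bit i set => slot i has an outstanding request */

static void flush_slot(unsigned int idx)
{
    /* stands in for: read/clear the PHY timestamp, free the skb, release idx */
    printf("flushing slot %u\n", idx);
    in_use &= ~(1ULL << idx);
}

static void flush_tracker(void)
{
    uint64_t pending = in_use;

    /* Visit only the set bits instead of iterating over all 64 slots. */
    while (pending) {
        unsigned int idx = (unsigned int)__builtin_ctzll(pending);

        flush_slot(idx);
        pending &= pending - 1;   /* clear the bit we just handled */
    }
}

int main(void)
{
    in_use = (1ULL << 3) | (1ULL << 17) | (1ULL << 40);
    flush_tracker();              /* touches slots 3, 17 and 40 only */
    return 0;
}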
2022-12-08  ice: handle flushing stale Tx timestamps in ice_ptp_tx_tstamp  (Jacob Keller)
In the event of a PTP clock time change due to .adjtime or .settime, the ice driver needs to update the cached copy of the PHC time and also discard any outstanding Tx timestamps. This is required because otherwise the wrong copy of the PHC time will be used when extending the Tx timestamp. This could result in reporting incorrect timestamps to the stack. The current approach taken to handle this is to call ice_ptp_flush_tx_tracker, which will discard any timestamps which are not yet complete. This is problematic for two reasons: 1) it could lead to a potential race condition where the wrong timestamp is associated with a future packet. This can occur with the following flow: 1. Thread A gets a request to transmit a timestamped packet, picks an index and transmits the packet 2. Thread B calls ice_ptp_flush_tx_tracker and sees the index in use, marking it as discarded. No timestamp read occurs because the status bit is not set, but the index is released for re-use 3. Thread A gets a new request to transmit another timestamped packet, picks the same (now unused) index and transmits that packet. 4. The PHY transmits the first packet and updates the timestamp slot and generates an interrupt. 5. The ice_ptp_tx_tstamp thread executes and sees the interrupt and a valid timestamp, but associates it with the new Tx SKB rather than the SKB whose packet the timestamp was actually captured for. This could result in the previous timestamp being assigned to a new packet, producing incorrect timestamps and leading to incorrect behavior in PTP applications. This is most likely to occur when the packet rate for Tx timestamp requests is very high. 2) on E822 hardware, we must avoid reading a timestamp index more than once each time its status bit is set and an interrupt is generated by hardware. We do have some extensive checks for the unread flag to ensure that only one of either the ice_ptp_flush_tx_tracker or ice_ptp_tx_tstamp threads reads the timestamp. However, even with this we can still have cases where we "flush" a timestamp that was actually completed in hardware. This can lead to cases where we don't read the timestamp index as appropriate. To fix both of these issues, we must avoid calling ice_ptp_flush_tx_tracker outside of the teardown path. Rather than using ice_ptp_flush_tx_tracker, introduce a new state bitmap, the stale bitmap. Start this as cleared when we begin a new timestamp request. When we're about to extend a timestamp and send it up to the stack, first check to see if that stale bit was set. If so, drop the timestamp without sending it to the stack. When we need to update the cached PHC timestamp out of band, just mark all currently outstanding timestamps as stale. This will ensure that once hardware completes the timestamp we'll ignore it correctly and avoid reporting bogus timestamps to userspace. With this change, we fix potential issues caused by calling ice_ptp_flush_tx_tracker during normal operation. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
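The following stand-alone C fragment sketches the stale-bitmap idea in miniature; the slot array and function names are invented, but the two paths mirror the description above: a clock change only marks outstanding requests stale, and a completed timestamp is dropped if its stale flag was set.

#include <stdbool.h>
#include <stdio.h>

#define NUM_SLOTS 8

struct ts_slot {
    bool in_use;   /* a timestamp request is outstanding for this index */
    bool stale;    /* PHC time changed since the request was armed */
};

static struct ts_slot slots[NUM_SLOTS];

/* Called from .adjtime/.settime: no flushing, just mark everything stale. */
static void mark_outstanding_stale(void)
{
    for (int i = 0; i < NUM_SLOTS; i++)
        if (slots[i].in_use)
            slots[i].stale = true;
}

/* Called when hardware completes the timestamp for index i. */
static void complete_timestamp(int i)
{
    if (slots[i].stale)
        printf("slot %d: stale, discarded\n", i);  /* wrong cached PHC time */
    else
        printf("slot %d: reported to the stack\n", i);

    slots[i].in_use = false;
    slots[i].stale = false;   /* index can now be reused safely */
}

int main(void)
{
    slots[2].in_use = true;
    mark_outstanding_stale();  /* e.g. .settime ran while slot 2 was armed */
    complete_timestamp(2);     /* completion arrives later and is dropped */
    return 0;
}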
2022-12-08  ice: cleanup allocations in ice_ptp_alloc_tx_tracker  (Jacob Keller)
The ice_ptp_alloc_tx_tracker function must allocate the timestamp array and the bitmap for tracking the currently in use indexes. A future change is going to add yet another allocation to this function. If these allocations fail we need to ensure that we properly cleanup and ensure that the pointers in the ice_ptp_tx structure are NULL. Simplify this logic by allocating to local variables first. If any allocation fails, then free everything and exit. Only update the ice_ptp_tx structure if all allocations succeed. This ensures that we have no side effects on the Tx structure unless all allocations have succeeded. Thus, no code will see an invalid pointer and we don't need to re-assign NULL on cleanup. This is safe because kernel "free" functions are designed to be NULL safe and perform no action if passed a NULL pointer. Thus its safe to simply always call kfree or bitmap_free even if one of those pointers was NULL. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
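The allocation pattern is simple enough to show in a few lines of plain C; the struct below is a stand-in for ice_ptp_tx rather than its real layout, and free(NULL) being a no-op is what makes the single error path safe.

#include <stdlib.h>

struct record { unsigned long long ts; };

struct tracker {
    struct record *tstamps;   /* per-index timestamp bookkeeping */
    unsigned char *in_use;    /* one flag per index: request outstanding */
    unsigned char *stale;     /* one flag per index: request is stale */
};

int tracker_alloc(struct tracker *tx, size_t len)
{
    struct record *tstamps = calloc(len, sizeof(*tstamps));
    unsigned char *in_use = calloc(len, 1);
    unsigned char *stale = calloc(len, 1);

    if (!tstamps || !in_use || !stale) {
        /* free(NULL) is a no-op, so one error path covers every case. */
        free(tstamps);
        free(in_use);
        free(stale);
        return -1;
    }

    /* Publish the pointers only after every allocation has succeeded. */
    tx->tstamps = tstamps;
    tx->in_use = in_use;
    tx->stale = stale;
    return 0;
}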
2022-12-08  ice: protect init and calibrating check in ice_ptp_request_ts  (Jacob Keller)
When requesting a new timestamp, the ice_ptp_request_ts function does not hold the Tx tracker lock while checking init and calibrating. This means that we might issue a new timestamp request just after the Tx timestamp tracker starts being deinitialized. This could lead to incorrect access of the timestamp structures. Correct this by moving the init and calibrating checks under the lock, and updating the flows which modify these fields to use the lock. Note that we do not need to hold the lock while checking for tx->init in ice_ptp_tx_tstamp. This is because the teardown function will use synchronize_irq after clearing the flag to ensure that the threaded interrupt completes. Either a) the tx->init flag will be cleared before the ice_ptp_tx_tstamp function starts, thus it will exit immediately, or b) the threaded interrupt will be executing and the synchronize_irq will wait until the threaded interrupt has completed at which point we know the init field has definitely been set and new interrupts will not execute the Tx timestamp thread function. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
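Sketched below with a pthread mutex standing in for the tracker lock (field and function names are invented, not the ice driver's): the init and calibrating checks and the index reservation now sit under one critical section, so teardown clearing init cannot race with a request in flight.

#include <pthread.h>
#include <stdbool.h>

struct tx_tracker {
    pthread_mutex_t lock;
    bool init;                 /* tracker is initialized and usable */
    bool calibrating;          /* port still calibrating; no timestamps yet */
    unsigned long long in_use; /* bit i set => index i reserved */
};

/* Reserve a timestamp index, or return -1 if requests are not allowed. */
int request_ts(struct tx_tracker *tx)
{
    int idx = -1;

    pthread_mutex_lock(&tx->lock);
    if (tx->init && !tx->calibrating) {
        for (int i = 0; i < 64; i++) {
            if (!(tx->in_use & (1ULL << i))) {
                tx->in_use |= 1ULL << i;
                idx = i;
                break;
            }
        }
    }
    pthread_mutex_unlock(&tx->lock);
    return idx;
}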
2022-12-08  net/mlx5e: TC, allow meter jump control action  (Oz Shlomo)
Separate the matchall police action validation from flower validation. Isolate the action validation logic in the police action parser. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-12-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, init post meter rules with branching attributes  (Oz Shlomo)
Instantiate the post meter actions with the platform initialized branching action attributes. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-11-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, rename post_meter actions  (Oz Shlomo)
Currently post meter supports only the pipe/drop conform-exceed policy. This assumption is reflected in several variable names. Rename the following variables as a pre-step for using the generalized branching action platform. Rename fwd_green_rule/drop_red_rule to green_rule/red_rule respectively. Repurpose red_counter/green_counter to act_counter/drop_counter to allow police conform-exceed configurations that do not drop. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-10-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, initialize branching action with target attr  (Oz Shlomo)
Identify the jump target action when iterating the action list. Initialize the jump target attr with the jumping attribute during the parsing phase. Initialize the jumping attr post action with the target during the offload phase. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-9-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, initialize branch flow attributes  (Oz Shlomo)
Initialize flow attribute for drop, accept, pipe and jump branching actions. Instantiate a flow attribute instance according to the specified branch control action. Store the branching attributes on the branching action flow attribute during the parsing phase. Then, during the offload phase, allocate the relevant mod header objects to the branching actions. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-8-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, set control params for branching actions  (Oz Shlomo)
Extend the act tc api to set the branch control params aligning with the police conform/exceed use case. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-7-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, validate action list per attribute  (Oz Shlomo)
Currently the entire flow action list is validated for offload limitations. For example, flows with both forward and drop actions are declared invalid due to hardware restrictions. However, a multi-table hardware model changes the limitations from a flow scope to a single flow attribute scope. Apply offload limitations to flow attributes instead of the entire flow. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-6-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, add terminating actions  (Oz Shlomo)
Extend act api to identify actions that terminate action list. Pre-step for terminating branching actions. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-5-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5e: TC, reuse flow attribute post parser processing  (Oz Shlomo)
After the tc action parsing phase the flow attribute is initialized with relevant eswitch offload objects such as tunnel, vlan, header modify and counter attributes. The post processing is done both for fdb and post-action attributes. Reuse the flow attribute post parsing logic by both fdb and post-action offloads. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-4-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  net/mlx5: fs, assert null dest pointer when dest_num is 0  (Oz Shlomo)
Currently create_flow_handle() assumes a null dest pointer when there are no destinations. This might not be the case as the caller may pass an allocated dest array while setting the dest_num parameter to 0. Assert null dest array for flow rules that have no destinations (e.g. drop rule). Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Reviewed-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-3-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
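The invariant being enforced is easy to state in code; the fragment below is generic C with a userspace assert rather than the fs_core implementation, which would likely use a kernel warning instead.

#include <assert.h>
#include <stddef.h>

struct dest;   /* opaque stand-in for a flow rule destination */

void create_flow_handle(const struct dest *dests, int dest_num)
{
    /* No destinations (e.g. a drop rule) must mean no dest array at all. */
    assert((dest_num == 0) == (dests == NULL));

    /* ... build the rule from dests[0 .. dest_num - 1] ... */
    (void)dests;
}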
2022-12-08  net/mlx5e: E-Switch, handle flow attribute with no destinations  (Oz Shlomo)
Rules with drop action are not required to have a destination. Currently the destination list is allocated with the maximum number of destinations and passed to the fs_core layer along with the actual number of destinations. Remove redundant passing of dest pointer when count of dest is 0. Signed-off-by: Oz Shlomo <ozsh@nvidia.com> Reviewed-by: Roi Dayan <roid@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Link: https://lore.kernel.org/r/20221203221337.29267-2-saeed@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-08  cxl/region: Fix memdev reuse check  (Fan Ni)
Due to a typo, the check of whether or not a memdev has already been used as a target for the region (above code piece) will always be skipped. Given a memdev with more than one HDM decoder, an interleaved region can be created that maps multiple HPAs to the same DPA. According to CXL spec 3.0 8.1.3.8.4, "Aliasing (mapping more than one Host Physical Address (HPA) to a single Device Physical Address) is forbidden." Fix this by using existing iterator for memdev reuse check. Cc: <stable@vger.kernel.org> Fixes: 384e624bb211 ("cxl/region: Attach endpoint decoders") Signed-off-by: Fan Ni <fan.ni@samsung.com> Link: https://lore.kernel.org/r/20221107212153.745993-1-fan.ni@samsung.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-08  Merge tag 'for-linus-2022120801' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid  (Linus Torvalds)
Pull HID fixes from Jiri Kosina: "A regression fix for handling Logitech HID++ devices and memory corruption fixes: - regression fix (revert) for catch-all handling of Logitech HID++ Bluetooth devices; there are devices that turn out not to work with this, and the root cause is yet to be properly understood. So we are dropping it for now, and it will be revisited for 6.2 or 6.3 (Benjamin Tissoires) - memory corruption fix in HID core (ZhangPeng) - memory corruption fix in hid-lg4ff (Anastasia Belova) - Kconfig fix for I2C_HID (Benjamin Tissoires) - a few device-id specific quirks that piggy-back on top of the important fixes above" * tag 'for-linus-2022120801' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid: Revert "HID: logitech-hidpp: Enable HID++ for all the Logitech Bluetooth devices" Revert "HID: logitech-hidpp: Remove special-casing of Bluetooth devices" HID: usbhid: Add ALWAYS_POLL quirk for some mice HID: core: fix shift-out-of-bounds in hid_report_raw_event HID: uclogic: Add HID_QUIRK_HIDINPUT_FORCE quirk HID: fix I2C_HID not selected when I2C_HID_OF_ELAN is HID: hid-lg4ff: Add check for empty lbuf HID: ite: Enable QUIRK_TOUCHPAD_ON_OFF_REPORT on Acer Aspire Switch V 10 HID: uclogic: Fix frame templates for big endian architectures
2022-12-08Revert "HID: logitech-hidpp: Enable HID++ for all the Logitech Bluetooth ↵Benjamin Tissoires
devices" This reverts commit 532223c8ac57605a10e46dc0ab23dcf01c9acb43. As reported in [0], hid-logitech-hidpp now binds on all bluetooth mice, but there are corner cases where hid-logitech-hidpp just gives up on the mouse. This leads the end user with a dead mouse. Given that we are at -rc8, we are definitively too late to find a proper fix. We already identified 2 issues less than 24 hours after the bug report. One in that ->match() was never designed to be used anywhere else than in hid-generic, and the other that hid-logitech-hidpp has corner cases where it gives up on devices it is not supposed to. So we have no choice but postpone this patch to the next kernel release. [0] https://lore.kernel.org/linux-input/CAJZ5v0g-_o4AqMgNwihCb0jrwrcJZfRrX=jv8aH54WNKO7QB8A@mail.gmail.com/ Reported-by: Rafael J . Wysocki <rjw@rjwysocki.net> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-12-08Revert "HID: logitech-hidpp: Remove special-casing of Bluetooth devices"Benjamin Tissoires
This reverts commit 8544c812e43ab7bdf40458411b83987b8cba924d. We need to revert commit 532223c8ac57 ("HID: logitech-hidpp: Enable HID++ for all the Logitech Bluetooth devices") because that commit might make hid-logitech-hidpp bind on mice that are not well enough supported by hid-logitech-hidpp, and the end result is that the probe of those mice is now returning -ENODEV, leaving the end user with a dead mouse. Given that commit 8544c812e43a ("HID: logitech-hidpp: Remove special-casing of Bluetooth devices") is a direct dependency of 532223c8ac57, revert it too. Note that this also adapts according to commit 908d325e1665 ("HID: logitech-hidpp: Detect hi-res scrolling support") to re-add support for the devices that were removed in that commit. I have an MX Master locally and tested this device with the revert, ensuring we still have high-res scrolling. Reported-by: Rafael J. Wysocki <rjw@rjwysocki.net> Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2022-12-08  ice: synchronize the misc IRQ when tearing down Tx tracker  (Jacob Keller)
Since commit 1229b33973c7 ("ice: Add low latency Tx timestamp read") the ice driver has used a threaded IRQ for handling Tx timestamps. This change did not add a call to synchronize_irq during ice_ptp_release_tx_tracker. Thus it is possible that an interrupt could occur just as the tracker is being removed. This could lead to a use-after-free of the Tx tracker structure data. Fix this by calling synchronize_irq in ice_ptp_release_tx_tracker after we've cleared the init flag. In addition, make sure that we re-check the init flag at the end of ice_ptp_tx_tstamp before we exit, ensuring that we will stop polling for new timestamps once the tracker de-initialization has begun. Refactor the ts_handled variable into "more_timestamps" so that we can directly assign this boolean instead of relying on an initialized value of true. This makes the new combined check easier to read. With this change, the ice_ptp_release_tx_tracker function will now wait for the threaded interrupt to complete if it was executing while the init flag was cleared. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: check Tx timestamp memory register for ready timestamps  (Jacob Keller)
The PHY for E822 based hardware has a register which indicates which timestamps are valid in the PHY timestamp memory block. Each bit in the register indicates whether the associated index in the timestamp memory is valid. Hardware sets this bit when the timestamp is captured, and clears the bit when the timestamp is read. Use of this register is important as reading timestamp registers can impact the way that hardware generates timestamp interrupts. This occurs because the PHY has an internal value which is incremented when hardware captures a timestamp and decremented when software reads a timestamp. Reading timestamps which are not marked as valid still decrement the internal value and can result in the Tx timestamp interrupt not triggering in the future. To prevent this, use the timestamp memory value to determine which timestamps are ready to be read. The ice_get_phy_tx_tstamp_ready function reads this value. For E810 devices, this just always returns with all bits set. Skip any timestamp which is not set in this bitmap, avoiding reading extra timestamps on E822 devices. The stale check against a cached timestamp value is no longer necessary for PHYs which support the timestamp ready bitmap properly. E810 devices still need this. Introduce a new verify_cached flag to the ice_ptp_tx structure. Use this to determine if we need to perform the verification against the cached timestamp value. Set this to 1 for the E810 Tx tracker init function. Notice that many of the fields in ice_ptp_tx are simple 1 bit flags. Save some structure space by using bitfields of length 1 for these values. Modify the ICE_PTP_TS_VALID check to simply drop the timestamp immediately so that in an event of getting such an invalid timestamp the driver does not attempt to re-read the timestamp again in a future poll of the register. With these changes, the driver now reads each timestamp register exactly once, and does not attempt any re-reads. This ensures the interrupt tracking logic in the PHY will not get stuck. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
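On the space-saving note above, single-bit bitfields look like this in C; the flag names follow the description (init, calibrating, verify_cached) but the real struct ice_ptp_tx layout is not reproduced here.

#include <stdint.h>

struct tx_flags {
    uint8_t init : 1;          /* tracker is initialized */
    uint8_t calibrating : 1;   /* port has not finished calibration */
    uint8_t verify_cached : 1; /* E810 only: compare against cached timestamp */
};
/* All three flags share one storage unit instead of three separate members. */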
2022-12-08  ice: handle discarding old Tx requests in ice_ptp_tx_tstamp  (Jacob Keller)
Currently the driver uses the PTP kthread to process handling and discarding of stale Tx timestamp requests. The function ice_ptp_tx_tstamp_cleanup is used for this. A separate thread creates complications for the driver as we now have both the main Tx timestamp processing IRQ checking timestamps as well as the kthread. Rather than using the kthread to handle this, simply check for stale timestamps within the ice_ptp_tx_tstamp function. This function must already process the timestamps anyways. If a Tx timestamp has been waiting for 2 seconds we simply clear the bit and discard the SKB. This avoids the complication of having separate threads polling, reducing overall CPU work. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: always call ice_ptp_link_change and make it void  (Jacob Keller)
The ice_ptp_link_change function is currently only called for E822 based hardware. Future changes are going to extend this function to perform additional tasks on link change. Always call this function, moving the E810 check from the callers down to just before we call the E822-specific function required to restart the PHY. This function also returns an error value, but none of the callers actually check it. In general, the errors it produces are more likely systemic problems such as invalid or corrupt port numbers. No caller checks these, and so no warning is logged. Re-order the flag checks so that ICE_FLAG_PTP is checked first. Drop the unnecessary check for ICE_FLAG_PTP_SUPPORTED, as ICE_FLAG_PTP will not be set except when ICE_FLAG_PTP_SUPPORTED is set. Convert the port checks to WARN_ON_ONCE, in order to generate a kernel stack trace when they are hit. Convert the function to void since no caller actually checks these return values. Co-developed-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: fix misuse of "link err" with "link status"  (Jacob Keller)
The ice_ptp_link_change function has a comment which mentions "link err" when referring to the current link status. We are storing the status of whether link is up or down, which is not an error. It appears that this use of err accidentally got included due to an overzealous search and replace when removing the ice_status enum and local status variable. Fix the wording to use the correct term. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: Reset TS memory for all quads  (Karol Kolacinski)
In E822 products, the owner PF should reset memory for all quads, not only for the one where the assigned lport is. Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: Remove the E822 vernier "bypass" logic  (Milena Olech)
The E822 devices support an extended "vernier" calibration which enables higher precision timestamps by accounting for delays in the PHY, and compensating for them. These delays are measured by hardware as part of its vernier calibration logic. The driver currently starts the PHY in "bypass" mode which skips the compensation. Then it later attempts to switch from bypass to vernier. This unfortunately does not work as expected. Instead of properly compensating for the delays, the hardware continues operating in bypass without the improved precision expected. Because we cannot dynamically switch between bypass and vernier mode, refactor the driver to always operate in vernier mode. This has a slight downside: Tx timestamp and Rx timestamp requests that occur as the very first packet set after link up will not complete properly and may be reported to applications as missing timestamps. This occurs frequently in test environments where traffic is light or targeted specifically at testing PTP. However, in practice most environments will have transmitted or received some data over the network before such initial requests are made. Signed-off-by: Milena Olech <milena.olech@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-08  ice: Use more generic names for ice_ptp_tx fields  (Sergey Temerkhanov)
Some supported devices have per-port timestamp memory blocks while others have shared ones within quads. Rename the struct ice_ptp_tx fields to reflect the block entities it works with Signed-off-by: Sergey Temerkhanov <sergey.temerkhanov@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>