author | Linus Torvalds <torvalds@linux-foundation.org> | 2024-11-21 08:28:08 -0800
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2024-11-21 08:28:08 -0800
commit | fcc79e1714e8c2b8e216dc3149812edd37884eef (patch)
tree | 17a51d29db810b81412be040aaf380936b3261b4 /drivers/net/ethernet/marvell/octeontx2
parent | 6e95ef0258ff4ee23ae3b06bf6b00b33dbbd5ef7 (diff)
parent | dd7207838d38780b51e4690ee508ab2d5057e099 (diff)
Merge tag 'net-next-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni:
"The most significant set of changes is the per netns RTNL. The new
behavior is disabled by default, regression risk should be contained.
Notably the new config knob PTP_1588_CLOCK_VMCLOCK will inherit its
default value from PTP_1588_CLOCK_KVM, as the first is intended to be
a more reliable replacement for the latter.
Core:
- Started a very large, in-progress effort to make the RTNL lock
scope per network namespace, thus significantly reducing the lock
contention in the containerized use case, comprising:
- RCU-ified some relevant slices of the FIB control path
- introduced basic per-netns locking helpers
- namespacified the IPv4 address hash table
- removed rtnl_register{,_module}() in favour of
rtnl_register_many()
- refactored rtnl_{new,del,set}link(), moving as much validation as
possible out of the RTNL lock
- converted all phonet doit() and dumpit() handlers to RCU
- converted IPv4 address manipulation to per-netns RTNL
- converted virtual interface creation to per-netns RTNL
The per-netns lock infrastructure is guarded by the
CONFIG_DEBUG_NET_SMALL_RTNL knob, disabled by default ad interim.
- Introduce NAPI suspension, to switch efficiently between busy
polling (NAPI processing suspended) and normal processing.
- Migrate the IPv4 routing input, output and control paths from
direct ToS usage to DSCP macros. This is a work in progress to make
ECN handling consistent and reliable.
- Add drop reason support to the IPv4 route input path, allowing
better introspection in case of packet drops.
- Make FIB seqnum lockless, dropping RTNL protection for read access.
- Make inet{,v6} address hashing less predictable.
- Allow providing the timestamp OPT_ID via cmsg, to correlate TX
packets and timestamps (see the sketch after this list).
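
As an illustration of the OPT_ID-via-cmsg item above, a minimal
userspace sketch. It assumes the interface from this cycle's series: a
SOL_SOCKET control message of type SCM_TS_OPT_ID carrying a u32 key,
honoured on sockets that enabled SOF_TIMESTAMPING_OPT_ID. The helper
name is illustrative and the fallback define should be checked against
your uapi headers:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <linux/types.h>
    #include <linux/net_tstamp.h>   /* SOF_TIMESTAMPING_OPT_ID */

    #ifndef SCM_TS_OPT_ID
    #define SCM_TS_OPT_ID 81        /* current uapi value; verify locally */
    #endif

    /* Send one datagram whose TX timestamp completion will carry the
     * caller-chosen key instead of the kernel-maintained counter.
     * Assumes SO_TIMESTAMPING is already enabled on fd with
     * SOF_TIMESTAMPING_OPT_ID plus the desired TX flags. */
    static ssize_t send_with_ts_key(int fd, const void *buf, size_t len,
                                    __u32 key)
    {
            char control[CMSG_SPACE(sizeof(__u32))] = {};
            struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
            struct msghdr msg = {
                    .msg_iov = &iov,
                    .msg_iovlen = 1,
                    .msg_control = control,
                    .msg_controllen = sizeof(control),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_TS_OPT_ID;
            cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
            memcpy(CMSG_DATA(cmsg), &key, sizeof(key));

            return sendmsg(fd, &msg, 0);
    }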
Things we sprinkled into general kernel code:
- Add small file operations for debugfs, to reduce the struct ops
size (a sketch follows this list).
- Refactor and optimize the implementation of the page_frag API.
This is preparatory work to consolidate the page_frag
implementation.
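
For the debugfs item above, a minimal in-kernel sketch, assuming the
struct debugfs_short_fops interface added by that series (read/write/
llseek only, accepted by debugfs_create_file() alongside full fops);
the names here are illustrative:

    #include <linux/debugfs.h>
    #include <linux/fs.h>

    static ssize_t foo_read(struct file *file, char __user *buf,
                            size_t count, loff_t *ppos)
    {
            static const char msg[] = "hello\n";

            return simple_read_from_buffer(buf, count, ppos, msg,
                                           sizeof(msg) - 1);
    }

    /* Three members instead of a full struct file_operations. */
    static const struct debugfs_short_fops foo_fops = {
            .read = foo_read,
    };

    static void foo_debugfs_init(struct dentry *parent)
    {
            debugfs_create_file("foo", 0444, parent, NULL, &foo_fops);
    }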
Netfilter:
- Optimize set element transactions to reduce memory consumption
- Extend netlink error reporting for attribute parser failures.
- Make legacy xtables configs user selectable, giving users the
  option to configure iptables without enabling any other config.
- Address a lot of false-positive RCU issues, pointed out by recent
  CI improvements.
BPF:
- Put xsk sockets on a struct diet and add various cleanups. Overall,
this helps to bump performance by 12% for some workloads.
- Extend BPF selftests to increase coverage of XDP features in
combination with BPF cpumap.
- Optimize and homogenize bpf_csum_diff helper for all archs and also
add a batch of new BPF selftests for it.
- Extend netkit with an option to delegate skb->{mark,priority}
scrubbing to its BPF program.
- Make the bpf_get_netns_cookie() helper available also to tc(x) BPF
programs (see the example after this list).
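
A minimal tcx-attachable program using the newly allowed helper — a
sketch only: the program name and the bpf_printk() logging are
illustrative, and attachment (e.g. via libbpf's bpf_prog_attach() with
BPF_TCX_INGRESS) is assumed:

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* The helper used to be rejected for tc/tcx program types;
     * the netns cookie makes a handy per-netns accounting key. */
    SEC("tc")
    int netns_tagger(struct __sk_buff *skb)
    {
            __u64 cookie = bpf_get_netns_cookie(skb);

            bpf_printk("netns cookie %llu", cookie);
            return TCX_NEXT;        /* fall through to the next program */
    }

    char _license[] SEC("license") = "GPL";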
Protocols:
- Introduce a 4-tuple hash for connected UDP sockets, significantly
speeding up connected socket lookups (a usage note follows this
list).
- Add a fastpath for some TCP timers that usually expire after
close, reducing socket lock contention.
- Add inbound and outbound xfrm state caches to speed up state
lookups.
- Avoid sending MPTCP advertisements on stale subflows, reducing the
risk of losing them.
- Make neighbour table flushing more scalable by maintaining
per-device neighbour lists.
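
Nothing new is required from applications to benefit from the UDP
change: any socket that connect()s to its peer gets the faster lookup.
A plain POSIX sketch:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* After connect(), inbound datagrams for this flow can be matched
     * via the new 4-tuple hash instead of walking the 2-tuple
     * (local address/port) chain. */
    static int open_connected_udp(const char *ip, unsigned short port)
    {
            struct sockaddr_in dst = {
                    .sin_family = AF_INET,
                    .sin_port = htons(port),
            };
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd < 0)
                    return -1;
            if (inet_pton(AF_INET, ip, &dst.sin_addr) != 1 ||
                connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }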
Driver API:
- Introduce a unified interface to configure transmission H/W
shaping, and expose it to user-space via generic netlink.
- Add support for per-NAPI config via netlink. This makes NAPI
configuration persistent across queue removal and re-creation (see
the YNL example after this list). Requires driver updates;
currently supported drivers are: nVidia/Mellanox mlx4 and mlx5,
Broadcom brcm and Intel ice.
- Add ethtool support for writing SFP / PHY firmware blocks.
- Track RSS context allocation from the ethtool core.
- Implement support for mirroring to the DSA CPU port, via TC mirror
offload.
- Consolidate FDB update notifications, to avoid duplicates for
device-specific entries.
- Expose the DPLL clock quality level to user-space.
- Support master-slave PHY config via device tree.
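
A per-NAPI config sketch using the in-tree YNL CLI; the NAPI id (345
here) is hypothetical and would come from a prior napi-get/queue-get
call, and the attribute names follow the netdev netlink spec as
extended this cycle:

    $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
          --do napi-set --json '{"id": 345,
                                 "defer-hard-irqs": 10,
                                 "gro-flush-timeout": 20000}'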
Tests and tooling:
- forwarding: introduce deferred commands, to simplify the cleanup
phase (see the sketch below).
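
A sketch of the deferred-commands pattern, assuming the defer helper
from the updated selftest library; schedule teardown right next to
setup, and the deferred commands run in reverse order at cleanup:

    ip link add name dummy1 up type dummy
    defer ip link del dev dummy1

    ip address add 192.0.2.1/28 dev dummy1
    defer ip address del 192.0.2.1/28 dev dummy1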
Drivers:
- Updated several drivers - Amazon vNic, Google vNic, Microsoft vNic,
Intel e1000e and Broadcom Tigon3 - to use netdev-genl to link the
IRQs and queues to NAPI IDs, allowing busy polling and better
introspection.
- Ethernet high-speed NICs:
- nVidia/Mellanox:
- mlx5:
- a large refactor to implement support for cross E-Switch
scheduling
- refactor H/W counter management to let it scale better
- H/W GRO cleanups
- Intel (100G, ice):
- add support for ethtool reset
- implement support for per TX queue H/W shaping
- AMD/Solarflare:
- implement per device queue stats support
- Broadcom (bnxt):
- improve wildcard l4proto on IPv4/IPv6 ntuple rules
- Marvell Octeon:
- Add representor support for each Resource Virtualization Unit
(RVU) device.
- Hisilicon:
- add support for the BMC Gigabit Ethernet
- IBM (EMAC):
- driver cleanup and modernization
- Cisco (VIC):
- raise the queue count limit to 256
- Ethernet virtual:
- Google vNIC:
- implement page pool support
- macsec:
- inherit lower device's features and TSO limits when
offloading
- virtio_net:
- enable premapped mode by default
- support for XDP socket (AF_XDP) zerocopy TX
- wireguard:
- set the TSO max size to be GSO_MAX_SIZE, to aggregate larger
packets.
- Ethernet NICs embedded and virtual:
- Broadcom ASP:
- enable software timestamping
- Freescale:
- add enetc4 PF driver
- MediaTek: Airoha SoC:
- implement BQL support
- RealTek r8169:
- enable TSO by default on r8168/r8125
- implement extended ethtool stats
- Renesas AVB:
- enable TX checksum offload
- Synopsys (stmmac):
- support header splitting for VLAN-tagged packets
- move common code for DWMAC4 and DWXGMAC into a separate FPE
module.
- add dwmac driver support for T-HEAD TH1520 SoC
- Synopsys (xpcs):
- driver refactor and cleanup
- TI:
- icssg_prueth: add VLAN offload support
- Xilinx emaclite:
- add clock support
- Ethernet switches:
- Microchip:
- implement support for the lan969x Ethernet switch family
- add LAN9646 switch support to KSZ DSA driver
- Ethernet PHYs:
- Marvell: 88q2x: enable auto-negotiation
- Microchip: add support for LAN865X Rev B1 and LAN867X Rev C1/C2
- PTP:
- Add support for the Amazon virtual clock device
- Add PtP driver for s390 clocks
- WiFi:
- mac80211
- EHT 1024 aggregation size for transmissions
- new operation to indicate that a new interface is to be added
- support radio separation of multi-band devices
- move wireless extension spy implementation to libiw
- Broadcom:
- brcmfmac: optional LPO clock support
- Microchip:
- add support for Atmel WILC3000
- Qualcomm (ath12k):
- firmware coredump collection support
- add debugfs support for a multitude of statistics
- Qualcomm (ath5k):
- Arcadyan ARV45XX AR2417 & Gigaset SX76[23] AR241[34]A support
- Realtek:
- rtw88: 8821au and 8812au USB adapters support
- rtw89: add thermal protection
- rtw89: fine-tune BT-coexistence to improve user experience
- rtw89: firmware secure boot for WiFi 6 chip
- Bluetooth
- add Qualcomm WCN785x support for ids Foxconn 0xe0fc/0xe0f3 and
0x13d3:0x3623
- add Realtek RTL8852BE support for id Foxconn 0xe123
- add MediaTek MT7920 support for wireless module ids
- btintel_pcie: add handshake between driver and firmware
- btintel_pcie: add recovery mechanism
- btnxpuart: add GPIO support to power save feature"
* tag 'net-next-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1475 commits)
mm: page_frag: fix a compile error when kernel is not compiled
Documentation: tipc: fix formatting issue in tipc.rst
selftests: nic_performance: Add selftest for performance of NIC driver
selftests: nic_link_layer: Add selftest case for speed and duplex states
selftests: nic_link_layer: Add link layer selftest for NIC driver
bnxt_en: Add FW trace coredump segments to the coredump
bnxt_en: Add a new ethtool -W dump flag
bnxt_en: Add 2 parameters to bnxt_fill_coredump_seg_hdr()
bnxt_en: Add functions to copy host context memory
bnxt_en: Do not free FW log context memory
bnxt_en: Manage the FW trace context memory
bnxt_en: Allocate backing store memory for FW trace logs
bnxt_en: Add a 'force' parameter to bnxt_free_ctx_mem()
bnxt_en: Refactor bnxt_free_ctx_mem()
bnxt_en: Add mem_valid bit to struct bnxt_ctx_mem_type
bnxt_en: Update firmware interface spec to 1.10.3.85
selftests/bpf: Add some tests with sockmap SK_PASS
bpf: fix recursive lock when verdict program return SK_PASS
wireguard: device: support big tcp GSO
wireguard: selftests: load nf_conntrack if not present
...
Diffstat (limited to 'drivers/net/ethernet/marvell/octeontx2')
31 files changed, 2255 insertions, 322 deletions
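
Before the diff itself, a usage sketch for one of the knobs it adds:
the AF driver gains a runtime devlink param, npc_def_rule_cntr, which
attaches or detaches hit counters on the default MCAM rules. The PCI
address below is hypothetical:

    $ devlink dev param set pci/0002:01:00.0 name npc_def_rule_cntr \
          value true cmode runtime
    $ devlink dev param show pci/0002:01:00.0 name npc_def_rule_cntr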
diff --git a/drivers/net/ethernet/marvell/octeontx2/Kconfig b/drivers/net/ethernet/marvell/octeontx2/Kconfig index a32d85d6f599..35c4f5f64f58 100644 --- a/drivers/net/ethernet/marvell/octeontx2/Kconfig +++ b/drivers/net/ethernet/marvell/octeontx2/Kconfig @@ -46,3 +46,11 @@ config OCTEONTX2_VF depends on OCTEONTX2_PF help This driver supports Marvell's OcteonTX2 NIC virtual function. + +config RVU_ESWITCH + tristate "Marvell RVU E-Switch support" + depends on OCTEONTX2_PF + default m + help + This driver supports Marvell's RVU E-Switch that + provides internal SRIOV packet steering and switching. diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile index 3cf4c8285c90..ccea37847df8 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile @@ -11,4 +11,5 @@ rvu_mbox-y := mbox.o rvu_trace.o rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \ rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \ rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \ - rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o + rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \ + rvu_rep.o diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h index 2436c1ff9ba4..5d84386ed22d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/common.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h @@ -156,6 +156,7 @@ enum nix_scheduler { #define NIC_HW_MIN_FRS 40 #define NIC_HW_MAX_FRS 9212 #define SDP_HW_MAX_FRS 65535 +#define SDP_HW_MIN_FRS 16 #define CN10K_LMAC_LINK_MAX_FRS 16380 /* 16k - FCS */ #define CN10K_LBK_LINK_MAX_FRS 65535 /* 64k */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h index 6ea2f3071fe8..62c07407eb94 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h @@ -144,6 +144,9 @@ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \ msg_rsp) \ M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \ M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \ +M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \ +M(ESW_CFG, 0x00e, esw_cfg, esw_cfg_req, msg_rsp) \ +M(REP_EVENT_NOTIFY, 0x00f, rep_event_notify, rep_event, msg_rsp) \ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \ M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \ M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \ @@ -319,6 +322,7 @@ M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_re M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, \ nix_mcast_grp_update_req, \ nix_mcast_grp_update_rsp) \ +M(NIX_LF_STATS, 0x802e, nix_lf_stats, nix_stats_req, nix_stats_rsp) \ /* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \ M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \ mcs_alloc_rsrc_rsp) \ @@ -380,12 +384,16 @@ M(CPT_INST_LMTST, 0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp) #define MBOX_UP_MCS_MESSAGES \ M(MCS_INTR_NOTIFY, 0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp) +#define MBOX_UP_REP_MESSAGES \ +M(REP_EVENT_UP_NOTIFY, 0xEF0, rep_event_up_notify, rep_event, msg_rsp) \ + enum { #define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id, MBOX_MESSAGES MBOX_UP_CGX_MESSAGES MBOX_UP_CPT_MESSAGES MBOX_UP_MCS_MESSAGES +MBOX_UP_REP_MESSAGES #undef M }; @@ -1364,6 +1372,37 @@ struct nix_bandprof_get_hwinfo_rsp { u32 
policer_timeunit; }; +struct nix_stats_req { + struct mbox_msghdr hdr; + u8 reset; + u16 pcifunc; + u64 rsvd; +}; + +struct nix_stats_rsp { + struct mbox_msghdr hdr; + u16 pcifunc; + struct { + u64 octs; + u64 ucast; + u64 bcast; + u64 mcast; + u64 drop; + u64 drop_octs; + u64 drop_mcast; + u64 drop_bcast; + u64 err; + u64 rsvd[5]; + } rx; + struct { + u64 ucast; + u64 bcast; + u64 mcast; + u64 drop; + u64 octs; + } tx; +}; + /* NPC mbox message structs */ #define NPC_MCAM_ENTRY_INVALID 0xFFFF @@ -1525,6 +1564,41 @@ struct ptp_get_cap_rsp { u64 cap; }; +struct get_rep_cnt_rsp { + struct mbox_msghdr hdr; + u16 rep_cnt; + u16 rep_pf_map[64]; + u64 rsvd; +}; + +struct esw_cfg_req { + struct mbox_msghdr hdr; + u8 ena; + u64 rsvd; +}; + +struct rep_evt_data { + u8 port_state; + u8 vf_state; + u16 rx_mode; + u16 rx_flags; + u16 mtu; + u8 mac[ETH_ALEN]; + u64 rsvd[5]; +}; + +struct rep_event { + struct mbox_msghdr hdr; + u16 pcifunc; +#define RVU_EVENT_PORT_STATE BIT_ULL(0) +#define RVU_EVENT_PFVF_STATE BIT_ULL(1) +#define RVU_EVENT_MTU_CHANGE BIT_ULL(2) +#define RVU_EVENT_RX_MODE_CHANGE BIT_ULL(3) +#define RVU_EVENT_MAC_ADDR_CHANGE BIT_ULL(4) + u16 event; + struct rep_evt_data evt_data; +}; + struct flow_msg { unsigned char dmac[6]; unsigned char smac[6]; @@ -1563,6 +1637,7 @@ struct flow_msg { u8 icmp_type; u8 icmp_code; __be16 tcp_flags; + u16 sq_id; }; struct npc_install_flow_req { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 5016ba82e142..b897845e25fd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -513,6 +513,11 @@ struct rvu_switch { u16 start_entry; }; +struct rep_evtq_ent { + struct list_head node; + struct rep_event event; +}; + struct rvu { void __iomem *afreg_base; void __iomem *pfreg_base; @@ -525,6 +530,7 @@ struct rvu { struct mutex alias_lock; /* Serialize bar2 alias access */ int vfs; /* Number of VFs attached to RVU */ u16 vf_devid; /* VF devices id */ + bool def_rule_cntr_en; int nix_blkaddr[MAX_NIX_BLKS]; /* Mbox */ @@ -594,6 +600,15 @@ struct rvu { spinlock_t cpt_intr_lock; struct mutex mbox_lock; /* Serialize mbox up and down msgs */ + u16 rep_pcifunc; + int rep_cnt; + u16 *rep2pfvf_map; + u8 rep_mode; + struct work_struct rep_evt_work; + struct workqueue_struct *rep_evt_wq; + struct list_head rep_evtq_head; + /* Representor event lock */ + spinlock_t rep_evtq_lock; }; static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val) @@ -852,6 +867,14 @@ bool is_sdp_pfvf(u16 pcifunc); bool is_sdp_pf(u16 pcifunc); bool is_sdp_vf(struct rvu *rvu, u16 pcifunc); +static inline bool is_rep_dev(struct rvu *rvu, u16 pcifunc) +{ + if (rvu->rep_pcifunc && rvu->rep_pcifunc == pcifunc) + return true; + + return false; +} + /* CGX APIs */ static inline bool is_pf_cgxmapped(struct rvu *rvu, u8 pf) { @@ -960,7 +983,11 @@ void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf); void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf); void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf, int group, int alg_idx, int mcam_index); - +void __rvu_mcam_remove_counter_from_rule(struct rvu *rvu, u16 pcifunc, + struct rvu_npc_mcam_rule *rule); +void __rvu_mcam_add_counter_to_rule(struct rvu *rvu, u16 pcifunc, + struct rvu_npc_mcam_rule *rule, + struct npc_install_flow_rsp *rsp); void rvu_npc_get_mcam_entry_alloc_info(struct rvu *rvu, u16 pcifunc, int blkaddr, int *alloc_cnt, int 
*enable_cnt); @@ -985,6 +1012,7 @@ void npc_set_mcam_action(struct rvu *rvu, struct npc_mcam *mcam, void npc_read_mcam_entry(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr, u16 src, struct mcam_entry *entry, u8 *intf, u8 *ena); +int npc_config_cntr_default_entries(struct rvu *rvu, bool enable); bool is_cgx_config_permitted(struct rvu *rvu, u16 pcifunc); bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature); u32 rvu_cgx_get_fifolen(struct rvu *rvu); @@ -1045,7 +1073,8 @@ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr); /* RVU Switch */ void rvu_switch_enable(struct rvu *rvu); void rvu_switch_disable(struct rvu *rvu); -void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc); +void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc, bool ena); +void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool ena); int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir, u64 pkind, u8 var_len_off, u8 var_len_off_mask, @@ -1058,4 +1087,9 @@ int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc); void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena); void rvu_mcs_exit(struct rvu *rvu); +/* Representor APIs */ +int rvu_rep_pf_init(struct rvu *rvu); +int rvu_rep_install_mcam_rules(struct rvu *rvu); +void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena); +int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable); #endif /* RVU_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c index 87ba77e5026a..148144f5b61d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c @@ -45,33 +45,6 @@ enum { CGX_STAT18, }; -/* NIX TX stats */ -enum nix_stat_lf_tx { - TX_UCAST = 0x0, - TX_BCAST = 0x1, - TX_MCAST = 0x2, - TX_DROP = 0x3, - TX_OCTS = 0x4, - TX_STATS_ENUM_LAST, -}; - -/* NIX RX stats */ -enum nix_stat_lf_rx { - RX_OCTS = 0x0, - RX_UCAST = 0x1, - RX_BCAST = 0x2, - RX_MCAST = 0x3, - RX_DROP = 0x4, - RX_DROP_OCTS = 0x5, - RX_FCS = 0x6, - RX_ERR = 0x7, - RX_DRP_BCAST = 0x8, - RX_DRP_MCAST = 0x9, - RX_DRP_L3BCAST = 0xa, - RX_DRP_L3MCAST = 0xb, - RX_STATS_ENUM_LAST, -}; - static char *cgx_rx_stats_fields[] = { [CGX_STAT0] = "Received packets", [CGX_STAT1] = "Octets of received packets", @@ -663,16 +636,16 @@ static ssize_t rvu_dbg_lmtst_map_table_display(struct file *filp, RVU_DEBUG_FOPS(lmtst_map_table, lmtst_map_table_display, NULL); -static void get_lf_str_list(struct rvu_block block, int pcifunc, +static void get_lf_str_list(const struct rvu_block *block, int pcifunc, char *lfs) { - int lf = 0, seq = 0, len = 0, prev_lf = block.lf.max; + int lf = 0, seq = 0, len = 0, prev_lf = block->lf.max; - for_each_set_bit(lf, block.lf.bmap, block.lf.max) { - if (lf >= block.lf.max) + for_each_set_bit(lf, block->lf.bmap, block->lf.max) { + if (lf >= block->lf.max) break; - if (block.fn_map[lf] != pcifunc) + if (block->fn_map[lf] != pcifunc) continue; if (lf == prev_lf + 1) { @@ -719,7 +692,7 @@ static int get_max_column_width(struct rvu *rvu) if (!strlen(block.name)) continue; - get_lf_str_list(block, pcifunc, buf); + get_lf_str_list(&block, pcifunc, buf); if (lf_str_size <= strlen(buf)) lf_str_size = strlen(buf) + 1; } @@ -803,7 +776,7 @@ static ssize_t rvu_dbg_rsrc_attach_status(struct file *filp, continue; len = 0; lfs[len] = '\0'; - get_lf_str_list(block, pcifunc, lfs); + get_lf_str_list(&block, pcifunc, lfs); if (strlen(lfs)) flag = 1; diff --git 
a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c index 7498ab429963..dab4deca893f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c @@ -1238,6 +1238,7 @@ enum rvu_af_dl_param_id { RVU_AF_DEVLINK_PARAM_ID_DWRR_MTU, RVU_AF_DEVLINK_PARAM_ID_NPC_MCAM_ZONE_PERCENT, RVU_AF_DEVLINK_PARAM_ID_NPC_EXACT_FEATURE_DISABLE, + RVU_AF_DEVLINK_PARAM_ID_NPC_DEF_RULE_CNTR_ENABLE, RVU_AF_DEVLINK_PARAM_ID_NIX_MAXLF, }; @@ -1358,6 +1359,32 @@ static int rvu_af_dl_npc_mcam_high_zone_percent_validate(struct devlink *devlink return 0; } +static int rvu_af_dl_npc_def_rule_cntr_get(struct devlink *devlink, u32 id, + struct devlink_param_gset_ctx *ctx) +{ + struct rvu_devlink *rvu_dl = devlink_priv(devlink); + struct rvu *rvu = rvu_dl->rvu; + + ctx->val.vbool = rvu->def_rule_cntr_en; + + return 0; +} + +static int rvu_af_dl_npc_def_rule_cntr_set(struct devlink *devlink, u32 id, + struct devlink_param_gset_ctx *ctx, + struct netlink_ext_ack *extack) +{ + struct rvu_devlink *rvu_dl = devlink_priv(devlink); + struct rvu *rvu = rvu_dl->rvu; + int err; + + err = npc_config_cntr_default_entries(rvu, ctx->val.vbool); + if (!err) + rvu->def_rule_cntr_en = ctx->val.vbool; + + return err; +} + static int rvu_af_dl_nix_maxlf_get(struct devlink *devlink, u32 id, struct devlink_param_gset_ctx *ctx) { @@ -1444,6 +1471,11 @@ static const struct devlink_param rvu_af_dl_params[] = { rvu_af_dl_npc_mcam_high_zone_percent_get, rvu_af_dl_npc_mcam_high_zone_percent_set, rvu_af_dl_npc_mcam_high_zone_percent_validate), + DEVLINK_PARAM_DRIVER(RVU_AF_DEVLINK_PARAM_ID_NPC_DEF_RULE_CNTR_ENABLE, + "npc_def_rule_cntr", DEVLINK_PARAM_TYPE_BOOL, + BIT(DEVLINK_PARAM_CMODE_RUNTIME), + rvu_af_dl_npc_def_rule_cntr_get, + rvu_af_dl_npc_def_rule_cntr_set, NULL), DEVLINK_PARAM_DRIVER(RVU_AF_DEVLINK_PARAM_ID_NIX_MAXLF, "nix_maxlf", DEVLINK_PARAM_TYPE_U16, BIT(DEVLINK_PARAM_CMODE_RUNTIME), @@ -1468,6 +1500,9 @@ static int rvu_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode) struct rvu *rvu = rvu_dl->rvu; struct rvu_switch *rswitch; + if (rvu->rep_mode) + return -EOPNOTSUPP; + rswitch = &rvu->rswitch; *mode = rswitch->mode; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index da69350c6f76..5d5a01dbbca1 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -31,6 +31,7 @@ static int nix_free_all_bandprof(struct rvu *rvu, u16 pcifunc); static void nix_clear_ratelimit_aggr(struct rvu *rvu, struct nix_hw *nix_hw, u32 leaf_prof); static const char *nix_get_ctx_name(int ctype); +static int nix_get_tx_link(struct rvu *rvu, u16 pcifunc); enum mc_tbl_sz { MC_TBL_SZ_256, @@ -312,7 +313,9 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr, /* TLs aggegating traffic are shared across PF and VFs */ if (lvl >= hw->cap.nix_tx_aggr_lvl) { - if (rvu_get_pf(map_func) != rvu_get_pf(pcifunc)) + if ((nix_get_tx_link(rvu, map_func) != + nix_get_tx_link(rvu, pcifunc)) && + (rvu_get_pf(map_func) != rvu_get_pf(pcifunc))) return false; else return true; @@ -360,7 +363,6 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf, cgx_set_pkind(rvu_cgx_pdata(cgx_id, rvu), lmac_id, pkind); rvu_npc_set_pkind(rvu, pkind, pfvf); - break; case NIX_INTF_TYPE_LBK: vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1; @@ -584,6 +586,9 @@ int 
rvu_mbox_handler_nix_bp_disable(struct rvu *rvu, if (!is_pf_cgxmapped(rvu, pf) && type != NIX_INTF_TYPE_LBK) return 0; + if (is_sdp_pfvf(pcifunc)) + type = NIX_INTF_TYPE_SDP; + pfvf = rvu_get_pfvf(rvu, pcifunc); err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr); if (err) @@ -1614,6 +1619,12 @@ int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu, cfg = NPC_TX_DEF_PKIND; rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_PARSE_CFG(nixlf), cfg); + if (is_rep_dev(rvu, pcifunc)) { + pfvf->tx_chan_base = RVU_SWITCH_LBK_CHAN; + pfvf->tx_chan_cnt = 1; + goto exit; + } + intf = is_lbk_vf(rvu, pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX; if (is_sdp_pfvf(pcifunc)) intf = NIX_INTF_TYPE_SDP; @@ -1684,6 +1695,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req, if (nixlf < 0) return NIX_AF_ERR_AF_LF_INVALID; + if (is_rep_dev(rvu, pcifunc)) + goto free_lf; + if (req->flags & NIX_LF_DISABLE_FLOWS) rvu_npc_disable_mcam_entries(rvu, pcifunc, nixlf); else @@ -1695,6 +1709,7 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req, nix_interface_deinit(rvu, pcifunc, nixlf); +free_lf: /* Reset this NIX LF */ err = rvu_lf_reset(rvu, block, nixlf); if (err) { @@ -2007,7 +2022,8 @@ static void nix_get_txschq_range(struct rvu *rvu, u16 pcifunc, struct rvu_hwinfo *hw = rvu->hw; int pf = rvu_get_pf(pcifunc); - if (is_lbk_vf(rvu, pcifunc)) { /* LBK links */ + /* LBK links */ + if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc)) { *start = hw->cap.nix_txsch_per_cgx_lmac * link; *end = *start + hw->cap.nix_txsch_per_lbk_lmac; } else if (is_pf_cgxmapped(rvu, pf)) { /* CGX links */ @@ -2760,7 +2776,7 @@ void rvu_nix_tx_tl2_cfg(struct rvu *rvu, int blkaddr, u16 pcifunc, int schq; u64 cfg; - if (!is_pf_cgxmapped(rvu, pf)) + if (!is_pf_cgxmapped(rvu, pf) && !is_rep_dev(rvu, pcifunc)) return; cfg = enable ? 
(BIT_ULL(12) | RVU_SWITCH_LBK_CHAN) : 0; @@ -4393,8 +4409,6 @@ int rvu_mbox_handler_nix_set_mac_addr(struct rvu *rvu, if (test_bit(PF_SET_VF_TRUSTED, &pfvf->flags) && from_vf) ether_addr_copy(pfvf->default_mac, req->mac_addr); - rvu_switch_update_rules(rvu, pcifunc); - return 0; } @@ -4555,7 +4569,7 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, if (!nix_hw) return NIX_AF_ERR_INVALID_NIXBLK; - if (is_lbk_vf(rvu, pcifunc)) + if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc)) rvu_get_lbk_link_max_frs(rvu, &max_mtu); else rvu_get_lmac_link_max_frs(rvu, &max_mtu); @@ -4583,6 +4597,8 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req, /* For VFs of PF0 ingress is LBK port, so config LBK link */ pfvf = rvu_get_pfvf(rvu, pcifunc); link = hw->cgx_links + pfvf->lbkid; + } else if (is_rep_dev(rvu, pcifunc)) { + link = hw->cgx_links + 0; } if (link < 0) @@ -4674,7 +4690,7 @@ static void nix_link_config(struct rvu *rvu, int blkaddr, if (hw->sdp_links) { link = hw->cgx_links + hw->lbk_links; rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link), - SDP_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS); + SDP_HW_MAX_FRS << 16 | SDP_HW_MIN_FRS); } /* Get MCS external bypass status for CN10K-B */ @@ -5166,7 +5182,7 @@ int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req, { u16 pcifunc = req->hdr.pcifunc; struct rvu_pfvf *pfvf; - int nixlf, err; + int nixlf, err, pf; err = nix_get_nixlf(rvu, pcifunc, &nixlf, NULL); if (err) @@ -5182,7 +5198,11 @@ int rvu_mbox_handler_nix_lf_start_rx(struct rvu *rvu, struct msg_req *req, pfvf = rvu_get_pfvf(rvu, pcifunc); set_bit(NIXLF_INITIALIZED, &pfvf->flags); - rvu_switch_update_rules(rvu, pcifunc); + rvu_switch_update_rules(rvu, pcifunc, true); + + pf = rvu_get_pf(pcifunc); + if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode) + rvu_rep_notify_pfvf_state(rvu, pcifunc, true); return rvu_cgx_start_stop_io(rvu, pcifunc, true); } @@ -5192,7 +5212,7 @@ int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req, { u16 pcifunc = req->hdr.pcifunc; struct rvu_pfvf *pfvf; - int nixlf, err; + int nixlf, err, pf; err = nix_get_nixlf(rvu, pcifunc, &nixlf, NULL); if (err) @@ -5210,8 +5230,12 @@ int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req, if (err) return err; + rvu_switch_update_rules(rvu, pcifunc, false); rvu_cgx_tx_enable(rvu, pcifunc, true); + pf = rvu_get_pf(pcifunc); + if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode) + rvu_rep_notify_pfvf_state(rvu, pcifunc, false); return 0; } @@ -5239,6 +5263,9 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf) clear_bit(NIXLF_INITIALIZED, &pfvf->flags); + if (is_pf_cgxmapped(rvu, pf) && rvu->rep_mode) + rvu_rep_notify_pfvf_state(rvu, pcifunc, false); + rvu_cgx_start_stop_io(rvu, pcifunc, false); if (pfvf->sq_ctx) { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c index 97722ce8c4cb..821fe242f821 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c @@ -2691,6 +2691,49 @@ void npc_mcam_rsrcs_reserve(struct rvu *rvu, int blkaddr, int entry_idx) npc_mcam_set_bit(mcam, entry_idx); } +int npc_config_cntr_default_entries(struct rvu *rvu, bool enable) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + struct npc_install_flow_rsp rsp; + struct rvu_npc_mcam_rule *rule; + int blkaddr; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + if (blkaddr < 0) + return -EINVAL; + + 
mutex_lock(&mcam->lock); + list_for_each_entry(rule, &mcam->mcam_rules, list) { + if (!is_mcam_entry_enabled(rvu, mcam, blkaddr, rule->entry)) + continue; + if (!rule->default_rule) + continue; + if (enable && !rule->has_cntr) { /* Alloc and map new counter */ + __rvu_mcam_add_counter_to_rule(rvu, rule->owner, + rule, &rsp); + if (rsp.counter < 0) { + dev_err(rvu->dev, + "%s: Failed to allocate cntr for default rule (err=%d)\n", + __func__, rsp.counter); + break; + } + npc_map_mcam_entry_and_cntr(rvu, mcam, blkaddr, + rule->entry, rsp.counter); + /* Reset counter before use */ + rvu_write64(rvu, blkaddr, + NPC_AF_MATCH_STATX(rule->cntr), 0x0); + } + + /* Free and unmap counter */ + if (!enable && rule->has_cntr) + __rvu_mcam_remove_counter_from_rule(rvu, rule->owner, + rule); + } + mutex_unlock(&mcam->lock); + + return 0; +} + int rvu_mbox_handler_npc_mcam_alloc_entry(struct rvu *rvu, struct npc_mcam_alloc_entry_req *req, struct npc_mcam_alloc_entry_rsp *rsp) @@ -2975,9 +3018,9 @@ int rvu_mbox_handler_npc_mcam_shift_entry(struct rvu *rvu, return rc; } -int rvu_mbox_handler_npc_mcam_alloc_counter(struct rvu *rvu, - struct npc_mcam_alloc_counter_req *req, - struct npc_mcam_alloc_counter_rsp *rsp) +static int __npc_mcam_alloc_counter(struct rvu *rvu, + struct npc_mcam_alloc_counter_req *req, + struct npc_mcam_alloc_counter_rsp *rsp) { struct npc_mcam *mcam = &rvu->hw->mcam; u16 pcifunc = req->hdr.pcifunc; @@ -2998,11 +3041,9 @@ int rvu_mbox_handler_npc_mcam_alloc_counter(struct rvu *rvu, if (!req->contig && req->count > NPC_MAX_NONCONTIG_COUNTERS) return NPC_MCAM_INVALID_REQ; - mutex_lock(&mcam->lock); /* Check if unused counters are available or not */ if (!rvu_rsrc_free_count(&mcam->counters)) { - mutex_unlock(&mcam->lock); return NPC_MCAM_ALLOC_FAILED; } @@ -3035,12 +3076,27 @@ int rvu_mbox_handler_npc_mcam_alloc_counter(struct rvu *rvu, } } - mutex_unlock(&mcam->lock); return 0; } -int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu, - struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp) +int rvu_mbox_handler_npc_mcam_alloc_counter(struct rvu *rvu, + struct npc_mcam_alloc_counter_req *req, + struct npc_mcam_alloc_counter_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + int err; + + mutex_lock(&mcam->lock); + + err = __npc_mcam_alloc_counter(rvu, req, rsp); + + mutex_unlock(&mcam->lock); + return err; +} + +static int __npc_mcam_free_counter(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, + struct msg_rsp *rsp) { struct npc_mcam *mcam = &rvu->hw->mcam; u16 index, entry = 0; @@ -3050,10 +3106,8 @@ int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu, if (blkaddr < 0) return NPC_MCAM_INVALID_REQ; - mutex_lock(&mcam->lock); err = npc_mcam_verify_counter(mcam, req->hdr.pcifunc, req->cntr); if (err) { - mutex_unlock(&mcam->lock); return err; } @@ -3077,10 +3131,66 @@ int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu, index, req->cntr); } - mutex_unlock(&mcam->lock); return 0; } +int rvu_mbox_handler_npc_mcam_free_counter(struct rvu *rvu, + struct npc_mcam_oper_counter_req *req, struct msg_rsp *rsp) +{ + struct npc_mcam *mcam = &rvu->hw->mcam; + int err; + + mutex_lock(&mcam->lock); + + err = __npc_mcam_free_counter(rvu, req, rsp); + + mutex_unlock(&mcam->lock); + + return err; +} + +void __rvu_mcam_remove_counter_from_rule(struct rvu *rvu, u16 pcifunc, + struct rvu_npc_mcam_rule *rule) +{ + struct npc_mcam_oper_counter_req free_req = { 0 }; + struct msg_rsp free_rsp; + + if (!rule->has_cntr) + return; + + free_req.hdr.pcifunc = pcifunc; + 
free_req.cntr = rule->cntr; + + __npc_mcam_free_counter(rvu, &free_req, &free_rsp); + rule->has_cntr = false; +} + +void __rvu_mcam_add_counter_to_rule(struct rvu *rvu, u16 pcifunc, + struct rvu_npc_mcam_rule *rule, + struct npc_install_flow_rsp *rsp) +{ + struct npc_mcam_alloc_counter_req cntr_req = { 0 }; + struct npc_mcam_alloc_counter_rsp cntr_rsp = { 0 }; + int err; + + cntr_req.hdr.pcifunc = pcifunc; + cntr_req.contig = true; + cntr_req.count = 1; + + /* we try to allocate a counter to track the stats of this + * rule. If counter could not be allocated then proceed + * without counter because counters are limited than entries. + */ + err = __npc_mcam_alloc_counter(rvu, &cntr_req, &cntr_rsp); + if (!err && cntr_rsp.count) { + rule->cntr = cntr_rsp.cntr; + rule->has_cntr = true; + rsp->counter = rule->cntr; + } else { + rsp->counter = err; + } +} + int rvu_mbox_handler_npc_mcam_unmap_counter(struct rvu *rvu, struct npc_mcam_unmap_counter_req *req, struct msg_rsp *rsp) { diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c index 150635de2bd5..da69e454662a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c @@ -1081,44 +1081,26 @@ static void rvu_mcam_add_rule(struct npc_mcam *mcam, static void rvu_mcam_remove_counter_from_rule(struct rvu *rvu, u16 pcifunc, struct rvu_npc_mcam_rule *rule) { - struct npc_mcam_oper_counter_req free_req = { 0 }; - struct msg_rsp free_rsp; + struct npc_mcam *mcam = &rvu->hw->mcam; - if (!rule->has_cntr) - return; + mutex_lock(&mcam->lock); - free_req.hdr.pcifunc = pcifunc; - free_req.cntr = rule->cntr; + __rvu_mcam_remove_counter_from_rule(rvu, pcifunc, rule); - rvu_mbox_handler_npc_mcam_free_counter(rvu, &free_req, &free_rsp); - rule->has_cntr = false; + mutex_unlock(&mcam->lock); } static void rvu_mcam_add_counter_to_rule(struct rvu *rvu, u16 pcifunc, struct rvu_npc_mcam_rule *rule, struct npc_install_flow_rsp *rsp) { - struct npc_mcam_alloc_counter_req cntr_req = { 0 }; - struct npc_mcam_alloc_counter_rsp cntr_rsp = { 0 }; - int err; + struct npc_mcam *mcam = &rvu->hw->mcam; - cntr_req.hdr.pcifunc = pcifunc; - cntr_req.contig = true; - cntr_req.count = 1; + mutex_lock(&mcam->lock); - /* we try to allocate a counter to track the stats of this - * rule. If counter could not be allocated then proceed - * without counter because counters are limited than entries. 
- */ - err = rvu_mbox_handler_npc_mcam_alloc_counter(rvu, &cntr_req, - &cntr_rsp); - if (!err && cntr_rsp.count) { - rule->cntr = cntr_rsp.cntr; - rule->has_cntr = true; - rsp->counter = rule->cntr; - } else { - rsp->counter = err; - } + __rvu_mcam_add_counter_to_rule(rvu, pcifunc, rule, rsp); + + mutex_unlock(&mcam->lock); } static int npc_mcast_update_action_index(struct rvu *rvu, struct npc_install_flow_req *req, @@ -1416,6 +1398,7 @@ int rvu_mbox_handler_npc_install_flow(struct rvu *rvu, struct npc_install_flow_rsp *rsp) { bool from_vf = !!(req->hdr.pcifunc & RVU_PFVF_FUNC_MASK); + bool from_rep_dev = !!is_rep_dev(rvu, req->hdr.pcifunc); struct rvu_switch *rswitch = &rvu->rswitch; int blkaddr, nixlf, err; struct rvu_pfvf *pfvf; @@ -1472,14 +1455,19 @@ process_flow: /* AF installing for a PF/VF */ if (!req->hdr.pcifunc) target = req->vf; + /* PF installing for its VF */ - else if (!from_vf && req->vf) { + if (!from_vf && req->vf && !from_rep_dev) { target = (req->hdr.pcifunc & ~RVU_PFVF_FUNC_MASK) | req->vf; pf_set_vfs_mac = req->default_rule && (req->features & BIT_ULL(NPC_DMAC)); } - /* msg received from PF/VF */ + + /* Representor device installing for a representee */ + if (from_rep_dev && req->vf) + target = req->vf; else + /* msg received from PF/VF */ target = req->hdr.pcifunc; /* ignore chan_mask in case pf func is not AF, revisit later */ @@ -1492,8 +1480,10 @@ process_flow: pfvf = rvu_get_pfvf(rvu, target); + if (from_rep_dev) + req->channel = pfvf->rx_chan_base; /* PF installing for its VF */ - if (req->hdr.pcifunc && !from_vf && req->vf) + if (req->hdr.pcifunc && !from_vf && req->vf && !from_rep_dev) set_bit(PF_SET_VF_CFG, &pfvf->flags); /* update req destination mac addr */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index 2b299fa85159..62cdc714ba57 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -445,6 +445,7 @@ #define NIX_CONST_MAX_BPIDS GENMASK_ULL(23, 12) #define NIX_CONST_SDP_CHANS GENMASK_ULL(11, 0) +#define NIX_VLAN_ETYPE_MASK GENMASK_ULL(63, 48) #define NIX_AF_MDQ_PARENT_MASK GENMASK_ULL(24, 16) #define NIX_AF_TL4_PARENT_MASK GENMASK_ULL(23, 16) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c new file mode 100644 index 000000000000..052ae5923e3a --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c @@ -0,0 +1,468 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell RVU Admin Function driver + * + * Copyright (C) 2024 Marvell. 
+ * + */ + +#include <linux/bitfield.h> +#include <linux/types.h> +#include <linux/device.h> +#include <linux/module.h> +#include <linux/pci.h> + +#include "rvu.h" +#include "rvu_reg.h" + +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ +static struct _req_type __maybe_unused \ +*otx2_mbox_alloc_msg_ ## _fn_name(struct rvu *rvu, int devid) \ +{ \ + struct _req_type *req; \ + \ + req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \ + &rvu->afpf_wq_info.mbox_up, devid, sizeof(struct _req_type), \ + sizeof(struct _rsp_type)); \ + if (!req) \ + return NULL; \ + req->hdr.sig = OTX2_MBOX_REQ_SIG; \ + req->hdr.id = _id; \ + return req; \ +} + +MBOX_UP_REP_MESSAGES +#undef M + +static int rvu_rep_up_notify(struct rvu *rvu, struct rep_event *event) +{ + struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, event->pcifunc); + struct rep_event *msg; + int pf; + + pf = rvu_get_pf(event->pcifunc); + + if (event->event & RVU_EVENT_MAC_ADDR_CHANGE) + ether_addr_copy(pfvf->mac_addr, event->evt_data.mac); + + mutex_lock(&rvu->mbox_lock); + msg = otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf); + if (!msg) { + mutex_unlock(&rvu->mbox_lock); + return -ENOMEM; + } + + msg->hdr.pcifunc = event->pcifunc; + msg->event = event->event; + + memcpy(&msg->evt_data, &event->evt_data, sizeof(struct rep_evt_data)); + + otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf); + + otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf); + + mutex_unlock(&rvu->mbox_lock); + return 0; +} + +static void rvu_rep_wq_handler(struct work_struct *work) +{ + struct rvu *rvu = container_of(work, struct rvu, rep_evt_work); + struct rep_evtq_ent *qentry; + struct rep_event *event; + unsigned long flags; + + do { + spin_lock_irqsave(&rvu->rep_evtq_lock, flags); + qentry = list_first_entry_or_null(&rvu->rep_evtq_head, + struct rep_evtq_ent, + node); + if (qentry) + list_del(&qentry->node); + + spin_unlock_irqrestore(&rvu->rep_evtq_lock, flags); + if (!qentry) + break; /* nothing more to process */ + + event = &qentry->event; + + rvu_rep_up_notify(rvu, event); + kfree(qentry); + } while (1); +} + +int rvu_mbox_handler_rep_event_notify(struct rvu *rvu, struct rep_event *req, + struct msg_rsp *rsp) +{ + struct rep_evtq_ent *qentry; + + qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC); + if (!qentry) + return -ENOMEM; + + qentry->event = *req; + spin_lock(&rvu->rep_evtq_lock); + list_add_tail(&qentry->node, &rvu->rep_evtq_head); + spin_unlock(&rvu->rep_evtq_lock); + queue_work(rvu->rep_evt_wq, &rvu->rep_evt_work); + return 0; +} + +int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable) +{ + struct rep_event *req; + int pf; + + if (!is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) + return 0; + + pf = rvu_get_pf(rvu->rep_pcifunc); + + mutex_lock(&rvu->mbox_lock); + req = otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf); + if (!req) { + mutex_unlock(&rvu->mbox_lock); + return -ENOMEM; + } + + req->hdr.pcifunc = rvu->rep_pcifunc; + req->event |= RVU_EVENT_PFVF_STATE; + req->pcifunc = pcifunc; + req->evt_data.vf_state = enable; + + otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf); + otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf); + + mutex_unlock(&rvu->mbox_lock); + return 0; +} + +#define RVU_LF_RX_STATS(reg) \ + rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_STATX(nixlf, reg)) + +#define RVU_LF_TX_STATS(reg) \ + rvu_read64(rvu, blkaddr, NIX_AF_LFX_TX_STATX(nixlf, reg)) + +int rvu_mbox_handler_nix_lf_stats(struct rvu *rvu, + struct nix_stats_req *req, + struct nix_stats_rsp *rsp) +{ + u16 pcifunc = req->pcifunc; + int nixlf, 
blkaddr, err; + struct msg_req rst_req; + struct msg_rsp rst_rsp; + + err = nix_get_nixlf(rvu, pcifunc, &nixlf, &blkaddr); + if (err) + return 0; + + if (req->reset) { + rst_req.hdr.pcifunc = pcifunc; + return rvu_mbox_handler_nix_stats_rst(rvu, &rst_req, &rst_rsp); + } + rsp->rx.octs = RVU_LF_RX_STATS(RX_OCTS); + rsp->rx.ucast = RVU_LF_RX_STATS(RX_UCAST); + rsp->rx.bcast = RVU_LF_RX_STATS(RX_BCAST); + rsp->rx.mcast = RVU_LF_RX_STATS(RX_MCAST); + rsp->rx.drop = RVU_LF_RX_STATS(RX_DROP); + rsp->rx.err = RVU_LF_RX_STATS(RX_ERR); + rsp->rx.drop_octs = RVU_LF_RX_STATS(RX_DROP_OCTS); + rsp->rx.drop_mcast = RVU_LF_RX_STATS(RX_DRP_MCAST); + rsp->rx.drop_bcast = RVU_LF_RX_STATS(RX_DRP_BCAST); + + rsp->tx.octs = RVU_LF_TX_STATS(TX_OCTS); + rsp->tx.ucast = RVU_LF_TX_STATS(TX_UCAST); + rsp->tx.bcast = RVU_LF_TX_STATS(TX_BCAST); + rsp->tx.mcast = RVU_LF_TX_STATS(TX_MCAST); + rsp->tx.drop = RVU_LF_TX_STATS(TX_DROP); + + rsp->pcifunc = req->pcifunc; + return 0; +} + +static u16 rvu_rep_get_vlan_id(struct rvu *rvu, u16 pcifunc) +{ + int id; + + for (id = 0; id < rvu->rep_cnt; id++) + if (rvu->rep2pfvf_map[id] == pcifunc) + return id; + return 0; +} + +static int rvu_rep_tx_vlan_cfg(struct rvu *rvu, u16 pcifunc, + u16 vlan_tci, int *vidx) +{ + struct nix_vtag_config_rsp rsp = {}; + struct nix_vtag_config req = {}; + u64 etype = ETH_P_8021Q; + int err; + + /* Insert vlan tag */ + req.hdr.pcifunc = pcifunc; + req.vtag_size = VTAGSIZE_T4; + req.cfg_type = 0; /* tx vlan cfg */ + req.tx.cfg_vtag0 = true; + req.tx.vtag0 = FIELD_PREP(NIX_VLAN_ETYPE_MASK, etype) | vlan_tci; + + err = rvu_mbox_handler_nix_vtag_cfg(rvu, &req, &rsp); + if (err) { + dev_err(rvu->dev, "Tx vlan config failed\n"); + return err; + } + *vidx = rsp.vtag0_idx; + return 0; +} + +static int rvu_rep_rx_vlan_cfg(struct rvu *rvu, u16 pcifunc) +{ + struct nix_vtag_config req = {}; + struct nix_vtag_config_rsp rsp; + + /* config strip, capture and size */ + req.hdr.pcifunc = pcifunc; + req.vtag_size = VTAGSIZE_T4; + req.cfg_type = 1; /* rx vlan cfg */ + req.rx.vtag_type = NIX_AF_LFX_RX_VTAG_TYPE0; + req.rx.strip_vtag = true; + req.rx.capture_vtag = false; + + return rvu_mbox_handler_nix_vtag_cfg(rvu, &req, &rsp); +} + +static int rvu_rep_install_rx_rule(struct rvu *rvu, u16 pcifunc, + u16 entry, bool rte) +{ + struct npc_install_flow_req req = {}; + struct npc_install_flow_rsp rsp = {}; + struct rvu_pfvf *pfvf; + u16 vlan_tci, rep_id; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + + /* To steer the traffic from Representee to Representor */ + rep_id = rvu_rep_get_vlan_id(rvu, pcifunc); + if (rte) { + vlan_tci = rep_id | BIT_ULL(8); + req.vf = rvu->rep_pcifunc; + req.op = NIX_RX_ACTIONOP_UCAST; + req.index = rep_id; + } else { + vlan_tci = rep_id; + req.vf = pcifunc; + req.op = NIX_RX_ACTION_DEFAULT; + } + + rvu_rep_rx_vlan_cfg(rvu, req.vf); + req.entry = entry; + req.hdr.pcifunc = 0; /* AF is requester */ + req.features = BIT_ULL(NPC_OUTER_VID) | BIT_ULL(NPC_VLAN_ETYPE_CTAG); + req.vtag0_valid = true; + req.vtag0_type = NIX_AF_LFX_RX_VTAG_TYPE0; + req.packet.vlan_etype = cpu_to_be16(ETH_P_8021Q); + req.mask.vlan_etype = cpu_to_be16(ETH_P_8021Q); + req.packet.vlan_tci = cpu_to_be16(vlan_tci); + req.mask.vlan_tci = cpu_to_be16(0xffff); + + req.channel = RVU_SWITCH_LBK_CHAN; + req.chan_mask = 0xffff; + req.intf = pfvf->nix_rx_intf; + + return rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp); +} + +static int rvu_rep_install_tx_rule(struct rvu *rvu, u16 pcifunc, u16 entry, + bool rte) +{ + struct npc_install_flow_req req = {}; + struct 
npc_install_flow_rsp rsp = {}; + struct rvu_pfvf *pfvf; + int vidx, err; + u16 vlan_tci; + u8 lbkid; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + vlan_tci = rvu_rep_get_vlan_id(rvu, pcifunc); + if (rte) + vlan_tci |= BIT_ULL(8); + + err = rvu_rep_tx_vlan_cfg(rvu, pcifunc, vlan_tci, &vidx); + if (err) + return err; + + lbkid = pfvf->nix_blkaddr == BLKADDR_NIX0 ? 0 : 1; + req.hdr.pcifunc = 0; /* AF is requester */ + if (rte) { + req.vf = pcifunc; + } else { + req.vf = rvu->rep_pcifunc; + req.packet.sq_id = vlan_tci; + req.mask.sq_id = 0xffff; + } + + req.entry = entry; + req.intf = pfvf->nix_tx_intf; + req.op = NIX_TX_ACTIONOP_UCAST_CHAN; + req.index = (lbkid << 8) | RVU_SWITCH_LBK_CHAN; + req.set_cntr = 1; + req.vtag0_def = vidx; + req.vtag0_op = 1; + return rvu_mbox_handler_npc_install_flow(rvu, &req, &rsp); +} + +int rvu_rep_install_mcam_rules(struct rvu *rvu) +{ + struct rvu_switch *rswitch = &rvu->rswitch; + u16 start = rswitch->start_entry; + struct rvu_hwinfo *hw = rvu->hw; + u16 pcifunc, entry = 0; + int pf, vf, numvfs; + int err, nixlf, i; + u8 rep; + + for (pf = 1; pf < hw->total_pfs; pf++) { + if (!is_pf_cgxmapped(rvu, pf)) + continue; + + pcifunc = pf << RVU_PFVF_PF_SHIFT; + rvu_get_nix_blkaddr(rvu, pcifunc); + rep = true; + for (i = 0; i < 2; i++) { + err = rvu_rep_install_rx_rule(rvu, pcifunc, + start + entry, rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + + err = rvu_rep_install_tx_rule(rvu, pcifunc, + start + entry, rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + rep = false; + } + + rvu_get_pf_numvfs(rvu, pf, &numvfs, NULL); + for (vf = 0; vf < numvfs; vf++) { + pcifunc = pf << RVU_PFVF_PF_SHIFT | + ((vf + 1) & RVU_PFVF_FUNC_MASK); + rvu_get_nix_blkaddr(rvu, pcifunc); + + /* Skip installimg rules if nixlf is not attached */ + err = nix_get_nixlf(rvu, pcifunc, &nixlf, NULL); + if (err) + continue; + rep = true; + for (i = 0; i < 2; i++) { + err = rvu_rep_install_rx_rule(rvu, pcifunc, + start + entry, + rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + + err = rvu_rep_install_tx_rule(rvu, pcifunc, + start + entry, + rep); + if (err) + return err; + rswitch->entry2pcifunc[entry++] = pcifunc; + rep = false; + } + } + } + + /* Initialize the wq for handling REP events */ + spin_lock_init(&rvu->rep_evtq_lock); + INIT_LIST_HEAD(&rvu->rep_evtq_head); + INIT_WORK(&rvu->rep_evt_work, rvu_rep_wq_handler); + rvu->rep_evt_wq = alloc_workqueue("rep_evt_wq", 0, 0); + if (!rvu->rep_evt_wq) { + dev_err(rvu->dev, "REP workqueue allocation failed\n"); + return -ENOMEM; + } + return 0; +} + +void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena) +{ + struct rvu_switch *rswitch = &rvu->rswitch; + struct npc_mcam *mcam = &rvu->hw->mcam; + u32 max = rswitch->used_entries; + int blkaddr; + u16 entry; + + if (!rswitch->used_entries) + return; + + blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0); + + if (blkaddr < 0) + return; + + rvu_switch_enable_lbk_link(rvu, pcifunc, ena); + mutex_lock(&mcam->lock); + for (entry = 0; entry < max; entry++) { + if (rswitch->entry2pcifunc[entry] == pcifunc) + npc_enable_mcam_entry(rvu, mcam, blkaddr, entry, ena); + } + mutex_unlock(&mcam->lock); +} + +int rvu_rep_pf_init(struct rvu *rvu) +{ + u16 pcifunc = rvu->rep_pcifunc; + struct rvu_pfvf *pfvf; + + pfvf = rvu_get_pfvf(rvu, pcifunc); + set_bit(NIXLF_INITIALIZED, &pfvf->flags); + rvu_switch_enable_lbk_link(rvu, pcifunc, true); + rvu_rep_rx_vlan_cfg(rvu, pcifunc); + return 0; +} + +int rvu_mbox_handler_esw_cfg(struct rvu 
*rvu, struct esw_cfg_req *req, + struct msg_rsp *rsp) +{ + if (req->hdr.pcifunc != rvu->rep_pcifunc) + return 0; + + rvu->rep_mode = req->ena; + + if (!rvu->rep_mode) + rvu_npc_free_mcam_entries(rvu, req->hdr.pcifunc, -1); + + return 0; +} + +int rvu_mbox_handler_get_rep_cnt(struct rvu *rvu, struct msg_req *req, + struct get_rep_cnt_rsp *rsp) +{ + int pf, vf, numvfs, hwvf, rep = 0; + u16 pcifunc; + + rvu->rep_pcifunc = req->hdr.pcifunc; + rsp->rep_cnt = rvu->cgx_mapped_pfs + rvu->cgx_mapped_vfs; + rvu->rep_cnt = rsp->rep_cnt; + + rvu->rep2pfvf_map = devm_kzalloc(rvu->dev, rvu->rep_cnt * + sizeof(u16), GFP_KERNEL); + if (!rvu->rep2pfvf_map) + return -ENOMEM; + + for (pf = 0; pf < rvu->hw->total_pfs; pf++) { + if (!is_pf_cgxmapped(rvu, pf)) + continue; + pcifunc = pf << RVU_PFVF_PF_SHIFT; + rvu->rep2pfvf_map[rep] = pcifunc; + rsp->rep_pf_map[rep] = pcifunc; + rep++; + rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf); + for (vf = 0; vf < numvfs; vf++) { + rvu->rep2pfvf_map[rep] = pcifunc | + ((vf + 1) & RVU_PFVF_FUNC_MASK); + rsp->rep_pf_map[rep] = rvu->rep2pfvf_map[rep]; + rep++; + } + } + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h index fc8da2090657..77ac94cb2ec4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h @@ -823,4 +823,30 @@ enum nix_tx_vtag_op { #define VTAG_STRIP BIT_ULL(4) #define VTAG_CAPTURE BIT_ULL(5) +/* NIX TX stats */ +enum nix_stat_lf_tx { + TX_UCAST = 0x0, + TX_BCAST = 0x1, + TX_MCAST = 0x2, + TX_DROP = 0x3, + TX_OCTS = 0x4, + TX_STATS_ENUM_LAST, +}; + +/* NIX RX stats */ +enum nix_stat_lf_rx { + RX_OCTS = 0x0, + RX_UCAST = 0x1, + RX_BCAST = 0x2, + RX_MCAST = 0x3, + RX_DROP = 0x4, + RX_DROP_OCTS = 0x5, + RX_FCS = 0x6, + RX_ERR = 0x7, + RX_DRP_BCAST = 0x8, + RX_DRP_MCAST = 0x9, + RX_DRP_L3BCAST = 0xa, + RX_DRP_L3MCAST = 0xb, + RX_STATS_ENUM_LAST, +}; #endif /* RVU_STRUCT_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c index 854045ed3b06..268efb7c1c15 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c @@ -8,7 +8,7 @@ #include <linux/bitfield.h> #include "rvu.h" -static void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool enable) +void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool enable) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); struct nix_hw *nix_hw; @@ -166,6 +166,8 @@ void rvu_switch_enable(struct rvu *rvu) alloc_req.contig = true; alloc_req.count = rvu->cgx_mapped_pfs + rvu->cgx_mapped_vfs; + if (rvu->rep_mode) + alloc_req.count = alloc_req.count * 4; ret = rvu_mbox_handler_npc_mcam_alloc_entry(rvu, &alloc_req, &alloc_rsp); if (ret) { @@ -189,7 +191,12 @@ void rvu_switch_enable(struct rvu *rvu) rswitch->used_entries = alloc_rsp.count; rswitch->start_entry = alloc_rsp.entry; - ret = rvu_switch_install_rules(rvu); + if (rvu->rep_mode) { + rvu_rep_pf_init(rvu); + ret = rvu_rep_install_mcam_rules(rvu); + } else { + ret = rvu_switch_install_rules(rvu); + } if (ret) goto uninstall_rules; @@ -222,6 +229,9 @@ void rvu_switch_disable(struct rvu *rvu) if (!rswitch->used_entries) return; + if (rvu->rep_mode) + goto free_ents; + for (pf = 1; pf < hw->total_pfs; pf++) { if (!is_pf_cgxmapped(rvu, pf)) continue; @@ -249,6 +259,7 @@ void rvu_switch_disable(struct rvu *rvu) } } +free_ents: uninstall_req.start = 
rswitch->start_entry; uninstall_req.end = rswitch->start_entry + rswitch->used_entries - 1; free_req.all = 1; @@ -258,12 +269,15 @@ void rvu_switch_disable(struct rvu *rvu) kfree(rswitch->entry2pcifunc); } -void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc) +void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc, bool ena) { struct rvu_switch *rswitch = &rvu->rswitch; u32 max = rswitch->used_entries; u16 entry; + if (rvu->rep_mode) + return rvu_rep_update_rules(rvu, pcifunc, ena); + if (!rswitch->used_entries) return; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile index 64a97a0a10ed..dbc971266865 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile +++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile @@ -5,11 +5,13 @@ obj-$(CONFIG_OCTEONTX2_PF) += rvu_nicpf.o otx2_ptp.o obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o +obj-$(CONFIG_RVU_ESWITCH) += rvu_rep.o rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \ otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \ otx2_devlink.o qos_sq.o qos.o rvu_nicvf-y := otx2_vf.o +rvu_rep-y := rep.o rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o rvu_nicpf-$(CONFIG_MACSEC) += cn10k_macsec.o diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c index c1c99d7054f8..a15cc86635d6 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c @@ -72,7 +72,7 @@ int cn10k_lmtst_init(struct otx2_nic *pfvf) } EXPORT_SYMBOL(cn10k_lmtst_init); -int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) +int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura) { struct nix_cn10k_aq_enq_req *aq; struct otx2_nic *pfvf = dev; @@ -88,7 +88,7 @@ int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) aq->sq.ena = 1; aq->sq.smq = otx2_get_smq_idx(pfvf, qidx); aq->sq.smq_rr_weight = mtu_to_dwrr_weight(pfvf, pfvf->tx_max_pktlen); - aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset; aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ aq->sq.sqb_aura = sqb_aura; aq->sq.sq_int_ena = NIX_SQINT_BITS; @@ -203,6 +203,11 @@ int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf) rsp = (struct nix_bandprof_alloc_rsp *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + rc = PTR_ERR(rsp); + goto out; + } + if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) { rc = -EIO; goto out; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h index c1861f7de254..e3f0bce9908f 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h @@ -26,7 +26,7 @@ static inline int mtu_to_dwrr_weight(struct otx2_nic *pfvf, int mtu) int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq); void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); -int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura); int cn10k_lmtst_init(struct otx2_nic *pfvf); int cn10k_free_all_ipolicers(struct otx2_nic *pfvf); int cn10k_alloc_matchall_ipolicer(struct otx2_nic *pfvf); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 87d5776e3b88..523ecb798a7a 100644 --- 
a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -83,6 +83,7 @@ int otx2_update_rq_stats(struct otx2_nic *pfvf, int qidx) otx2_nix_rq_op_stats(&rq->stats, pfvf, qidx); return 1; } +EXPORT_SYMBOL(otx2_update_rq_stats); int otx2_update_sq_stats(struct otx2_nic *pfvf, int qidx) { @@ -99,6 +100,7 @@ int otx2_update_sq_stats(struct otx2_nic *pfvf, int qidx) otx2_nix_sq_op_stats(&sq->stats, pfvf, qidx); return 1; } +EXPORT_SYMBOL(otx2_update_sq_stats); void otx2_get_dev_stats(struct otx2_nic *pfvf) { @@ -227,7 +229,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) u16 maxlen; int err; - maxlen = otx2_get_max_mtu(pfvf) + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; + maxlen = pfvf->hw.max_mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; mutex_lock(&pfvf->mbox.lock); req = otx2_mbox_alloc_msg_nix_set_hw_frs(&pfvf->mbox); @@ -236,7 +238,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) return -ENOMEM; } - req->maxlen = pfvf->netdev->mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; + req->maxlen = mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN; /* Use max receive length supported by hardware for loopback devices */ if (is_otx2_lbkvf(pfvf->pdev)) @@ -246,13 +248,14 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu) mutex_unlock(&pfvf->mbox.lock); return err; } +EXPORT_SYMBOL(otx2_hw_set_mtu); int otx2_config_pause_frm(struct otx2_nic *pfvf) { struct cgx_pause_frm_cfg *req; int err; - if (is_otx2_lbkvf(pfvf->pdev)) + if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev)) return 0; mutex_lock(&pfvf->mbox.lock); @@ -646,12 +649,22 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for req->reg[2] = NIX_AF_MDQX_SCHEDULE(schq); req->regval[2] = dwrr_val; } else if (lvl == NIX_TXSCH_LVL_TL4) { + int sdp_chan = hw->tx_chan_base + prio; + + if (is_otx2_sdp_rep(pfvf->pdev)) + prio = 0; parent = schq_list[NIX_TXSCH_LVL_TL3][prio]; req->reg[0] = NIX_AF_TL4X_PARENT(schq); req->regval[0] = (u64)parent << 16; req->num_regs++; req->reg[1] = NIX_AF_TL4X_SCHEDULE(schq); req->regval[1] = dwrr_val; + if (is_otx2_sdp_rep(pfvf->pdev)) { + req->num_regs++; + req->reg[2] = NIX_AF_TL4X_SDP_LINK_CFG(schq); + req->regval[2] = BIT_ULL(12) | BIT_ULL(13) | + (sdp_chan & 0xff); + } } else if (lvl == NIX_TXSCH_LVL_TL3) { parent = schq_list[NIX_TXSCH_LVL_TL2][prio]; req->reg[0] = NIX_AF_TL3X_PARENT(schq); @@ -659,7 +672,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for req->num_regs++; req->reg[1] = NIX_AF_TL3X_SCHEDULE(schq); req->regval[1] = dwrr_val; - if (lvl == hw->txschq_link_cfg_lvl) { + if (lvl == hw->txschq_link_cfg_lvl && + !is_otx2_sdp_rep(pfvf->pdev)) { req->num_regs++; req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link); /* Enable this queue and backpressure @@ -676,7 +690,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for req->reg[1] = NIX_AF_TL2X_SCHEDULE(schq); req->regval[1] = (u64)hw->txschq_aggr_lvl_rr_prio << 24 | dwrr_val; - if (lvl == hw->txschq_link_cfg_lvl) { + if (lvl == hw->txschq_link_cfg_lvl && + !is_otx2_sdp_rep(pfvf->pdev)) { req->num_regs++; req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link); /* Enable this queue and backpressure @@ -735,6 +750,7 @@ EXPORT_SYMBOL(otx2_smq_flush); int otx2_txsch_alloc(struct otx2_nic *pfvf) { + int chan_cnt = pfvf->hw.tx_chan_cnt; struct nix_txsch_alloc_req *req; struct nix_txsch_alloc_rsp *rsp; int lvl, schq, rc; @@ -747,6 +763,12 @@ int otx2_txsch_alloc(struct otx2_nic 
*pfvf) /* Request one schq per level */ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) req->schq[lvl] = 1; + + if (is_otx2_sdp_rep(pfvf->pdev) && chan_cnt > 1) { + req->schq[NIX_TXSCH_LVL_SMQ] = chan_cnt; + req->schq[NIX_TXSCH_LVL_TL4] = chan_cnt; + } + rc = otx2_sync_mbox_msg(&pfvf->mbox); if (rc) return rc; @@ -757,10 +779,12 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf) return PTR_ERR(rsp); /* Setup transmit scheduler list */ - for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + pfvf->hw.txschq_cnt[lvl] = rsp->schq[lvl]; for (schq = 0; schq < rsp->schq[lvl]; schq++) pfvf->hw.txschq_list[lvl][schq] = rsp->schq_list[lvl][schq]; + } pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl; pfvf->hw.txschq_aggr_lvl_rr_prio = rsp->aggr_lvl_rr_prio; @@ -798,12 +822,15 @@ EXPORT_SYMBOL(otx2_txschq_free_one); void otx2_txschq_stop(struct otx2_nic *pfvf) { - int lvl, schq; + int lvl, schq, idx; /* free non QOS TLx nodes */ - for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) - otx2_txschq_free_one(pfvf, lvl, - pfvf->hw.txschq_list[lvl][0]); + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + for (idx = 0; idx < pfvf->hw.txschq_cnt[lvl]; idx++) { + otx2_txschq_free_one(pfvf, lvl, + pfvf->hw.txschq_list[lvl][idx]); + } + } /* Clear the txschq list */ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { @@ -883,7 +910,7 @@ static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura) return otx2_sync_mbox_msg(&pfvf->mbox); } -int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) +int otx2_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura) { struct otx2_nic *pfvf = dev; struct otx2_snd_queue *sq; @@ -902,7 +929,7 @@ int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura) aq->sq.ena = 1; aq->sq.smq = otx2_get_smq_idx(pfvf, qidx); aq->sq.smq_rr_quantum = mtu_to_dwrr_weight(pfvf, pfvf->tx_max_pktlen); - aq->sq.default_chan = pfvf->hw.tx_chan_base; + aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset; aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */ aq->sq.sqb_aura = sqb_aura; aq->sq.sq_int_ena = NIX_SQINT_BITS; @@ -925,6 +952,7 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) struct otx2_qset *qset = &pfvf->qset; struct otx2_snd_queue *sq; struct otx2_pool *pool; + u8 chan_offset; int err; pool = &pfvf->qset.pool[sqb_aura]; @@ -971,7 +999,8 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura) sq->stats.bytes = 0; sq->stats.pkts = 0; - err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, sqb_aura); + chan_offset = qidx % pfvf->hw.tx_chan_cnt; + err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, chan_offset, sqb_aura); if (err) { kfree(sq->sg); sq->sg = NULL; @@ -1738,6 +1767,8 @@ void mbox_handler_nix_lf_alloc(struct otx2_nic *pfvf, pfvf->hw.sqb_size = rsp->sqb_size; pfvf->hw.rx_chan_base = rsp->rx_chan_base; pfvf->hw.tx_chan_base = rsp->tx_chan_base; + pfvf->hw.rx_chan_cnt = rsp->rx_chan_cnt; + pfvf->hw.tx_chan_cnt = rsp->tx_chan_cnt; pfvf->hw.lso_tsov4_idx = rsp->lso_tsov4_idx; pfvf->hw.lso_tsov6_idx = rsp->lso_tsov6_idx; pfvf->hw.cgx_links = rsp->cgx_links; @@ -1782,6 +1813,7 @@ void otx2_free_cints(struct otx2_nic *pfvf, int n) free_irq(vector, &qset->napi[qidx]); } } +EXPORT_SYMBOL(otx2_free_cints); void otx2_set_cints_affinity(struct otx2_nic *pfvf) { @@ -1837,6 +1869,10 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf) if (!rc) { rsp = (struct nix_hw_info *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + rc = PTR_ERR(rsp); + goto out; + } /* HW counts VLAN insertion bytes (8 for double tag) * 
irrespective of whether SQE is requesting to insert VLAN diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h index f27a3456ae64..566848663fea 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h @@ -29,6 +29,7 @@ #include "otx2_devlink.h" #include <rvu_trace.h> #include "qos.h" +#include "rep.h" /* IPv4 flag more fragment bit */ #define IPV4_FLAG_MORE 0x20 @@ -41,6 +42,8 @@ #define PCI_SUBSYS_DEVID_96XX_RVU_PFVF 0xB200 #define PCI_SUBSYS_DEVID_CN10K_B_RVU_PFVF 0xBD00 +#define PCI_DEVID_OCTEONTX2_SDP_REP 0xA0F7 + /* PCI BAR nos */ #define PCI_CFG_REG_BAR_NUM 2 #define PCI_MBOX_BAR_NUM 4 @@ -120,33 +123,6 @@ enum otx2_errcodes_re { ERRCODE_IL4_CSUM = 0x22, }; -/* NIX TX stats */ -enum nix_stat_lf_tx { - TX_UCAST = 0x0, - TX_BCAST = 0x1, - TX_MCAST = 0x2, - TX_DROP = 0x3, - TX_OCTS = 0x4, - TX_STATS_ENUM_LAST, -}; - -/* NIX RX stats */ -enum nix_stat_lf_rx { - RX_OCTS = 0x0, - RX_UCAST = 0x1, - RX_BCAST = 0x2, - RX_MCAST = 0x3, - RX_DROP = 0x4, - RX_DROP_OCTS = 0x5, - RX_FCS = 0x6, - RX_ERR = 0x7, - RX_DRP_BCAST = 0x8, - RX_DRP_MCAST = 0x9, - RX_DRP_L3BCAST = 0xa, - RX_DRP_L3MCAST = 0xb, - RX_STATS_ENUM_LAST, -}; - struct otx2_dev_stats { u64 rx_bytes; u64 rx_frames; @@ -224,15 +200,19 @@ struct otx2_hw { /* NIX */ u8 txschq_link_cfg_lvl; + u8 txschq_cnt[NIX_TXSCH_LVL_CNT]; u8 txschq_aggr_lvl_rr_prio; u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; u16 matchall_ipolicer; u32 dwrr_mtu; + u32 max_mtu; u8 smq_link_type; /* HW settings, coalescing etc */ u16 rx_chan_base; u16 tx_chan_base; + u8 rx_chan_cnt; + u8 tx_chan_cnt; u16 cq_qcount_wait; u16 cq_ecount_wait; u16 rq_skid; @@ -367,7 +347,8 @@ struct otx2_flow_config { }; struct dev_hw_ops { - int (*sq_aq_init)(void *dev, u16 qidx, u16 sqb_aura); + int (*sq_aq_init)(void *dev, u16 qidx, u8 chan_offset, + u16 sqb_aura); void (*sqe_flush)(void *dev, struct otx2_snd_queue *sq, int size, int qidx); int (*refill_pool_ptrs)(void *dev, struct otx2_cq_queue *cq); @@ -465,6 +446,8 @@ struct otx2_nic { #define OTX2_FLAG_PTP_ONESTEP_SYNC BIT_ULL(15) #define OTX2_FLAG_ADPTV_INT_COAL_ENABLED BIT_ULL(16) #define OTX2_FLAG_TC_MARK_ENABLED BIT_ULL(17) +#define OTX2_FLAG_REP_MODE_ENABLED BIT_ULL(18) +#define OTX2_FLAG_PORT_UP BIT_ULL(19) u64 flags; u64 *cq_op_addr; @@ -532,11 +515,19 @@ struct otx2_nic { #if IS_ENABLED(CONFIG_MACSEC) struct cn10k_mcs_cfg *macsec_cfg; #endif + +#if IS_ENABLED(CONFIG_RVU_ESWITCH) + struct rep_dev **reps; + int rep_cnt; + u16 rep_pf_map[RVU_MAX_REP]; + u16 esw_mode; +#endif }; static inline bool is_otx2_lbkvf(struct pci_dev *pdev) { - return pdev->device == PCI_DEVID_OCTEONTX2_RVU_AFVF; + return (pdev->device == PCI_DEVID_OCTEONTX2_RVU_AFVF) || + (pdev->device == PCI_DEVID_RVU_REP); } static inline bool is_96xx_A0(struct pci_dev *pdev) @@ -551,6 +542,11 @@ static inline bool is_96xx_B0(struct pci_dev *pdev) (pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX_RVU_PFVF); } +static inline bool is_otx2_sdp_rep(struct pci_dev *pdev) +{ + return pdev->device == PCI_DEVID_OCTEONTX2_SDP_REP; +} + /* REVID for PCIe devices. 
* Bits 0..1: minor pass, bit 3..2: major pass * bits 7..4: midr id @@ -913,15 +909,19 @@ static inline void otx2_dma_unmap_page(struct otx2_nic *pfvf, static inline u16 otx2_get_smq_idx(struct otx2_nic *pfvf, u16 qidx) { u16 smq; + int idx; + #ifdef CONFIG_DCB if (qidx < NIX_PF_PFC_PRIO_MAX && pfvf->pfc_alloc_status[qidx]) return pfvf->pfc_schq_list[NIX_TXSCH_LVL_SMQ][qidx]; #endif /* check if qidx falls under QOS queues */ - if (qidx >= pfvf->hw.non_qos_queues) + if (qidx >= pfvf->hw.non_qos_queues) { smq = pfvf->qos.qid_to_sqmap[qidx - pfvf->hw.non_qos_queues]; - else - smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0]; + } else { + idx = qidx % pfvf->hw.txschq_cnt[NIX_TXSCH_LVL_SMQ]; + smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][idx]; + } return smq; } @@ -988,14 +988,28 @@ int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable); void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int qidx); void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq); int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura); -int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); -int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura); +int otx2_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura); +int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura); int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, dma_addr_t *dma); int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id, int stack_pages, int numptrs, int buf_size, int type); int otx2_aura_init(struct otx2_nic *pfvf, int aura_id, int pool_id, int numptrs); +int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf); +void otx2_free_queue_mem(struct otx2_qset *qset); +int otx2_alloc_queue_mem(struct otx2_nic *pf); +int otx2_init_hw_resources(struct otx2_nic *pfvf); +void otx2_free_hw_resources(struct otx2_nic *pf); +int otx2_wq_init(struct otx2_nic *pf); +int otx2_check_pf_usable(struct otx2_nic *pf); +int otx2_pfaf_mbox_init(struct otx2_nic *pf); +int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af); +int otx2_realloc_msix_vectors(struct otx2_nic *pf); +void otx2_pfaf_mbox_destroy(struct otx2_nic *pf); +void otx2_disable_mbox_intr(struct otx2_nic *pf); +void otx2_disable_napi(struct otx2_nic *pf); +irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq); /* RSS configuration APIs*/ int otx2_rss_init(struct otx2_nic *pfvf); @@ -1127,4 +1141,12 @@ u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb, int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid); void otx2_qos_config_txschq(struct otx2_nic *pfvf); void otx2_clean_qos_queues(struct otx2_nic *pfvf); +int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info); +int otx2_setup_tc_cls_flower(struct otx2_nic *nic, + struct flow_cls_offload *cls_flower); + +static inline int mcam_entry_cmp(const void *a, const void *b) +{ + return *(u16 *)a - *(u16 *)b; +} #endif /* OTX2_COMMON_H */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c index aa01110f04a3..294fba58b670 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c @@ -315,6 +315,11 @@ int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf) if (!otx2_sync_mbox_msg(&pfvf->mbox)) { rsp = (struct cgx_pfc_rsp *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + err = PTR_ERR(rsp); + goto unlock; + } + if (req->rx_pause != 
rsp->rx_pause || req->tx_pause != rsp->tx_pause) { dev_warn(pfvf->dev, "Failed to config PFC\n"); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c index 53f14aa944bd..33ec9a7f7c03 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c @@ -141,7 +141,56 @@ static const struct devlink_param otx2_dl_params[] = { otx2_dl_ucast_flt_cnt_validate), }; +#ifdef CONFIG_RVU_ESWITCH +static int otx2_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode) +{ + struct otx2_devlink *otx2_dl = devlink_priv(devlink); + struct otx2_nic *pfvf = otx2_dl->pfvf; + + if (!otx2_rep_dev(pfvf->pdev)) + return -EOPNOTSUPP; + + *mode = pfvf->esw_mode; + + return 0; +} + +static int otx2_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode, + struct netlink_ext_ack *extack) +{ + struct otx2_devlink *otx2_dl = devlink_priv(devlink); + struct otx2_nic *pfvf = otx2_dl->pfvf; + int ret = 0; + + if (!otx2_rep_dev(pfvf->pdev)) + return -EOPNOTSUPP; + + if (pfvf->esw_mode == mode) + return 0; + + switch (mode) { + case DEVLINK_ESWITCH_MODE_LEGACY: + rvu_rep_destroy(pfvf); + break; + case DEVLINK_ESWITCH_MODE_SWITCHDEV: + ret = rvu_rep_create(pfvf, extack); + break; + default: + return -EINVAL; + } + + if (!ret) + pfvf->esw_mode = mode; + + return ret; +} +#endif + static const struct devlink_ops otx2_devlink_ops = { +#ifdef CONFIG_RVU_ESWITCH + .eswitch_mode_get = otx2_devlink_eswitch_mode_get, + .eswitch_mode_set = otx2_devlink_eswitch_mode_set, +#endif }; int otx2_register_dl(struct otx2_nic *pfvf) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c index 80d853b343f9..2046dd0da00d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c @@ -28,6 +28,11 @@ static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac, if (!err) { rsp = (struct cgx_mac_addr_add_rsp *) otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + mutex_unlock(&pf->mbox.lock); + return PTR_ERR(rsp); + } + *dmac_index = rsp->index; } @@ -200,6 +205,10 @@ int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u32 bit_pos) rsp = (struct cgx_mac_addr_update_rsp *) otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + rc = PTR_ERR(rsp); + goto out; + } pf->flow_cfg->bmap_to_dmacindex[bit_pos] = rsp->index; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c index 32468c663605..2d53dc77ef1e 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c @@ -85,26 +85,22 @@ static void otx2_get_qset_strings(struct otx2_nic *pfvf, u8 **data, int qset) int start_qidx = qset * pfvf->hw.rx_queues; int qidx, stats; - for (qidx = 0; qidx < pfvf->hw.rx_queues; qidx++) { - for (stats = 0; stats < otx2_n_queue_stats; stats++) { - sprintf(*data, "rxq%d: %s", qidx + start_qidx, - otx2_queue_stats[stats].name); - *data += ETH_GSTRING_LEN; - } - } + for (qidx = 0; qidx < pfvf->hw.rx_queues; qidx++) + for (stats = 0; stats < otx2_n_queue_stats; stats++) + ethtool_sprintf(data, "rxq%d: %s", qidx + start_qidx, + otx2_queue_stats[stats].name); - for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) { - for (stats = 0; stats < otx2_n_queue_stats; 
stats++) { + for (qidx = 0; qidx < otx2_get_total_tx_queues(pfvf); qidx++) + for (stats = 0; stats < otx2_n_queue_stats; stats++) if (qidx >= pfvf->hw.non_qos_queues) - sprintf(*data, "txq_qos%d: %s", - qidx + start_qidx - pfvf->hw.non_qos_queues, - otx2_queue_stats[stats].name); + ethtool_sprintf(data, "txq_qos%d: %s", + qidx + start_qidx - + pfvf->hw.non_qos_queues, + otx2_queue_stats[stats].name); else - sprintf(*data, "txq%d: %s", qidx + start_qidx, - otx2_queue_stats[stats].name); - *data += ETH_GSTRING_LEN; - } - } + ethtool_sprintf(data, "txq%d: %s", + qidx + start_qidx, + otx2_queue_stats[stats].name); } static void otx2_get_strings(struct net_device *netdev, u32 sset, u8 *data) @@ -115,36 +111,25 @@ static void otx2_get_strings(struct net_device *netdev, u32 sset, u8 *data) if (sset != ETH_SS_STATS) return; - for (stats = 0; stats < otx2_n_dev_stats; stats++) { - memcpy(data, otx2_dev_stats[stats].name, ETH_GSTRING_LEN); - data += ETH_GSTRING_LEN; - } + for (stats = 0; stats < otx2_n_dev_stats; stats++) + ethtool_puts(&data, otx2_dev_stats[stats].name); - for (stats = 0; stats < otx2_n_drv_stats; stats++) { - memcpy(data, otx2_drv_stats[stats].name, ETH_GSTRING_LEN); - data += ETH_GSTRING_LEN; - } + for (stats = 0; stats < otx2_n_drv_stats; stats++) + ethtool_puts(&data, otx2_drv_stats[stats].name); otx2_get_qset_strings(pfvf, &data, 0); if (!test_bit(CN10K_RPM, &pfvf->hw.cap_flag)) { - for (stats = 0; stats < CGX_RX_STATS_COUNT; stats++) { - sprintf(data, "cgx_rxstat%d: ", stats); - data += ETH_GSTRING_LEN; - } + for (stats = 0; stats < CGX_RX_STATS_COUNT; stats++) + ethtool_sprintf(&data, "cgx_rxstat%d: ", stats); - for (stats = 0; stats < CGX_TX_STATS_COUNT; stats++) { - sprintf(data, "cgx_txstat%d: ", stats); - data += ETH_GSTRING_LEN; - } + for (stats = 0; stats < CGX_TX_STATS_COUNT; stats++) + ethtool_sprintf(&data, "cgx_txstat%d: ", stats); } - strcpy(data, "reset_count"); - data += ETH_GSTRING_LEN; - sprintf(data, "Fec Corrected Errors: "); - data += ETH_GSTRING_LEN; - sprintf(data, "Fec Uncorrected Errors: "); - data += ETH_GSTRING_LEN; + ethtool_puts(&data, "reset_count"); + ethtool_puts(&data, "Fec Corrected Errors: "); + ethtool_puts(&data, "Fec Uncorrected Errors: "); } static void otx2_get_qset_stats(struct otx2_nic *pfvf, @@ -343,6 +328,11 @@ static void otx2_get_pauseparam(struct net_device *netdev, if (!otx2_sync_mbox_msg(&pfvf->mbox)) { rsp = (struct cgx_pause_frm_cfg *) otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + mutex_unlock(&pfvf->mbox.lock); + return; + } + pause->rx_pause = rsp->rx_pause; pause->tx_pause = rsp->tx_pause; } @@ -1072,6 +1062,11 @@ static int otx2_set_fecparam(struct net_device *netdev, rsp = (struct fec_mode *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + err = PTR_ERR(rsp); + goto end; + } + if (rsp->fec >= 0) pfvf->linfo.fec = rsp->fec; else @@ -1365,20 +1360,15 @@ static void otx2vf_get_strings(struct net_device *netdev, u32 sset, u8 *data) if (sset != ETH_SS_STATS) return; - for (stats = 0; stats < otx2_n_dev_stats; stats++) { - memcpy(data, otx2_dev_stats[stats].name, ETH_GSTRING_LEN); - data += ETH_GSTRING_LEN; - } + for (stats = 0; stats < otx2_n_dev_stats; stats++) + ethtool_puts(&data, otx2_dev_stats[stats].name); - for (stats = 0; stats < otx2_n_drv_stats; stats++) { - memcpy(data, otx2_drv_stats[stats].name, ETH_GSTRING_LEN); - data += ETH_GSTRING_LEN; - } + for (stats = 0; stats < otx2_n_drv_stats; stats++) + ethtool_puts(&data, otx2_drv_stats[stats].name); 
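The conversion running through these ethtool hunks replaces open-coded string emission, sprintf() or memcpy() followed by a manual ETH_GSTRING_LEN increment, with the ethtool_sprintf()/ethtool_puts() helpers, which fill one fixed-width slot and advance the cursor themselves. A distilled before/after sketch of the pattern, with stat_name standing in for the driver's stat-name tables:

	/* before: every call site advanced the cursor by hand */
	memcpy(data, stat_name, ETH_GSTRING_LEN);
	data += ETH_GSTRING_LEN;

	/* after: the helpers take a u8 ** and advance it internally */
	ethtool_puts(&data, stat_name);
	ethtool_sprintf(&data, "rxq%d: %s", qidx, stat_name);

Dropping the manual bookkeeping is also what lets the surrounding loops shed their braces.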
otx2_get_qset_strings(vf, &data, 0); - strcpy(data, "reset_count"); - data += ETH_GSTRING_LEN; + ethtool_puts(&data, "reset_count"); } static void otx2vf_get_ethtool_stats(struct net_device *netdev, diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c index 98c31a16c70b..47bfd1fb37d4 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c @@ -64,11 +64,6 @@ static int otx2_free_ntuple_mcam_entries(struct otx2_nic *pfvf) return 0; } -static int mcam_entry_cmp(const void *a, const void *b) -{ - return *(u16 *)a - *(u16 *)b; -} - int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count) { struct otx2_flow_config *flow_cfg = pfvf->flow_cfg; @@ -119,6 +114,8 @@ int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count) rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp (&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) + goto exit; for (ent = 0; ent < rsp->count; ent++) flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent]; @@ -197,6 +194,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf) rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp (&pfvf->mbox.mbox, 0, &req->hdr); + if (IS_ERR(rsp)) { + mutex_unlock(&pfvf->mbox.lock); + return PTR_ERR(rsp); + } if (rsp->count != req->count) { netdev_info(pfvf->netdev, @@ -232,6 +233,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf) frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp (&pfvf->mbox.mbox, 0, &freq->hdr); + if (IS_ERR(frsp)) { + mutex_unlock(&pfvf->mbox.lock); + return PTR_ERR(frsp); + } if (frsp->enable) { pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index 5492dea547a1..e310f99b1736 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -519,6 +519,7 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work) switch (msg->id) { case MBOX_MSG_CGX_LINK_EVENT: + case MBOX_MSG_REP_EVENT_UP_NOTIFY: break; default: if (msg->rc) @@ -832,6 +833,9 @@ static void otx2_handle_link_event(struct otx2_nic *pf) struct cgx_link_user_info *linfo = &pf->linfo; struct net_device *netdev = pf->netdev; + if (pf->flags & OTX2_FLAG_PORT_UP) + return; + pr_info("%s NIC Link is %s %d Mbps %s duplex\n", netdev->name, linfo->link_up ? "UP" : "DOWN", linfo->speed, linfo->full_duplex ? 
"Full" : "Half"); @@ -844,6 +848,35 @@ static void otx2_handle_link_event(struct otx2_nic *pf) } } +static int otx2_mbox_up_handler_rep_event_up_notify(struct otx2_nic *pf, + struct rep_event *info, + struct msg_rsp *rsp) +{ + struct net_device *netdev = pf->netdev; + + if (info->event == RVU_EVENT_MTU_CHANGE) { + netdev->mtu = info->evt_data.mtu; + return 0; + } + + if (info->event == RVU_EVENT_PORT_STATE) { + if (info->evt_data.port_state) { + pf->flags |= OTX2_FLAG_PORT_UP; + netif_carrier_on(netdev); + netif_tx_start_all_queues(netdev); + } else { + pf->flags &= ~OTX2_FLAG_PORT_UP; + netif_tx_stop_all_queues(netdev); + netif_carrier_off(netdev); + } + return 0; + } +#ifdef CONFIG_RVU_ESWITCH + rvu_event_up_notify(pf, info); +#endif + return 0; +} + int otx2_mbox_up_handler_mcs_intr_notify(struct otx2_nic *pf, struct mcs_intr_info *event, struct msg_rsp *rsp) @@ -913,6 +946,7 @@ static int otx2_process_mbox_msg_up(struct otx2_nic *pf, } MBOX_UP_CGX_MESSAGES MBOX_UP_MCS_MESSAGES +MBOX_UP_REP_MESSAGES #undef M break; default: @@ -1008,7 +1042,7 @@ static irqreturn_t otx2_pfaf_mbox_intr_handler(int irq, void *pf_irq) return IRQ_HANDLED; } -static void otx2_disable_mbox_intr(struct otx2_nic *pf) +void otx2_disable_mbox_intr(struct otx2_nic *pf) { int vector = pci_irq_vector(pf->pdev, RVU_PF_INT_VEC_AFPF_MBOX); @@ -1016,8 +1050,9 @@ static void otx2_disable_mbox_intr(struct otx2_nic *pf) otx2_write64(pf, RVU_PF_INT_ENA_W1C, BIT_ULL(0)); free_irq(vector, pf); } +EXPORT_SYMBOL(otx2_disable_mbox_intr); -static int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af) +int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af) { struct otx2_hw *hw = &pf->hw; struct msg_req *req; @@ -1061,7 +1096,7 @@ static int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af) return 0; } -static void otx2_pfaf_mbox_destroy(struct otx2_nic *pf) +void otx2_pfaf_mbox_destroy(struct otx2_nic *pf) { struct mbox *mbox = &pf->mbox; @@ -1076,8 +1111,9 @@ static void otx2_pfaf_mbox_destroy(struct otx2_nic *pf) otx2_mbox_destroy(&mbox->mbox); otx2_mbox_destroy(&mbox->mbox_up); } +EXPORT_SYMBOL(otx2_pfaf_mbox_destroy); -static int otx2_pfaf_mbox_init(struct otx2_nic *pf) +int otx2_pfaf_mbox_init(struct otx2_nic *pf) { struct mbox *mbox = &pf->mbox; void __iomem *hwbase; @@ -1379,7 +1415,7 @@ done: return IRQ_HANDLED; } -static irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq) +irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq) { struct otx2_cq_poll *cq_poll = (struct otx2_cq_poll *)cq_irq; struct otx2_nic *pf = (struct otx2_nic *)cq_poll->dev; @@ -1398,20 +1434,25 @@ static irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq) return IRQ_HANDLED; } +EXPORT_SYMBOL(otx2_cq_intr_handler); -static void otx2_disable_napi(struct otx2_nic *pf) +void otx2_disable_napi(struct otx2_nic *pf) { struct otx2_qset *qset = &pf->qset; struct otx2_cq_poll *cq_poll; + struct work_struct *work; int qidx; for (qidx = 0; qidx < pf->hw.cint_cnt; qidx++) { cq_poll = &qset->napi[qidx]; - cancel_work_sync(&cq_poll->dim.work); + work = &cq_poll->dim.work; + if (work->func) + cancel_work_sync(work); napi_disable(&cq_poll->napi); netif_napi_del(&cq_poll->napi); } } +EXPORT_SYMBOL(otx2_disable_napi); static void otx2_free_cq_res(struct otx2_nic *pf) { @@ -1477,7 +1518,7 @@ static int otx2_get_rbuf_size(struct otx2_nic *pf, int mtu) return ALIGN(rbuf_size, 2048); } -static int otx2_init_hw_resources(struct otx2_nic *pf) +int otx2_init_hw_resources(struct otx2_nic *pf) { struct nix_lf_free_req *free_req; struct mbox 
*mbox = &pf->mbox; @@ -1493,10 +1534,11 @@ static int otx2_init_hw_resources(struct otx2_nic *pf) hw->sqpool_cnt = otx2_get_total_tx_queues(pf); hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt; - /* Maximum hardware supported transmit length */ - pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN; - - pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu); + if (!otx2_rep_dev(pf->pdev)) { + /* Maximum hardware supported transmit length */ + pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN; + pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu); + } mutex_lock(&mbox->lock); /* NPA init */ @@ -1549,10 +1591,15 @@ static int otx2_init_hw_resources(struct otx2_nic *pf) } for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { - err = otx2_txschq_config(pf, lvl, 0, false); - if (err) { - mutex_unlock(&mbox->lock); - goto err_free_nix_queues; + int idx; + + for (idx = 0; idx < pf->hw.txschq_cnt[lvl]; idx++) { + err = otx2_txschq_config(pf, lvl, idx, false); + if (err) { + dev_err(pf->dev, "Failed to config TXSCH\n"); + mutex_unlock(&mbox->lock); + goto err_free_nix_queues; + } } } @@ -1601,8 +1648,9 @@ exit: mutex_unlock(&mbox->lock); return err; } +EXPORT_SYMBOL(otx2_init_hw_resources); -static void otx2_free_hw_resources(struct otx2_nic *pf) +void otx2_free_hw_resources(struct otx2_nic *pf) { struct otx2_qset *qset = &pf->qset; struct nix_lf_free_req *free_req; @@ -1624,11 +1672,12 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) otx2_pfc_txschq_stop(pf); #endif - otx2_clean_qos_queues(pf); + if (!otx2_rep_dev(pf->pdev)) + otx2_clean_qos_queues(pf); mutex_lock(&mbox->lock); /* Disable backpressure */ - if (!(pf->pcifunc & RVU_PFVF_FUNC_MASK)) + if (!is_otx2_lbkvf(pf->pdev)) otx2_nix_config_bp(pf, false); mutex_unlock(&mbox->lock); @@ -1660,7 +1709,8 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) otx2_free_cq_res(pf); /* Free all ingress bandwidth profiles allocated */ - cn10k_free_all_ipolicers(pf); + if (!otx2_rep_dev(pf->pdev)) + cn10k_free_all_ipolicers(pf); mutex_lock(&mbox->lock); /* Reset NIX LF */ @@ -1688,6 +1738,7 @@ static void otx2_free_hw_resources(struct otx2_nic *pf) } mutex_unlock(&mbox->lock); } +EXPORT_SYMBOL(otx2_free_hw_resources); static bool otx2_promisc_use_mce_list(struct otx2_nic *pfvf) { @@ -1770,15 +1821,24 @@ static void otx2_dim_work(struct work_struct *w) dim->state = DIM_START_MEASURE; } -int otx2_open(struct net_device *netdev) +void otx2_free_queue_mem(struct otx2_qset *qset) +{ + kfree(qset->sq); + qset->sq = NULL; + kfree(qset->cq); + qset->cq = NULL; + kfree(qset->rq); + qset->rq = NULL; + kfree(qset->napi); + qset->napi = NULL; +} +EXPORT_SYMBOL(otx2_free_queue_mem); + +int otx2_alloc_queue_mem(struct otx2_nic *pf) { - struct otx2_nic *pf = netdev_priv(netdev); - struct otx2_cq_poll *cq_poll = NULL; struct otx2_qset *qset = &pf->qset; - int err = 0, qidx, vec; - char *irq_name; + struct otx2_cq_poll *cq_poll; - netif_carrier_off(netdev); /* RQ and SQs are mapped to different CQs, * so find out max CQ IRQs (i.e CINTs) needed. @@ -1798,7 +1858,6 @@ int otx2_open(struct net_device *netdev) /* CQ size of SQ */ qset->sqe_cnt = qset->sqe_cnt ? 
qset->sqe_cnt : Q_COUNT(Q_SIZE_4K); - err = -ENOMEM; qset->cq = kcalloc(pf->qset.cq_cnt, sizeof(struct otx2_cq_queue), GFP_KERNEL); if (!qset->cq) @@ -1814,6 +1873,28 @@ int otx2_open(struct net_device *netdev) if (!qset->rq) goto err_free_mem; + return 0; + +err_free_mem: + otx2_free_queue_mem(qset); + return -ENOMEM; +} +EXPORT_SYMBOL(otx2_alloc_queue_mem); + +int otx2_open(struct net_device *netdev) +{ + struct otx2_nic *pf = netdev_priv(netdev); + struct otx2_cq_poll *cq_poll = NULL; + struct otx2_qset *qset = &pf->qset; + int err = 0, qidx, vec; + char *irq_name; + + netif_carrier_off(netdev); + + err = otx2_alloc_queue_mem(pf); + if (err) + return err; + err = otx2_init_hw_resources(pf); if (err) goto err_free_mem; @@ -1932,6 +2013,7 @@ int otx2_open(struct net_device *netdev) } pf->flags &= ~OTX2_FLAG_INTF_DOWN; + pf->flags &= ~OTX2_FLAG_PORT_UP; /* 'intf_down' may be checked on any cpu */ smp_wmb(); @@ -1979,10 +2061,7 @@ err_disable_napi: otx2_disable_napi(pf); otx2_free_hw_resources(pf); err_free_mem: - kfree(qset->sq); - kfree(qset->cq); - kfree(qset->rq); - kfree(qset->napi); + otx2_free_queue_mem(qset); return err; } EXPORT_SYMBOL(otx2_open); @@ -2047,11 +2126,7 @@ int otx2_stop(struct net_device *netdev) for (qidx = 0; qidx < netdev->num_tx_queues; qidx++) netdev_tx_reset_queue(netdev_get_tx_queue(netdev, qidx)); - - kfree(qset->sq); - kfree(qset->cq); - kfree(qset->rq); - kfree(qset->napi); + otx2_free_queue_mem(qset); /* Do not clear RQ/SQ ringsize settings */ memset_startat(qset, 0, sqe_cnt); return 0; @@ -2081,7 +2156,7 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev) sq = &pf->qset.sq[sq_idx]; txq = netdev_get_tx_queue(netdev, qidx); - if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) { + if (!otx2_sq_append_skb(pf, txq, sq, skb, qidx)) { netif_tx_stop_queue(txq); /* Check again, incase SQBs got freed up */ @@ -2786,7 +2861,7 @@ static const struct net_device_ops otx2_netdev_ops = { .ndo_set_vf_trust = otx2_ndo_set_vf_trust, }; -static int otx2_wq_init(struct otx2_nic *pf) +int otx2_wq_init(struct otx2_nic *pf) { pf->otx2_wq = create_singlethread_workqueue("otx2_wq"); if (!pf->otx2_wq) @@ -2797,7 +2872,7 @@ static int otx2_wq_init(struct otx2_nic *pf) return 0; } -static int otx2_check_pf_usable(struct otx2_nic *nic) +int otx2_check_pf_usable(struct otx2_nic *nic) { u64 rev; @@ -2815,7 +2890,7 @@ static int otx2_check_pf_usable(struct otx2_nic *nic) return 0; } -static int otx2_realloc_msix_vectors(struct otx2_nic *pf) +int otx2_realloc_msix_vectors(struct otx2_nic *pf) { struct otx2_hw *hw = &pf->hw; int num_vec, err; @@ -2837,6 +2912,7 @@ static int otx2_realloc_msix_vectors(struct otx2_nic *pf) return otx2_register_mbox_intr(pf, false); } +EXPORT_SYMBOL(otx2_realloc_msix_vectors); static int otx2_sriov_vfcfg_init(struct otx2_nic *pf) { @@ -2872,6 +2948,88 @@ static void otx2_sriov_vfcfg_cleanup(struct otx2_nic *pf) } } +int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf) +{ + struct device *dev = &pdev->dev; + struct otx2_hw *hw = &pf->hw; + int num_vec, err; + + num_vec = pci_msix_vec_count(pdev); + hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE, + GFP_KERNEL); + if (!hw->irq_name) + return -ENOMEM; + + hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec, + sizeof(cpumask_var_t), GFP_KERNEL); + if (!hw->affinity_mask) + return -ENOMEM; + + /* Map CSRs */ + pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0); + if (!pf->reg_base) { + dev_err(dev, "Unable to map physical function CSRs, aborting\n"); 
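The otx2_init_rsrc() helper being carved out of otx2_probe() here, together with the otx2_alloc_queue_mem()/otx2_free_queue_mem() split above, lets the new rvu_rep module reuse the PF bring-up path instead of duplicating it. A consumer pairs the helpers roughly as below; this is a sketch with error handling trimmed, and rvu_rep_rsrc_init() later in this diff is the real in-tree user:

	/* bring-up: queue memory first, then NPA/NIX hardware resources */
	err = otx2_alloc_queue_mem(pf);
	if (err)
		return err;
	err = otx2_init_hw_resources(pf);
	if (err)
		otx2_free_queue_mem(&pf->qset);

	/* tear-down mirrors the order in reverse */
	otx2_free_hw_resources(pf);
	otx2_free_queue_mem(&pf->qset);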
+ return -ENOMEM; + } + + err = otx2_check_pf_usable(pf); + if (err) + return err; + + err = pci_alloc_irq_vectors(hw->pdev, RVU_PF_INT_VEC_CNT, + RVU_PF_INT_VEC_CNT, PCI_IRQ_MSIX); + if (err < 0) { + dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n", + __func__, num_vec); + return err; + } + + otx2_setup_dev_hw_settings(pf); + + /* Init PF <=> AF mailbox stuff */ + err = otx2_pfaf_mbox_init(pf); + if (err) + goto err_free_irq_vectors; + + /* Register mailbox interrupt */ + err = otx2_register_mbox_intr(pf, true); + if (err) + goto err_mbox_destroy; + + /* Request AF to attach NPA and NIX LFs to this PF. + * NIX and NPA LFs are needed for this PF to function as a NIC. + */ + err = otx2_attach_npa_nix(pf); + if (err) + goto err_disable_mbox_intr; + + err = otx2_realloc_msix_vectors(pf); + if (err) + goto err_detach_rsrc; + + err = cn10k_lmtst_init(pf); + if (err) + goto err_detach_rsrc; + + return 0; + +err_detach_rsrc: + if (pf->hw.lmt_info) + free_percpu(pf->hw.lmt_info); + if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) + qmem_free(pf->dev, pf->dync_lmt); + otx2_detach_resources(&pf->mbox); +err_disable_mbox_intr: + otx2_disable_mbox_intr(pf); +err_mbox_destroy: + otx2_pfaf_mbox_destroy(pf); +err_free_irq_vectors: + pci_free_irq_vectors(hw->pdev); + + return err; +} +EXPORT_SYMBOL(otx2_init_rsrc); + static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; @@ -2879,7 +3037,6 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) struct net_device *netdev; struct otx2_nic *pf; struct otx2_hw *hw; - int num_vec; err = pcim_enable_device(pdev); if (err) { @@ -2930,72 +3087,14 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) /* Use CQE of 128 byte descriptor size by default */ hw->xqe_size = 128; - num_vec = pci_msix_vec_count(pdev); - hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE, - GFP_KERNEL); - if (!hw->irq_name) { - err = -ENOMEM; - goto err_free_netdev; - } - - hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec, - sizeof(cpumask_var_t), GFP_KERNEL); - if (!hw->affinity_mask) { - err = -ENOMEM; - goto err_free_netdev; - } - - /* Map CSRs */ - pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0); - if (!pf->reg_base) { - dev_err(dev, "Unable to map physical function CSRs, aborting\n"); - err = -ENOMEM; - goto err_free_netdev; - } - - err = otx2_check_pf_usable(pf); + err = otx2_init_rsrc(pdev, pf); if (err) goto err_free_netdev; - err = pci_alloc_irq_vectors(hw->pdev, RVU_PF_INT_VEC_CNT, - RVU_PF_INT_VEC_CNT, PCI_IRQ_MSIX); - if (err < 0) { - dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n", - __func__, num_vec); - goto err_free_netdev; - } - - otx2_setup_dev_hw_settings(pf); - - /* Init PF <=> AF mailbox stuff */ - err = otx2_pfaf_mbox_init(pf); - if (err) - goto err_free_irq_vectors; - - /* Register mailbox interrupt */ - err = otx2_register_mbox_intr(pf, true); - if (err) - goto err_mbox_destroy; - - /* Request AF to attach NPA and NIX LFs to this PF. - * NIX and NPA LFs are needed for this PF to function as a NIC. 
- */ - err = otx2_attach_npa_nix(pf); - if (err) - goto err_disable_mbox_intr; - - err = otx2_realloc_msix_vectors(pf); - if (err) - goto err_detach_rsrc; - err = otx2_set_real_num_queues(netdev, hw->tx_queues, hw->rx_queues); if (err) goto err_detach_rsrc; - err = cn10k_lmtst_init(pf); - if (err) - goto err_detach_rsrc; - /* Assign default mac address */ otx2_get_mac_from_af(netdev); @@ -3058,6 +3157,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) netdev->min_mtu = OTX2_MIN_MTU; netdev->max_mtu = otx2_get_max_mtu(pf); + hw->max_mtu = netdev->max_mtu; /* reset CGX/RPM MAC stats */ otx2_reset_mac_stats(pf); @@ -3118,11 +3218,8 @@ err_detach_rsrc: if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) qmem_free(pf->dev, pf->dync_lmt); otx2_detach_resources(&pf->mbox); -err_disable_mbox_intr: otx2_disable_mbox_intr(pf); -err_mbox_destroy: otx2_pfaf_mbox_destroy(pf); -err_free_irq_vectors: pci_free_irq_vectors(hw->pdev); err_free_netdev: pci_set_drvdata(pdev, NULL); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c index e63cc1eb6d89..9a226ca74425 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c @@ -443,6 +443,7 @@ static int otx2_tc_parse_actions(struct otx2_nic *nic, struct flow_action_entry *act; struct net_device *target; struct otx2_nic *priv; + struct rep_dev *rdev; u32 burst, mark = 0; u8 nr_police = 0; u8 num_intf = 1; @@ -464,14 +465,18 @@ static int otx2_tc_parse_actions(struct otx2_nic *nic, return 0; case FLOW_ACTION_REDIRECT_INGRESS: target = act->dev; - priv = netdev_priv(target); - /* npc_install_flow_req doesn't support passing a target pcifunc */ - if (rvu_get_pf(nic->pcifunc) != rvu_get_pf(priv->pcifunc)) { - NL_SET_ERR_MSG_MOD(extack, - "can't redirect to other pf/vf"); - return -EOPNOTSUPP; + if (target->dev.parent) { + priv = netdev_priv(target); + if (rvu_get_pf(nic->pcifunc) != rvu_get_pf(priv->pcifunc)) { + NL_SET_ERR_MSG_MOD(extack, + "can't redirect to other pf/vf"); + return -EOPNOTSUPP; + } + req->vf = priv->pcifunc & RVU_PFVF_FUNC_MASK; + } else { + rdev = netdev_priv(target); + req->vf = rdev->pcifunc & RVU_PFVF_FUNC_MASK; } - req->vf = priv->pcifunc & RVU_PFVF_FUNC_MASK; /* if op is already set; avoid overwriting the same */ if (!req->op) @@ -1300,6 +1305,7 @@ static int otx2_tc_add_flow(struct otx2_nic *nic, req->channel = nic->hw.rx_chan_base; req->entry = flow_cfg->flow_ent[mcam_idx]; req->intf = NIX_INTF_RX; + req->vf = nic->pcifunc; req->set_cntr = 1; new_node->entry = req->entry; @@ -1400,8 +1406,8 @@ static int otx2_tc_get_flow_stats(struct otx2_nic *nic, return 0; } -static int otx2_setup_tc_cls_flower(struct otx2_nic *nic, - struct flow_cls_offload *cls_flower) +int otx2_setup_tc_cls_flower(struct otx2_nic *nic, + struct flow_cls_offload *cls_flower) { switch (cls_flower->command) { case FLOW_CLS_REPLACE: @@ -1414,6 +1420,7 @@ static int otx2_setup_tc_cls_flower(struct otx2_nic *nic, return -EOPNOTSUPP; } } +EXPORT_SYMBOL(otx2_setup_tc_cls_flower); static int otx2_tc_ingress_matchall_install(struct otx2_nic *nic, struct tc_cls_matchall_offload *cls) diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c index 933e18ba2fb2..04bc06a80e23 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c @@ -376,9 +376,11 @@ static void 
otx2_rcv_pkt_handler(struct otx2_nic *pfvf, } otx2_set_rxhash(pfvf, cqe, skb); - skb_record_rx_queue(skb, cq->cq_idx); - if (pfvf->netdev->features & NETIF_F_RXCSUM) - skb->ip_summed = CHECKSUM_UNNECESSARY; + if (!(pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED)) { + skb_record_rx_queue(skb, cq->cq_idx); + if (pfvf->netdev->features & NETIF_F_RXCSUM) + skb->ip_summed = CHECKSUM_UNNECESSARY; + } if (pfvf->flags & OTX2_FLAG_TC_MARK_ENABLED) skb->mark = parse->match_id; @@ -453,6 +455,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf, int tx_pkts = 0, tx_bytes = 0, qidx; struct otx2_snd_queue *sq; struct nix_cqe_tx_s *cqe; + struct net_device *ndev; int processed_cqe = 0; if (cq->pend_cqe >= budget) @@ -493,6 +496,13 @@ process_cqe: otx2_write64(pfvf, NIX_LF_CQ_OP_DOOR, ((u64)cq->cq_idx << 32) | processed_cqe); +#if IS_ENABLED(CONFIG_RVU_ESWITCH) + if (pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED) + ndev = pfvf->reps[qidx]->netdev; + else +#endif + ndev = pfvf->netdev; + if (likely(tx_pkts)) { struct netdev_queue *txq; @@ -500,12 +510,14 @@ process_cqe: if (qidx >= pfvf->hw.tx_queues) qidx -= pfvf->hw.xdp_queues; - txq = netdev_get_tx_queue(pfvf->netdev, qidx); + if (pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED) + qidx = 0; + txq = netdev_get_tx_queue(ndev, qidx); netdev_tx_completed_queue(txq, tx_pkts, tx_bytes); /* Check if queue was stopped earlier due to ring full */ smp_mb(); if (netif_tx_queue_stopped(txq) && - netif_carrier_ok(pfvf->netdev)) + netif_carrier_ok(ndev)) netif_tx_wake_queue(txq); } return 0; @@ -527,7 +539,7 @@ static void otx2_adjust_adaptive_coalese(struct otx2_nic *pfvf, struct otx2_cq_p rx_frames + tx_frames, rx_bytes + tx_bytes, &dim_sample); - net_dim(&cq_poll->dim, dim_sample); + net_dim(&cq_poll->dim, &dim_sample); } int otx2_napi_handler(struct napi_struct *napi, int budget) @@ -594,6 +606,7 @@ int otx2_napi_handler(struct napi_struct *napi, int budget) } return workdone; } +EXPORT_SYMBOL(otx2_napi_handler); void otx2_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx) @@ -1141,13 +1154,13 @@ static void otx2_set_txtstamp(struct otx2_nic *pfvf, struct sk_buff *skb, } } -bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, +bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq, + struct otx2_snd_queue *sq, struct sk_buff *skb, u16 qidx) { - struct netdev_queue *txq = netdev_get_tx_queue(netdev, qidx); - struct otx2_nic *pfvf = netdev_priv(netdev); int offset, num_segs, free_desc; struct nix_sqe_hdr_s *sqe_hdr; + struct otx2_nic *pfvf = dev; /* Check if there is enough room between producer * and consumer index. 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h index 3f1d2655ff77..e1db5f961877 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h @@ -167,7 +167,8 @@ static inline u64 otx2_iova_to_phys(void *iommu_domain, dma_addr_t dma_addr) } int otx2_napi_handler(struct napi_struct *napi, int budget); -bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq, +bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq, + struct otx2_snd_queue *sq, struct sk_buff *skb, u16 qidx); void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c index 99fcc5661674..839fc77c11b2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c @@ -21,6 +21,7 @@ static const struct pci_device_id otx2_vf_id_table[] = { { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AFVF) }, { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF) }, + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_SDP_REP) }, { } }; @@ -371,7 +372,7 @@ static int otx2vf_open(struct net_device *netdev) /* LBKs do not receive link events so tell everyone we are up here */ vf = netdev_priv(netdev); - if (is_otx2_lbkvf(vf->pdev)) { + if (is_otx2_lbkvf(vf->pdev) || is_otx2_sdp_rep(vf->pdev)) { pr_info("%s NIC Link is UP\n", netdev->name); netif_carrier_on(netdev); netif_tx_start_all_queues(netdev); @@ -395,7 +396,7 @@ static netdev_tx_t otx2vf_xmit(struct sk_buff *skb, struct net_device *netdev) sq = &vf->qset.sq[qidx]; txq = netdev_get_tx_queue(netdev, qidx); - if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) { + if (!otx2_sq_append_skb(vf, txq, sq, skb, qidx)) { netif_tx_stop_queue(txq); /* Check again, incase SQBs got freed up */ @@ -500,7 +501,7 @@ static const struct net_device_ops otx2vf_netdev_ops = { .ndo_setup_tc = otx2_setup_tc, }; -static int otx2_wq_init(struct otx2_nic *vf) +static int otx2_vf_wq_init(struct otx2_nic *vf) { vf->otx2_wq = create_singlethread_workqueue("otx2vf_wq"); if (!vf->otx2_wq) @@ -671,6 +672,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) netdev->min_mtu = OTX2_MIN_MTU; netdev->max_mtu = otx2_get_max_mtu(vf); + hw->max_mtu = netdev->max_mtu; /* To distinguish, for LBK VFs set netdev name explicitly */ if (is_otx2_lbkvf(vf->pdev)) { @@ -682,13 +684,22 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id) snprintf(netdev->name, sizeof(netdev->name), "lbk%d", n); } + if (is_otx2_sdp_rep(vf->pdev)) { + int n; + + n = vf->pcifunc & RVU_PFVF_FUNC_MASK; + n -= 1; + snprintf(netdev->name, sizeof(netdev->name), "sdp%d-%d", + pdev->bus->number, n); + } + err = register_netdev(netdev); if (err) { dev_err(dev, "Failed to register netdevice\n"); goto err_ptp_destroy; } - err = otx2_wq_init(vf); + err = otx2_vf_wq_init(vf); if (err) goto err_unreg_netdev; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c new file mode 100644 index 000000000000..232b10740c13 --- /dev/null +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c @@ -0,0 +1,864 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell RVU representor driver + * + * Copyright (C) 2024 Marvell. 
+ * + */ + +#include <linux/etherdevice.h> +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/net_tstamp.h> +#include <linux/sort.h> + +#include "otx2_common.h" +#include "cn10k.h" +#include "otx2_reg.h" +#include "rep.h" + +#define DRV_NAME "rvu_rep" +#define DRV_STRING "Marvell RVU Representor Driver" + +static const struct pci_device_id rvu_rep_id_table[] = { + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_RVU_REP) }, + { } +}; + +MODULE_AUTHOR("Marvell International Ltd."); +MODULE_DESCRIPTION(DRV_STRING); +MODULE_LICENSE("GPL"); +MODULE_DEVICE_TABLE(pci, rvu_rep_id_table); + +static int rvu_rep_notify_pfvf(struct otx2_nic *priv, u16 event, + struct rep_event *data); + +static int rvu_rep_mcam_flow_init(struct rep_dev *rep) +{ + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + struct otx2_nic *priv = rep->mdev; + int ent, allocated = 0; + int count; + + rep->flow_cfg = kcalloc(1, sizeof(struct otx2_flow_config), GFP_KERNEL); + + if (!rep->flow_cfg) + return -ENOMEM; + + count = OTX2_DEFAULT_FLOWCOUNT; + + rep->flow_cfg->flow_ent = kcalloc(count, sizeof(u16), GFP_KERNEL); + if (!rep->flow_cfg->flow_ent) + return -ENOMEM; + + while (allocated < count) { + req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(&priv->mbox); + if (!req) + goto exit; + + req->hdr.pcifunc = rep->pcifunc; + req->contig = false; + req->ref_entry = 0; + req->count = (count - allocated) > NPC_MAX_NONCONTIG_ENTRIES ? + NPC_MAX_NONCONTIG_ENTRIES : count - allocated; + + if (otx2_sync_mbox_msg(&priv->mbox)) + goto exit; + + rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp + (&priv->mbox.mbox, 0, &req->hdr); + + for (ent = 0; ent < rsp->count; ent++) + rep->flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent]; + + allocated += rsp->count; + + if (rsp->count != req->count) + break; + } +exit: + /* Multiple MCAM entry alloc requests could result in non-sequential + * MCAM entries in the flow_ent[] array. Sort them in an ascending + * order, otherwise user installed ntuple filter index and MCAM entry + * index will not be in sync. 
+ */ + if (allocated) + sort(&rep->flow_cfg->flow_ent[0], allocated, + sizeof(rep->flow_cfg->flow_ent[0]), mcam_entry_cmp, NULL); + + mutex_unlock(&priv->mbox.lock); + + rep->flow_cfg->max_flows = allocated; + + if (allocated) { + rep->flags |= OTX2_FLAG_MCAM_ENTRIES_ALLOC; + rep->flags |= OTX2_FLAG_NTUPLE_SUPPORT; + rep->flags |= OTX2_FLAG_TC_FLOWER_SUPPORT; + } + + INIT_LIST_HEAD(&rep->flow_cfg->flow_list); + INIT_LIST_HEAD(&rep->flow_cfg->flow_list_tc); + return 0; +} + +static int rvu_rep_setup_tc_cb(enum tc_setup_type type, + void *type_data, void *cb_priv) +{ + struct rep_dev *rep = cb_priv; + struct otx2_nic *priv = rep->mdev; + + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return -EINVAL; + + if (!(rep->flags & OTX2_FLAG_TC_FLOWER_SUPPORT)) + rvu_rep_mcam_flow_init(rep); + + priv->netdev = rep->netdev; + priv->flags = rep->flags; + priv->pcifunc = rep->pcifunc; + priv->flow_cfg = rep->flow_cfg; + + switch (type) { + case TC_SETUP_CLSFLOWER: + return otx2_setup_tc_cls_flower(priv, type_data); + default: + return -EOPNOTSUPP; + } +} + +static LIST_HEAD(rvu_rep_block_cb_list); +static int rvu_rep_setup_tc(struct net_device *netdev, enum tc_setup_type type, + void *type_data) +{ + struct rvu_rep *rep = netdev_priv(netdev); + + switch (type) { + case TC_SETUP_BLOCK: + return flow_block_cb_setup_simple(type_data, + &rvu_rep_block_cb_list, + rvu_rep_setup_tc_cb, + rep, rep, true); + default: + return -EOPNOTSUPP; + } +} + +static int +rvu_rep_sp_stats64(const struct net_device *dev, + struct rtnl_link_stats64 *stats) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct otx2_rcv_queue *rq; + struct otx2_snd_queue *sq; + u16 qidx = rep->rep_id; + + otx2_update_rq_stats(priv, qidx); + rq = &priv->qset.rq[qidx]; + + otx2_update_sq_stats(priv, qidx); + sq = &priv->qset.sq[qidx]; + + stats->tx_bytes = sq->stats.bytes; + stats->tx_packets = sq->stats.pkts; + stats->rx_bytes = rq->stats.bytes; + stats->rx_packets = rq->stats.pkts; + return 0; +} + +static bool +rvu_rep_has_offload_stats(const struct net_device *dev, int attr_id) +{ + return attr_id == IFLA_OFFLOAD_XSTATS_CPU_HIT; +} + +static int +rvu_rep_get_offload_stats(int attr_id, const struct net_device *dev, + void *sp) +{ + if (attr_id == IFLA_OFFLOAD_XSTATS_CPU_HIT) + return rvu_rep_sp_stats64(dev, (struct rtnl_link_stats64 *)sp); + + return -EINVAL; +} + +static int rvu_rep_dl_port_fn_hw_addr_get(struct devlink_port *port, + u8 *hw_addr, int *hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct rep_dev *rep = container_of(port, struct rep_dev, dl_port); + + ether_addr_copy(hw_addr, rep->mac); + *hw_addr_len = ETH_ALEN; + return 0; +} + +static int rvu_rep_dl_port_fn_hw_addr_set(struct devlink_port *port, + const u8 *hw_addr, int hw_addr_len, + struct netlink_ext_ack *extack) +{ + struct rep_dev *rep = container_of(port, struct rep_dev, dl_port); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + eth_hw_addr_set(rep->netdev, hw_addr); + ether_addr_copy(rep->mac, hw_addr); + + ether_addr_copy(evt.evt_data.mac, hw_addr); + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_MAC_ADDR_CHANGE, &evt); + return 0; +} + +static const struct devlink_port_ops rvu_rep_dl_port_ops = { + .port_fn_hw_addr_get = rvu_rep_dl_port_fn_hw_addr_get, + .port_fn_hw_addr_set = rvu_rep_dl_port_fn_hw_addr_set, +}; + +static void +rvu_rep_devlink_set_switch_id(struct otx2_nic *priv, + struct netdev_phys_item_id *ppid) +{ + struct pci_dev *pdev = priv->pdev; + u64 id; + + id = 
pci_get_dsn(pdev); + + ppid->id_len = sizeof(id); + put_unaligned_be64(id, &ppid->id); +} + +static void rvu_rep_devlink_port_unregister(struct rep_dev *rep) +{ + devlink_port_unregister(&rep->dl_port); +} + +static int rvu_rep_devlink_port_register(struct rep_dev *rep) +{ + struct devlink_port_attrs attrs = {}; + struct otx2_nic *priv = rep->mdev; + struct devlink *dl = priv->dl->dl; + int err; + + if (!(rep->pcifunc & RVU_PFVF_FUNC_MASK)) { + attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL; + attrs.phys.port_number = rvu_get_pf(rep->pcifunc); + } else { + attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF; + attrs.pci_vf.pf = rvu_get_pf(rep->pcifunc); + attrs.pci_vf.vf = rep->pcifunc & RVU_PFVF_FUNC_MASK; + } + + rvu_rep_devlink_set_switch_id(priv, &attrs.switch_id); + devlink_port_attrs_set(&rep->dl_port, &attrs); + + err = devl_port_register_with_ops(dl, &rep->dl_port, rep->rep_id, + &rvu_rep_dl_port_ops); + if (err) { + dev_err(rep->mdev->dev, "devlink_port_register failed: %d\n", + err); + return err; + } + return 0; +} + +static int rvu_rep_get_repid(struct otx2_nic *priv, u16 pcifunc) +{ + int rep_id; + + for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) + if (priv->rep_pf_map[rep_id] == pcifunc) + return rep_id; + return -EINVAL; +} + +static int rvu_rep_notify_pfvf(struct otx2_nic *priv, u16 event, + struct rep_event *data) +{ + struct rep_event *req; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_rep_event_notify(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return -ENOMEM; + } + req->event = event; + req->pcifunc = data->pcifunc; + + memcpy(&req->evt_data, &data->evt_data, sizeof(struct rep_evt_data)); + otx2_sync_mbox_msg(&priv->mbox); + mutex_unlock(&priv->mbox.lock); + return 0; +} + +static void rvu_rep_state_evt_handler(struct otx2_nic *priv, + struct rep_event *info) +{ + struct rep_dev *rep; + int rep_id; + + rep_id = rvu_rep_get_repid(priv, info->pcifunc); + rep = priv->reps[rep_id]; + if (info->evt_data.vf_state) + rep->flags |= RVU_REP_VF_INITIALIZED; + else + rep->flags &= ~RVU_REP_VF_INITIALIZED; +} + +int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info) +{ + if (info->event & RVU_EVENT_PFVF_STATE) + rvu_rep_state_evt_handler(pf, info); + return 0; +} + +static int rvu_rep_change_mtu(struct net_device *dev, int new_mtu) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + netdev_info(dev, "Changing MTU from %d to %d\n", + dev->mtu, new_mtu); + dev->mtu = new_mtu; + + evt.evt_data.mtu = new_mtu; + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_MTU_CHANGE, &evt); + return 0; +} + +static void rvu_rep_get_stats(struct work_struct *work) +{ + struct delayed_work *del_work = to_delayed_work(work); + struct nix_stats_req *req; + struct nix_stats_rsp *rsp; + struct rep_stats *stats; + struct otx2_nic *priv; + struct rep_dev *rep; + int err; + + rep = container_of(del_work, struct rep_dev, stats_wrk); + priv = rep->mdev; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_nix_lf_stats(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return; + } + req->pcifunc = rep->pcifunc; + err = otx2_sync_mbox_msg_busy_poll(&priv->mbox); + if (err) + goto exit; + + rsp = (struct nix_stats_rsp *) + otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr); + + if (IS_ERR(rsp)) { + err = PTR_ERR(rsp); + goto exit; + } + + stats = &rep->stats; + stats->rx_bytes = rsp->rx.octs; + stats->rx_frames = rsp->rx.ucast + rsp->rx.bcast + + rsp->rx.mcast; + 
stats->rx_drops = rsp->rx.drop; + stats->rx_mcast_frames = rsp->rx.mcast; + stats->tx_bytes = rsp->tx.octs; + stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast; + stats->tx_drops = rsp->tx.drop; +exit: + mutex_unlock(&priv->mbox.lock); +} + +static void rvu_rep_get_stats64(struct net_device *dev, + struct rtnl_link_stats64 *stats) +{ + struct rep_dev *rep = netdev_priv(dev); + + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return; + + stats->rx_packets = rep->stats.rx_frames; + stats->rx_bytes = rep->stats.rx_bytes; + stats->rx_dropped = rep->stats.rx_drops; + stats->multicast = rep->stats.rx_mcast_frames; + + stats->tx_packets = rep->stats.tx_frames; + stats->tx_bytes = rep->stats.tx_bytes; + stats->tx_dropped = rep->stats.tx_drops; + + schedule_delayed_work(&rep->stats_wrk, msecs_to_jiffies(100)); +} + +static int rvu_eswitch_config(struct otx2_nic *priv, u8 ena) +{ + struct esw_cfg_req *req; + + mutex_lock(&priv->mbox.lock); + req = otx2_mbox_alloc_msg_esw_cfg(&priv->mbox); + if (!req) { + mutex_unlock(&priv->mbox.lock); + return -ENOMEM; + } + req->ena = ena; + otx2_sync_mbox_msg(&priv->mbox); + mutex_unlock(&priv->mbox.lock); + return 0; +} + +static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *pf = rep->mdev; + struct otx2_snd_queue *sq; + struct netdev_queue *txq; + + sq = &pf->qset.sq[rep->rep_id]; + txq = netdev_get_tx_queue(dev, 0); + + if (!otx2_sq_append_skb(pf, txq, sq, skb, rep->rep_id)) { + netif_tx_stop_queue(txq); + + /* Check again, in case SQBs got freed up */ + smp_mb(); + if (((sq->num_sqbs - *sq->aura_fc_addr) * sq->sqe_per_sqb) + > sq->sqe_thresh) + netif_tx_wake_queue(txq); + + return NETDEV_TX_BUSY; + } + return NETDEV_TX_OK; +} + +static int rvu_rep_open(struct net_device *dev) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return 0; + + netif_carrier_on(dev); + netif_tx_start_all_queues(dev); + + evt.event = RVU_EVENT_PORT_STATE; + evt.evt_data.port_state = 1; + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_PORT_STATE, &evt); + return 0; +} + +static int rvu_rep_stop(struct net_device *dev) +{ + struct rep_dev *rep = netdev_priv(dev); + struct otx2_nic *priv = rep->mdev; + struct rep_event evt = {0}; + + if (!(rep->flags & RVU_REP_VF_INITIALIZED)) + return 0; + + netif_carrier_off(dev); + netif_tx_disable(dev); + + evt.event = RVU_EVENT_PORT_STATE; + evt.pcifunc = rep->pcifunc; + rvu_rep_notify_pfvf(priv, RVU_EVENT_PORT_STATE, &evt); + return 0; +} + +static const struct net_device_ops rvu_rep_netdev_ops = { + .ndo_open = rvu_rep_open, + .ndo_stop = rvu_rep_stop, + .ndo_start_xmit = rvu_rep_xmit, + .ndo_get_stats64 = rvu_rep_get_stats64, + .ndo_change_mtu = rvu_rep_change_mtu, + .ndo_has_offload_stats = rvu_rep_has_offload_stats, + .ndo_get_offload_stats = rvu_rep_get_offload_stats, + .ndo_setup_tc = rvu_rep_setup_tc, +}; + +static int rvu_rep_napi_init(struct otx2_nic *priv, + struct netlink_ext_ack *extack) +{ + struct otx2_qset *qset = &priv->qset; + struct otx2_cq_poll *cq_poll = NULL; + struct otx2_hw *hw = &priv->hw; + int err = 0, qidx, vec; + char *irq_name; + + qset->napi = kcalloc(hw->cint_cnt, sizeof(*cq_poll), GFP_KERNEL); + if (!qset->napi) + return -ENOMEM; + + /* Register NAPI handler */ + for (qidx = 0; qidx < hw->cint_cnt; qidx++) { + cq_poll = &qset->napi[qidx]; + cq_poll->cint_idx = qidx; + 
cq_poll->cq_ids[CQ_RX] = + (qidx < hw->rx_queues) ? qidx : CINT_INVALID_CQ; + cq_poll->cq_ids[CQ_TX] = (qidx < hw->tx_queues) ? + qidx + hw->rx_queues : + CINT_INVALID_CQ; + cq_poll->cq_ids[CQ_XDP] = CINT_INVALID_CQ; + cq_poll->cq_ids[CQ_QOS] = CINT_INVALID_CQ; + + cq_poll->dev = (void *)priv; + netif_napi_add(priv->reps[qidx]->netdev, &cq_poll->napi, + otx2_napi_handler); + napi_enable(&cq_poll->napi); + } + /* Register CQ IRQ handlers */ + vec = hw->nix_msixoff + NIX_LF_CINT_VEC_START; + for (qidx = 0; qidx < hw->cint_cnt; qidx++) { + irq_name = &hw->irq_name[vec * NAME_SIZE]; + + snprintf(irq_name, NAME_SIZE, "rep%d-rxtx-%d", qidx, qidx); + + err = request_irq(pci_irq_vector(priv->pdev, vec), + otx2_cq_intr_handler, 0, irq_name, + &qset->napi[qidx]); + if (err) { + NL_SET_ERR_MSG_FMT_MOD(extack, + "RVU REP IRQ registration failed for CQ%d", + qidx); + goto err_free_cints; + } + vec++; + + /* Enable CQ IRQ */ + otx2_write64(priv, NIX_LF_CINTX_INT(qidx), BIT_ULL(0)); + otx2_write64(priv, NIX_LF_CINTX_ENA_W1S(qidx), BIT_ULL(0)); + } + priv->flags &= ~OTX2_FLAG_INTF_DOWN; + return 0; + +err_free_cints: + otx2_free_cints(priv, qidx); + otx2_disable_napi(priv); + return err; +} + +static void rvu_rep_free_cq_rsrc(struct otx2_nic *priv) +{ + struct otx2_qset *qset = &priv->qset; + struct otx2_cq_poll *cq_poll = NULL; + int qidx, vec; + + /* Cleanup CQ NAPI and IRQ */ + vec = priv->hw.nix_msixoff + NIX_LF_CINT_VEC_START; + for (qidx = 0; qidx < priv->hw.cint_cnt; qidx++) { + /* Disable interrupt */ + otx2_write64(priv, NIX_LF_CINTX_ENA_W1C(qidx), BIT_ULL(0)); + + synchronize_irq(pci_irq_vector(priv->pdev, vec)); + + cq_poll = &qset->napi[qidx]; + napi_synchronize(&cq_poll->napi); + vec++; + } + otx2_free_cints(priv, priv->hw.cint_cnt); + otx2_disable_napi(priv); +} + +static void rvu_rep_rsrc_free(struct otx2_nic *priv) +{ + struct otx2_qset *qset = &priv->qset; + struct delayed_work *work; + int wrk; + + for (wrk = 0; wrk < priv->qset.cq_cnt; wrk++) { + work = &priv->refill_wrk[wrk].pool_refill_work; + cancel_delayed_work_sync(work); + } + devm_kfree(priv->dev, priv->refill_wrk); + + otx2_free_hw_resources(priv); + otx2_free_queue_mem(qset); +} + +static int rvu_rep_rsrc_init(struct otx2_nic *priv) +{ + struct otx2_qset *qset = &priv->qset; + int err; + + err = otx2_alloc_queue_mem(priv); + if (err) + return err; + + priv->hw.max_mtu = otx2_get_max_mtu(priv); + priv->tx_max_pktlen = priv->hw.max_mtu + OTX2_ETH_HLEN; + priv->rbsize = ALIGN(priv->hw.rbuf_len, OTX2_ALIGN) + OTX2_HEAD_ROOM; + + err = otx2_init_hw_resources(priv); + if (err) + goto err_free_rsrc; + + /* Set maximum frame size allowed in HW */ + err = otx2_hw_set_mtu(priv, priv->hw.max_mtu); + if (err) { + dev_err(priv->dev, "Failed to set HW MTU\n"); + goto err_free_rsrc; + } + return 0; + +err_free_rsrc: + otx2_free_hw_resources(priv); + otx2_free_queue_mem(qset); + return err; +} + +void rvu_rep_destroy(struct otx2_nic *priv) +{ + struct rep_dev *rep; + int rep_id; + + rvu_eswitch_config(priv, false); + priv->flags |= OTX2_FLAG_INTF_DOWN; + rvu_rep_free_cq_rsrc(priv); + for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) { + rep = priv->reps[rep_id]; + unregister_netdev(rep->netdev); + rvu_rep_devlink_port_unregister(rep); + free_netdev(rep->netdev); + kfree(rep->flow_cfg); + } + kfree(priv->reps); + rvu_rep_rsrc_free(priv); +} + +int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack) +{ + int rep_cnt = priv->rep_cnt; + struct net_device *ndev; + struct rep_dev *rep; + int rep_id, err; + u16 pcifunc; + + err = 
+	err = rvu_rep_rsrc_init(priv);
+	if (err)
+		return -ENOMEM;
+
+	priv->reps = kcalloc(rep_cnt, sizeof(struct rep_dev *), GFP_KERNEL);
+	if (!priv->reps)
+		return -ENOMEM;
+
+	for (rep_id = 0; rep_id < rep_cnt; rep_id++) {
+		ndev = alloc_etherdev(sizeof(*rep));
+		if (!ndev) {
+			NL_SET_ERR_MSG_FMT_MOD(extack,
+					       "PFVF representor:%d creation failed",
+					       rep_id);
+			err = -ENOMEM;
+			goto exit;
+		}
+
+		rep = netdev_priv(ndev);
+		priv->reps[rep_id] = rep;
+		rep->mdev = priv;
+		rep->netdev = ndev;
+		rep->rep_id = rep_id;
+
+		ndev->min_mtu = OTX2_MIN_MTU;
+		ndev->max_mtu = priv->hw.max_mtu;
+		ndev->netdev_ops = &rvu_rep_netdev_ops;
+		pcifunc = priv->rep_pf_map[rep_id];
+		rep->pcifunc = pcifunc;
+
+		snprintf(ndev->name, sizeof(ndev->name), "Rpf%dvf%d",
+			 rvu_get_pf(pcifunc), (pcifunc & RVU_PFVF_FUNC_MASK));
+
+		ndev->hw_features = (NETIF_F_RXCSUM | NETIF_F_IP_CSUM |
+				     NETIF_F_IPV6_CSUM | NETIF_F_RXHASH |
+				     NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6);
+
+		ndev->hw_features |= NETIF_F_HW_TC;
+		ndev->features |= ndev->hw_features;
+		eth_hw_addr_random(ndev);
+		err = rvu_rep_devlink_port_register(rep);
+		if (err)
+			goto exit;
+
+		SET_NETDEV_DEVLINK_PORT(ndev, &rep->dl_port);
+		err = register_netdev(ndev);
+		if (err) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "PFVF representor registration failed");
+			free_netdev(ndev);
+			goto exit;
+		}
+
+		INIT_DELAYED_WORK(&rep->stats_wrk, rvu_rep_get_stats);
+	}
+	err = rvu_rep_napi_init(priv, extack);
+	if (err)
+		goto exit;
+
+	rvu_eswitch_config(priv, true);
+	return 0;
+exit:
+	while (--rep_id >= 0) {
+		rep = priv->reps[rep_id];
+		unregister_netdev(rep->netdev);
+		rvu_rep_devlink_port_unregister(rep);
+		free_netdev(rep->netdev);
+	}
+	kfree(priv->reps);
+	rvu_rep_rsrc_free(priv);
+	return err;
+}
+
+static int rvu_get_rep_cnt(struct otx2_nic *priv)
+{
+	struct get_rep_cnt_rsp *rsp;
+	struct mbox_msghdr *msghdr;
+	struct msg_req *req;
+	int err, rep;
+
+	mutex_lock(&priv->mbox.lock);
+	req = otx2_mbox_alloc_msg_get_rep_cnt(&priv->mbox);
+	if (!req) {
+		mutex_unlock(&priv->mbox.lock);
+		return -ENOMEM;
+	}
+	err = otx2_sync_mbox_msg(&priv->mbox);
+	if (err)
+		goto exit;
+
+	msghdr = otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr);
+	if (IS_ERR(msghdr)) {
+		err = PTR_ERR(msghdr);
+		goto exit;
+	}
+
+	rsp = (struct get_rep_cnt_rsp *)msghdr;
+	priv->hw.tx_queues = rsp->rep_cnt;
+	priv->hw.rx_queues = rsp->rep_cnt;
+	priv->rep_cnt = rsp->rep_cnt;
+	for (rep = 0; rep < priv->rep_cnt; rep++)
+		priv->rep_pf_map[rep] = rsp->rep_pf_map[rep];
+
+exit:
+	mutex_unlock(&priv->mbox.lock);
+	return err;
+}
+
+static int rvu_rep_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct device *dev = &pdev->dev;
+	struct otx2_nic *priv;
+	struct otx2_hw *hw;
+	int err;
+
+	err = pcim_enable_device(pdev);
+	if (err) {
+		dev_err(dev, "Failed to enable PCI device\n");
+		return err;
+	}
+
+	err = pci_request_regions(pdev, DRV_NAME);
+	if (err) {
+		dev_err(dev, "PCI request regions failed 0x%x\n", err);
+		return err;
+	}
+
+	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+	if (err) {
+		dev_err(dev, "DMA mask config failed, abort\n");
+		goto err_release_regions;
+	}
+
+	pci_set_master(pdev);
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv) {
+		err = -ENOMEM;
+		goto err_release_regions;
+	}
+
+	pci_set_drvdata(pdev, priv);
+	priv->pdev = pdev;
+	priv->dev = dev;
+	priv->flags |= OTX2_FLAG_INTF_DOWN;
+	priv->flags |= OTX2_FLAG_REP_MODE_ENABLED;
+
+	hw = &priv->hw;
+	hw->pdev = pdev;
+	hw->max_queues = OTX2_MAX_CQ_CNT;
+	hw->rbuf_len = OTX2_DEFAULT_RBUF_LEN;
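+	/* Use 128-byte completion queue entries */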
+	hw->xqe_size = 128;
+
+	err = otx2_init_rsrc(pdev, priv);
+	if (err)
+		goto err_release_regions;
+
+	priv->iommu_domain = iommu_get_domain_for_dev(dev);
+
+	err = rvu_get_rep_cnt(priv);
+	if (err)
+		goto err_detach_rsrc;
+
+	err = otx2_register_dl(priv);
+	if (err)
+		goto err_detach_rsrc;
+
+	return 0;
+
+err_detach_rsrc:
+	if (priv->hw.lmt_info)
+		free_percpu(priv->hw.lmt_info);
+	if (test_bit(CN10K_LMTST, &priv->hw.cap_flag))
+		qmem_free(priv->dev, priv->dync_lmt);
+	otx2_detach_resources(&priv->mbox);
+	otx2_disable_mbox_intr(priv);
+	otx2_pfaf_mbox_destroy(priv);
+	pci_free_irq_vectors(pdev);
+err_release_regions:
+	pci_set_drvdata(pdev, NULL);
+	pci_release_regions(pdev);
+	return err;
+}
+
+static void rvu_rep_remove(struct pci_dev *pdev)
+{
+	struct otx2_nic *priv = pci_get_drvdata(pdev);
+
+	otx2_unregister_dl(priv);
+	if (!(priv->flags & OTX2_FLAG_INTF_DOWN))
+		rvu_rep_destroy(priv);
+	otx2_detach_resources(&priv->mbox);
+	if (priv->hw.lmt_info)
+		free_percpu(priv->hw.lmt_info);
+	if (test_bit(CN10K_LMTST, &priv->hw.cap_flag))
+		qmem_free(priv->dev, priv->dync_lmt);
+	otx2_disable_mbox_intr(priv);
+	otx2_pfaf_mbox_destroy(priv);
+	pci_free_irq_vectors(priv->pdev);
+	pci_set_drvdata(pdev, NULL);
+	pci_release_regions(pdev);
+}
+
+static struct pci_driver rvu_rep_driver = {
+	.name = DRV_NAME,
+	.id_table = rvu_rep_id_table,
+	.probe = rvu_rep_probe,
+	.remove = rvu_rep_remove,
+	.shutdown = rvu_rep_remove,
+};
+
+static int __init rvu_rep_init_module(void)
+{
+	return pci_register_driver(&rvu_rep_driver);
+}
+
+static void __exit rvu_rep_cleanup_module(void)
+{
+	pci_unregister_driver(&rvu_rep_driver);
+}
+
+module_init(rvu_rep_init_module);
+module_exit(rvu_rep_cleanup_module);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
new file mode 100644
index 000000000000..38446b3e4f13
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell RVU REPRESENTOR driver
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+
+#ifndef REP_H
+#define REP_H
+
+#include <linux/pci.h>
+
+#include "otx2_reg.h"
+#include "otx2_txrx.h"
+#include "otx2_common.h"
+
+#define PCI_DEVID_RVU_REP	0xA0E0
+
+#define RVU_MAX_REP	OTX2_MAX_CQ_CNT
+
+struct rep_stats {
+	u64 rx_bytes;
+	u64 rx_frames;
+	u64 rx_drops;
+	u64 rx_mcast_frames;
+
+	u64 tx_bytes;
+	u64 tx_frames;
+	u64 tx_drops;
+};
+
+struct rep_dev {
+	struct otx2_nic *mdev;
+	struct net_device *netdev;
+	struct rep_stats stats;
+	struct delayed_work stats_wrk;
+	struct devlink_port dl_port;
+	struct otx2_flow_config *flow_cfg;
+#define RVU_REP_VF_INITIALIZED BIT_ULL(0)
+	u64 flags;
+	u16 rep_id;
+	u16 pcifunc;
+	u8 mac[ETH_ALEN];
+};
+
+static inline bool otx2_rep_dev(struct pci_dev *pdev)
+{
+	return pdev->device == PCI_DEVID_RVU_REP;
+}
+
+int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack);
+void rvu_rep_destroy(struct otx2_nic *priv);
+int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info);
+#endif /* REP_H */