path: root/drivers
2019-07-09net: flow_offload: rename tc_cls_flower_offload to flow_cls_offloadPablo Neira Ayuso
And any other existing fields in this structure that refer to tc. Specifically: * tc_cls_flower_offload_flow_rule() to flow_cls_offload_flow_rule(). * TC_CLSFLOWER_* to FLOW_CLS_*. * tc_cls_common_offload to flow_cls_common_offload. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: flow_offload: add flow_block_cb_is_busy() and use itPablo Neira Ayuso
This patch adds a function to check if flow block callback is already in use. Call this new function from flow_block_cb_setup_simple() and from drivers. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
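A rough sketch of how a driver can use the new check to refuse binding the same callback twice; the driver names and the per-driver list below are illustrative, not taken from any specific driver:

    /* hypothetical per-driver list of flow blocks already bound */
    static LIST_HEAD(mydrv_block_cb_list);

    static int mydrv_setup_block(struct mydrv_priv *priv,
                                 struct flow_block_offload *f)
    {
            /* reject a callback/ident pair that is already in use */
            if (flow_block_cb_is_busy(mydrv_setup_tc_block_cb, priv,
                                      &mydrv_block_cb_list))
                    return -EBUSY;
            /* ... allocate and register the flow_block_cb as usual ... */
            return 0;
    }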
2019-07-09drivers: net: use flow block APIPablo Neira Ayuso
This patch updates flow_block_cb_setup_simple() to use the flow block API. Several drivers are also adjusted to use it. This patch introduces the per-driver list of flow blocks to account for blocks that are already in use. Remove tc_block_offload alias. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: flow_offload: rename TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_*Pablo Neira Ayuso
Rename from TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_* and remove temporary tcf_block_binder_type alias. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: flow_offload: rename TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BINDPablo Neira Ayuso
Rename from TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND and remove temporary tc_block_command alias. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: flow_offload: add flow_block_cb_setup_simple()Pablo Neira Ayuso
Most drivers do the same thing to set up the flow block callbacks, this patch adds a helper function to do this. This preparation patch reduces the number of changes to adapt the existing drivers to use the flow block callback API. This new helper function takes a flow block list per-driver, which is set to NULL until this driver list is used. This patch also introduces the flow_block_command and flow_block_binder_type enumerations, which are renamed to use FLOW_BLOCK_* in follow up patches. There are three definitions (aliases) in order to reduce the number of updates in this patch, which go away once drivers are fully adapted to use this flow block API. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
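For most drivers the TC_SETUP_BLOCK case of ndo_setup_tc() then collapses to a single call into the helper. A minimal sketch, assuming a hypothetical driver with its own block list (the mydrv_* names are placeholders):

    static LIST_HEAD(mydrv_block_cb_list);   /* per-driver flow block list */

    static int mydrv_setup_tc(struct net_device *dev, enum tc_setup_type type,
                              void *type_data)
    {
            struct mydrv_priv *priv = netdev_priv(dev);

            switch (type) {
            case TC_SETUP_BLOCK:
                    /* bind/unbind the classifier callback for ingress blocks */
                    return flow_block_cb_setup_simple(type_data,
                                                      &mydrv_block_cb_list,
                                                      mydrv_setup_tc_block_cb,
                                                      priv, priv, true);
            default:
                    return -EOPNOTSUPP;
            }
    }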
2019-07-09net: hisilicon: Add a tx_desc to adapt HI13X1_GMACJiangfeng Xiao
On HI13X1, the offsets and bitmaps of the tx_desc registers differ from those of other hip04_eth models, even though it is the same peripheral device. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: Add an rx_desc to adapt HI13X1_GMACJiangfeng Xiao
On HI13X1, the offsets and bitmaps of the rx_desc registers differ from those of other hip04_eth models, even though it is the same peripheral device. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: Offset buf address to adapt HI13X1_GMACJiangfeng Xiao
On HI13X1_GMAC the buffer unit size is the cache line size, which is 64 bytes, so the address we write to the buf register needs to be shifted right by 6 bits. Bit 31 of the PPE_CFG_CPU_ADD_ADDR register of HI13X1_GMAC indicates whether to release the buffer of the message; a low value indicates that the buffer is still valid. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
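A sketch of the kind of adjustment this describes when programming the buffer address; variable and field names are illustrative, only PPE_CFG_CPU_ADD_ADDR comes from the changelog:

    #if defined(CONFIG_HI13X1_GMAC)
            /* HI13X1 buffer unit is one 64-byte cache line: drop the low 6 bits */
            val = (u32)(buf_phys >> 6);
    #else
            val = (u32)buf_phys;
    #endif
            writel(val, priv->base + PPE_CFG_CPU_ADD_ADDR);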
2019-07-09net: hisilicon: Add group field to adapt HI13X1_GMACJiangfeng Xiao
In general, group is the same as the port, but some boards specify a special group for better load balancing of each processing unit. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: HI13X1_GMAC needs dreq reset firstJiangfeng Xiao
HI13X1_GMAC needs to de-assert (dreq) the soft reset first; otherwise, the subsequent initialization will not take effect. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: HI13X1_GMAC skips writing LOCAL_PAGE_REGJiangfeng Xiao
On HI13X1_GMAC, the offsets and bitmaps of the GE_TX_LOCAL_PAGE_REG register differ from those of other hip04_eth models of the same peripheral device. With the default configuration, HI13X1_GMAC also works without any writes to the GE_TX_LOCAL_PAGE_REG register. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: Cleanup for cast to restricted __be32Jiangfeng Xiao
This patch fixes the following warning from sparse: hip04_eth.c:533:23: warning: cast to restricted __be16 hip04_eth.c:533:23: warning: cast to restricted __be16 hip04_eth.c:533:23: warning: cast to restricted __be16 hip04_eth.c:533:23: warning: cast to restricted __be16 hip04_eth.c:534:23: warning: cast to restricted __be32 hip04_eth.c:534:23: warning: cast to restricted __be32 hip04_eth.c:534:23: warning: cast to restricted __be32 hip04_eth.c:534:23: warning: cast to restricted __be32 hip04_eth.c:534:23: warning: cast to restricted __be32 hip04_eth.c:534:23: warning: cast to restricted __be32 Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: Cleanup for got restricted __be32Jiangfeng Xiao
This patch fixes the following warning from sparse: hip04_eth.c:468:25: warning: incorrect type in assignment hip04_eth.c:468:25: expected unsigned int [usertype] send_addr hip04_eth.c:468:25: got restricted __be32 [usertype] hip04_eth.c:469:25: warning: incorrect type in assignment hip04_eth.c:469:25: expected unsigned int [usertype] send_size hip04_eth.c:469:25: got restricted __be32 [usertype] hip04_eth.c:470:19: warning: incorrect type in assignment hip04_eth.c:470:19: expected unsigned int [usertype] cfg hip04_eth.c:470:19: got restricted __be32 [usertype] hip04_eth.c:472:23: warning: incorrect type in assignment hip04_eth.c:472:23: expected unsigned int [usertype] wb_addr hip04_eth.c:472:23: got restricted __be32 [usertype] Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: hisilicon: Add support for HI13X1 to hip04_ethJiangfeng Xiao
Extend the hip04_eth driver to support HI13X1_GMAC. Enable it with CONFIG_HI13X1_GMAC option. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: stmmac: add support for hash table size 128/256 in dwmac4Biao Huang
1. Get the hash table size from the hw feature register, and add support for larger hash tables (128/256 entries) in dwmac4. 2. Only clear the GMAC_PACKET_FILTER bits used in this function, to avoid side effects on the bits used by other functions. stmmac selftests output log with flow control on: ethtool -t eth0 The test result is PASS The test extra info: 1. MAC Loopback 0 2. PHY Loopback -95 3. MMC Counters 0 4. EEE -95 5. Hash Filter MC 0 6. Perfect Filter UC 0 7. MC Filter 0 8. UC Filter 0 9. Flow Control 0 Signed-off-by: Biao Huang <biao.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: stmmac: dwmac4: mac address array boundary violation issueBiao Huang
The mac address array size is GMAC_MAX_PERFECT_ADDRESSES, so 'reg' should stay below it; otherwise other registers will be affected. Signed-off-by: Biao Huang <biao.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
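The shape of the fix is a simple bound on the perfect-filter index; roughly (loop context abbreviated, not the exact diff):

    netdev_for_each_uc_addr(ha, dev) {
            /* never program a slot beyond the perfect-filter array */
            if (reg >= GMAC_MAX_PERFECT_ADDRESSES)
                    break;
            dwmac4_set_umac_addr(hw, ha->addr, reg);
            reg++;
    }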
2019-07-09net: dsa: vsc73xx: fix NET_DSA and OF dependenciesArnd Bergmann
The restructuring of the driver got the dependencies wrong: without CONFIG_NET_DSA we get this build failure: WARNING: unmet direct dependencies detected for NET_DSA_VITESSE_VSC73XX Depends on [n]: NETDEVICES [=y] && HAVE_NET_DSA [=y] && OF [=y] && NET_DSA [=n] Selected by [m]: - NET_DSA_VITESSE_VSC73XX_PLATFORM [=m] && NETDEVICES [=y] && HAVE_NET_DSA [=y] && HAS_IOMEM [=y] ERROR: "dsa_unregister_switch" [drivers/net/dsa/vitesse-vsc73xx-core.ko] undefined! ERROR: "dsa_switch_alloc" [drivers/net/dsa/vitesse-vsc73xx-core.ko] undefined! ERROR: "dsa_register_switch" [drivers/net/dsa/vitesse-vsc73xx-core.ko] undefined! Add the appropriate dependencies. Fixes: 95711cd5f0b4 ("net: dsa: vsc73xx: Split vsc73xx driver") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: mvmdio: defer probe of orion-mdio if a clock is not readyJosua Mayer
Defer probing of the orion-mdio interface when getting a clock returns EPROBE_DEFER. This avoids locking up the Armada 8k SoC when mdio is used before all clocks have been enabled. Signed-off-by: Josua Mayer <josua@solid-run.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: mvmdio: print warning when orion-mdio has too many clocksJosua Mayer
Print a warning when device tree specifies more than the maximum of four clocks supported by orion-mdio. Because reading from mdio can lock up the Armada 8k when a required clock is not initialized, it is important to notify the user when a specified clock is ignored. Signed-off-by: Josua Mayer <josua@solid-run.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: mvmdio: allow up to four clocks to be specified for orion-mdioJosua Mayer
Allow up to four clocks to be specified and enabled for the orion-mdio interface, which are required by the Armada 8k and defined in armada-cp110.dtsi. Fixes a hang in probing the mvmdio driver that was encountered on the Clearfog GT 8K with all drivers built as modules, but also affects other boards such as the MacchiatoBIN. Cc: stable@vger.kernel.org Fixes: 96cb43423822 ("net: mvmdio: allow up to three clocks to be specified for orion-mdio") Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Josua Mayer <josua@solid-run.com> Signed-off-by: David S. Miller <davem@davemloft.net>
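Taken together, the three orion-mdio changes above amount to a probe-time clock loop roughly like the following; the constant, field names and exact message are assumptions based on the changelogs:

    #define ORION_MDIO_MAX_CLKS 4   /* assumed limit, per the changelog */

    for (i = 0; i < ORION_MDIO_MAX_CLKS; i++) {
            dev->clk[i] = of_clk_get(pdev->dev.of_node, i);
            if (PTR_ERR(dev->clk[i]) == -EPROBE_DEFER) {
                    /* clock exists but is not ready yet: retry probe later */
                    ret = -EPROBE_DEFER;
                    goto out_clk;
            }
            if (IS_ERR(dev->clk[i]))
                    break;                  /* no more clocks specified */
            clk_prepare_enable(dev->clk[i]);
    }

    /* warn if the device tree lists more clocks than we can enable */
    if (!IS_ERR(of_clk_get(pdev->dev.of_node, ORION_MDIO_MAX_CLKS)))
            dev_warn(&pdev->dev, "unsupported number of clocks, limiting to %d\n",
                     ORION_MDIO_MAX_CLKS);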
2019-07-09net: netsec: start using buffers if page_pool registration succeededIlias Apalodimas
The current driver starts using page_pool buffers before calling xdp_rxq_info_reg_mem_model(). Start using the buffers after the registration succeeded, so we won't have to call page_pool_request_shutdown() in case of failure Fixes: 5c67bf0ec4d0 ("net: netsec: Use page_pool API") Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
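The ordering the fix describes, in rough form (the ring-fill call and field names stand in for the driver's own):

    pool = page_pool_create(&pp_params);
    if (IS_ERR(pool))
            return PTR_ERR(pool);

    /* register the page_pool as the rxq memory model first ... */
    err = xdp_rxq_info_reg_mem_model(&dring->xdp_rxq, MEM_TYPE_PAGE_POOL, pool);
    if (err)
            goto err_out;

    /* ... and only then start handing pool pages to the RX ring */
    for (i = 0; i < DESC_NUM; i++)
            fill_rx_descriptor(dring, i);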
2019-07-09net: stmmac: Introducing support for Page PoolJose Abreu
Mapping and unmapping DMA regions is a significant bottleneck in the stmmac driver, especially in the RX path. This commit introduces support for the Page Pool API and uses it in all RX queues. With this change, we get more stable throughput and some increase in bandwidth with iperf: - MAC1000 - 950 Mbps - XGMAC: 9.22 Gbps Changes from v3: - Use page_pool_destroy() (Ilias) Changes from v2: - Uncoditionally call page_pool_free() (Jesper) Changes from v1: - Use page_pool_get_dma_addr() (Jesper) - Add a comment (Jesper) - Add page_pool_free() call (Jesper) - Reintroduce sync_single_for_device (Arnd / Ilias) Signed-off-by: Jose Abreu <joabreu@synopsys.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09net: stmmac: Fix descriptors address being in > 32 bits address spaceJose Abreu
Commit a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing in XGMAC") introduced support for > 32 bits addressing in XGMAC, but the conversion of descriptors to dma_addr_t was left out. As some devices assign coherent memory in regions > 32 bits, we need to set the lower and upper halves of the descriptor addresses when initializing DMA channels. Luckily, this was working for me because I was assigning CMA to the < 4GB address space for performance reasons. Fixes: a993db88d17d ("net: stmmac: Enable support for > 32 Bits addressing in XGMAC") Signed-off-by: Jose Abreu <joabreu@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
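The gist of the fix: once descriptors can live above 4GB, both halves of the dma_addr_t must be written when setting up a channel (register names below are placeholders):

    dma_addr_t desc_phy = rx_q->dma_rx_phy;

    /* the upper 32 bits are no longer guaranteed to be zero */
    writel(upper_32_bits(desc_phy), ioaddr + DMA_CH_RX_BASE_ADDR_HI(chan));
    writel(lower_32_bits(desc_phy), ioaddr + DMA_CH_RX_BASE_ADDR_LO(chan));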
2019-07-09net: stmmac: Implement RX Coalesce Frames settingJose Abreu
Add support for coalescing the RX path by specifying the number of frames that don't need the interrupt-on-completion bit set. This is only available when the RX Watchdog is enabled. Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jose Abreu <joabreu@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-09bnxt_en: Add page_pool_destroy() during RX ring cleanup.Michael Chan
Add page_pool_destroy() in bnxt_free_rx_rings() during normal RX ring cleanup, as Ilias has informed us that the following commit has been merged: 1da4bbeffe41 ("net: core: page_pool: add user refcnt and reintroduce page_pool_destroy") The special error handling code to call page_pool_free() can now be removed. bnxt_free_rx_rings() will always be called during normal shutdown or any error paths. Fixes: 322b87ca55f2 ("bnxt_en: add page_pool support") Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org> Cc: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Acked-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
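A rough sketch of the cleanup path after this change (the bnxt structure walk is abbreviated):

    static void bnxt_free_rx_rings(struct bnxt *bp)
    {
            int i;

            for (i = 0; i < bp->rx_nr_rings; i++) {
                    struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];

                    /* ... free XDP prog, pages and ring memory ... */

                    /* safe to call unconditionally now that page_pool has
                     * user refcounting
                     */
                    page_pool_destroy(rxr->page_pool);
                    rxr->page_pool = NULL;
            }
    }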
2019-07-09net/mlx5e: Register devlink ports for physical link, PCI PF, VFsParav Pandit
Register devlink port of physical port, PCI PF and PCI VF flavour for each PF, VF when a given devlink instance is in switchdev mode. Implement ndo_get_devlink_port callback API to make use of registered devlink ports. This eliminates ndo_get_phys_port_name() and ndo_get_port_parent_id() callbacks. Hence, remove them. An example output with 2 VFs, without a PF and single uplink port is below. $devlink port show pci/0000:06:00.0/65535: type eth netdev ens2f0 flavour physical pci/0000:05:00.0/1: type eth netdev eth1 flavour pcivf pfnum 0 vfnum 0 pci/0000:05:00.0/2: type eth netdev eth2 flavour pcivf pfnum 0 vfnum 1 Reviewed-by: Roi Dayan <roid@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: undo TLS sequence tracking when dropping the frameJakub Kicinski
If driver has to drop the TLS frame it needs to undo the TCP sequence tracking changes, otherwise device will receive segments out of order and drop them. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: avoid one of the ifdefs for TLSJakub Kicinski
Move the #ifdef CONFIG_TLS_DEVICE a little so we can eliminate the other one. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: don't leave key material in freed FW cmsg skbsJakub Kicinski
Make sure the contents of the skb which carried key material to the FW is cleared. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08net/tls: don't clear TX resync flag on errorDirk van der Merwe
Introduce a return code for the tls_dev_resync callback. When the driver TX resync fails, the kernel can retry the resync until it succeeds. This prevents drivers from attempting to offload TLS packets if the connection is known to be out of sync. We don't worry about RX resync since it will be retried naturally as more encrypted records are received. Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: count TSO segments separately for the TLS offloadJakub Kicinski
Count the number of successfully submitted TLS segments, not skbs. This will make it easier to compare the TLS encryption count against other counters. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: ccm: increase message limitsDirk van der Merwe
Increase the batch limit to consume small message bursts more effectively. Practically, the effect on the 'add' messages is not significant since the mailbox is sized such that the 'add' messages are still limited to the same order of magnitude that it was originally set for. Furthermore, increase the queue size limit to 1024 entries. This further improves the handling of bursts of small control messages. Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: use unique connection ids instead of 4-tuple for TXJakub Kicinski
Connection 4-tuple reuse is slightly problematic - the TLS socket and context do not get destroyed until all the associated skbs have left the system and all references are released. This leads to a stale connection entry in the device, preventing the addition of a new one if the 4 tuple is reused quickly enough. Instead of using the real 4 tuple as the key, use a unique ID. Set the protocol to TCP and the port to 0 to ensure no collisions with real connections. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: move setting ipver_vlan to a helperJakub Kicinski
Long lines are ugly. No functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: ignore queue limits for delete commandsJakub Kicinski
We need to do our best not to drop delete commands, otherwise we will have stale entries in the connection table. Ignore the control message queue limits for delete commands. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08net: phy: Make use of linkmode_mod_bit helperFuqian Huang
linkmode_mod_bit is introduced as a helper function to set/clear bits in a linkmode. Replace the if else code structure with a call to the helper linkmode_mod_bit. Signed-off-by: Fuqian Huang <huangfq.daxian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
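The kind of simplification this enables, shown as a before/after sketch on a PHY capability bit:

    /* before: open-coded set/clear */
    if (val)
            linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported);
    else
            linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported);

    /* after: one call, the third argument selects set vs. clear */
    linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported, val);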
2019-07-08Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/netDavid S. Miller
Two cases of overlapping changes, nothing fancy. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08bonding: fix value exported by Netlink for peer_notif_delayVincent Bernat
IFLA_BOND_PEER_NOTIF_DELAY was set to the value of downdelay instead of peer_notif_delay. After this change, the correct value is exported. Fixes: 07a4ddec3ce9 ("bonding: add an option to specify a delay between peer notifications") Signed-off-by: Vincent Bernat <vincent@bernat.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
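The fix is essentially a one-liner in the netlink fill routine; a sketch (the miimon scaling mirrors how the other delay attributes are exported and is an assumption here):

    /* was exporting bond->params.downdelay by mistake */
    if (nla_put_u32(skb, IFLA_BOND_PEER_NOTIF_DELAY,
                    bond->params.peer_notif_delay * bond->params.miimon))
            goto nla_put_failure;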
2019-07-08co-allocate socket_wq with socket itselfAl Viro
socket->wq is assign-once, set when we are initializing both the struct socket it's in and the struct socket_wq it points to. As a matter of fact, the only reason for separate allocation was the ability to RCU-delay freeing of socket_wq. RCU-delaying the freeing of socket itself gets rid of that need, so we can just fold struct socket_wq into the end of struct socket and simplify life both for sock_alloc_inode() (one allocation instead of two) and for the tun/tap oddballs, where we used to embed struct socket and struct socket_wq into the same structure (now - embedding just the struct socket). Note that the reference to struct socket_wq in struct sock does remain a reference - that's unchanged. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08net: pasemi: fix a use-after-free in pasemi_mac_phy_init()Wen Yang
The phy_dn variable is still being used in of_phy_connect() after the of_node_put() call, which may result in use-after-free. Fixes: 1dd2d06c0459 ("net: Rework pasemi_mac driver to use of_mdio infrastructure") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: netdev@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
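The shape of the fix: keep the node reference alive across of_phy_connect() and drop it only afterwards (handler name and interface mode below are illustrative):

    phydev = of_phy_connect(mac->netdev, phy_dn, &pasemi_adjust_link, 0,
                            PHY_INTERFACE_MODE_SGMII);
    of_node_put(phy_dn);            /* release the node only after it was used */
    if (!phydev) {
            dev_err(&mac->pdev->dev, "Could not attach to phy\n");
            return -ENODEV;
    }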
2019-07-08net: axienet: fix a potential double free in axienet_probe()Wen Yang
There is a possible use-after-free issue in the axienet_probe(): 1701: np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0); 1702: if (np) { ... 1787: of_node_put(np); ---> released here 1788: lp->eth_irq = platform_get_irq(pdev, 0); 1789: } else { ... 1801: } 1802: if (IS_ERR(lp->dma_regs)) { ... 1805: of_node_put(np); ---> double released here 1806: goto free_netdev; 1807: } We solve this problem by removing the unnecessary of_node_put(). Fixes: 28ef9ebdb64c ("net: axienet: make use of axistream-connected attribute optional") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: Anirudha Sarangi <anirudh@xilinx.com> Cc: John Linn <John.Linn@xilinx.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Michal Simek <michal.simek@xilinx.com> Cc: Robert Hancock <hancock@sedsystems.ca> Cc: netdev@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Robert Hancock <hancock@sedsystems.ca> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08net: stmmac: enable clause 45 mdio supportKweh Hock Leong
DWMAC4 is capable of supporting clause 45 MDIO communication. This patch enables the feature in stmmac_mdio_write() and stmmac_mdio_read() by following the phy_write_mmd() and phy_read_mmd() mdiobus read/write implementation format. Reviewed-by: Li, Yifan <yifan2.li@intel.com> Signed-off-by: Kweh Hock Leong <hock.leong.kweh@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
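In the mdiobus framework a clause-45 access is signalled by the MII_ADDR_C45 flag in the register argument, with the MMD device address carried in bits 20:16. A bus driver's read path typically branches on it like this (a sketch, not stmmac's exact code):

    static int mdio_bus_read(struct mii_bus *bus, int phyaddr, int phyreg)
    {
            if (phyreg & MII_ADDR_C45) {
                    int devad = (phyreg >> 16) & 0x1f;  /* MMD device address */
                    int reg   = phyreg & 0xffff;        /* register within the MMD */

                    /* program the clause-45 address phase, then issue the read */
                    /* ... */
            }

            /* clause-22 access */
            /* ... */
            return 0;
    }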
2019-07-08net: mvpp2: cls: Add support for ETHER_FLOWMaxime Chevallier
Users can specify classification actions based on the 'ether' flow type. In that case, the rule applies to all ethernet traffic, superseding flows such as 'udp4' or 'tcp6'. Add support for this flow type in the PPv2 classifier, by mapping the ETHER_FLOW value to the corresponding entries in the classifier. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08net: mvpp2: cls: Report an error for unsupported flow typesMaxime Chevallier
Add a missing check to detect flow types that we don't support, so that the user can be informed of this. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08nfp: tls: fix error return code in nfp_net_tls_add()Wei Yongjun
Fix to return negative error code -EINVAL from the error handling case instead of 0, as done elsewhere in this function. Fixes: 1f35a56cf586 ("nfp: tls: add/delete TLS TX connections") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08bnxt_en: add page_pool supportAndy Gospodarek
This removes contention over page allocation for XDP_REDIRECT actions by adding page_pool support per queue for the driver. The performance for XDP_REDIRECT actions scales linearly with the number of cores performing redirect actions when using the page pools instead of the standard page allocator. v2: Fix up the error path from XDP registration, noted by Ilias Apalodimas. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08bnxt_en: optimized XDP_REDIRECT supportAndy Gospodarek
This adds basic support for XDP_REDIRECT in the bnxt_en driver. The next patch adds the more optimized page pool support. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08bnxt_en: Refactor __bnxt_xmit_xdp().Michael Chan
__bnxt_xmit_xdp() is used by XDP_TX and ethtool loopback packet transmit. Refactor it so that it can be re-used by the XDP_REDIRECT logic. Restructure the TX interrupt handler logic to cleanly separate XDP_TX logic in preparation for XDP_REDIRECT. Acked-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-07-08bnxt_en: rename some xdp functionsAndy Gospodarek
Rename bnxt_xmit_xdp to __bnxt_xmit_xdp to get ready for XDP_REDIRECT support and to reduce confusion/namespace collisions. Signed-off-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>