path: root/drivers/net/ethernet/intel
Age  Commit message  Author
2021-12-31  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Alexei Starovoitov says:

====================
pull-request: bpf-next 2021-12-30

The following pull-request contains BPF updates for your *net-next* tree.

We've added 72 non-merge commits during the last 20 day(s) which contain a total of 223 files changed, 3510 insertions(+), 1591 deletions(-).

The main changes are:

1) Automatic setrlimit in libbpf when bpf is memcg's in the kernel, from Andrii.
2) Beautify and de-verbose verifier logs, from Christy.
3) Composable verifier types, from Hao.
4) bpf_strncmp helper, from Hou.
5) bpf.h header dependency cleanup, from Jakub.
6) get_func_[arg|ret|arg_cnt] helpers, from Jiri.
7) Sleepable local storage, from KP.
8) Extend kfunc with PTR_TO_CTX, PTR_TO_MEM argument support, from Kumar.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-30  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
  commit 077cdda764c7 ("net/mlx5e: TC, Fix memory leak with rules with internal port")
  commit 31108d142f36 ("net/mlx5: Fix some error handling paths in 'mlx5e_tc_add_fdb_flow()'")
  commit 4390c6edc0fb ("net/mlx5: Fix some error handling paths in 'mlx5e_tc_add_fdb_flow()'")
  https://lore.kernel.org/all/20211229065352.30178-1-saeed@kernel.org/

net/smc/smc_wr.c
  commit 49dc9013e34b ("net/smc: Use the bitmap API when applicable")
  commit 349d43127dac ("net/smc: fix kernel panic caused by race of smc_sock")
  bitmap_zero()/memset() is removed by the fix

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-30  ice: Add flow director support for channel mode  (Kiran Patil)
Add support for enabling flow-director filters when multiple TCs are configured. Flow director filters can be configured using ethtool (the --config-ntuple option). When multiple TCs are configured, each TC is mapped to a unique HW VSI, so the VSI corresponding to the queue used in the filter is identified and the flow director context is updated with the correct VSI while configuring the ntuple filter in HW. Signed-off-by: Kiran Patil <kiran.patil@intel.com> Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com> Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com> Tested-by: Bharathi Sreenivas <bharathi.sreenivas@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-12-29  igb: support EXTTS on 82580/i354/i350  (Ruud Bos)
Add support for the EXTTS PTP pin function on 82580/i354/i350 based adapters. Because the time registers of these adapters do not have the nice split at second rollovers that the i210 has, the implementation is slightly more complex than the i210 implementation. Signed-off-by: Ruud Bos <kernel.hbk@gmail.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-29  igb: support PEROUT on 82580/i354/i350  (Ruud Bos)
Add support for the PEROUT PTP pin function on 82580/i354/i350 based adapters. Because the time registers of these adapters do not have the nice split at second rollovers that the i210 has, the implementation is slightly more complex than the i210 implementation. Signed-off-by: Ruud Bos <kernel.hbk@gmail.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-29  igb: move PEROUT and EXTTS isr logic to separate functions  (Ruud Bos)
Remove code duplication in the tsync interrupt handler function by moving this logic to separate functions. This keeps the interrupt handler readable and allows the new functions to be extended for adapter types other than i210. Signed-off-by: Ruud Bos <kernel.hbk@gmail.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-29  igb: move SDP config initialization to separate function  (Ruud Bos)
Allow reuse of SDP config struct initialization by moving it to a separate function. Signed-off-by: Ruud Bos <kernel.hbk@gmail.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-29  net: Don't include filter.h from net/sock.h  (Jakub Kicinski)
sock.h is pretty heavily used (5k objects rebuilt on x86 after it's touched). We can drop the include of filter.h from it and add a forward declaration of struct sk_filter instead. This decreases the number of rebuilt objects when bpf.h is touched from ~5k to ~1k. There are a lot of missing includes that this was masking, primarily in networking this time. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com> Acked-by: Stefano Garzarella <sgarzare@redhat.com> Link: https://lore.kernel.org/bpf/20211229004913.513372-1-kuba@kernel.org
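The mechanics of the dependency cut are plain C: a struct member that is only a pointer needs the type name, not the full definition. A minimal sketch of the pattern with made-up names (example_sock is illustrative, not the actual sock.h change):

        struct sk_filter;                 /* forward declaration instead of #include <linux/filter.h> */

        struct example_sock {             /* hypothetical structure, stands in for the sock.h case */
                struct sk_filter *filter; /* a pointer member only needs the type name */
        };

Because no user of this header dereferences struct sk_filter through it, touching filter.h no longer forces a rebuild of everything that includes the header.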
2021-12-28  igc: Fix TX timestamp support for non-MSI-X platforms  (James McLaughlin)
Time synchronization was not properly enabled on non-MSI-X platforms. Fixes: 2c344ae24501 ("igc: Add support for TX timestamping") Signed-off-by: James McLaughlin <james.mclaughlin@qsc.com> Reviewed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  igc: Do not enable crosstimestamping for i225-V models  (Vinicius Costa Gomes)
It was reported that when PCIe PTM is enabled, some lockups could be observed with some integrated i225-V models. While the issue is investigated, we can disable crosstimestamp for those models and see no loss of functionality, because those models don't have any support for time synchronization. Fixes: a90ec8483732 ("igc: Add support for PTP getcrosststamp()") Link: https://lore.kernel.org/all/924175a188159f4e03bd69908a91e606b574139b.camel@gmx.de/ Reported-by: Stefan Dietrich <roots@gmx.de> Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  ixgbevf: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. ixgbevf driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
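As a reference for the pattern these conversions apply on the Rx path, here is a minimal hedged sketch; the function and variable names are illustrative, not the driver's actual code:

        #include <linux/skbuff.h>

        /* Illustrative Rx-path helper; names and surrounding driver logic are made up. */
        static struct sk_buff *example_rx_build_skb(void *hdr_addr, unsigned int truesize,
                                                    unsigned int headroom, unsigned int size)
        {
                struct sk_buff *skb;

                /* napi_build_skb() takes the skbuff_head from the per-CPU NAPI cache
                 * instead of allocating a fresh one as build_skb() does.
                 */
                skb = napi_build_skb(hdr_addr, truesize);
                if (unlikely(!skb))
                        return NULL;

                skb_reserve(skb, headroom);     /* keep the headroom in front of the packet data */
                __skb_put(skb, size);           /* mark the received bytes as used */

                return skb;
        }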
2021-12-28  ixgbe: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. ixgbe driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  igc: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. igc driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  igb: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. igb driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  ice: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. ice driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  iavf: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. iavf driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  i40e: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx. i40e driver runs Tx completion polling cycle right before the Rx one and uses napi_consume_skb() to feed the cache with skbuff_heads of completed entries, so it's never empty and always warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  e1000: switch to napi_build_skb()  (Alexander Lobakin)
napi_build_skb() reuses per-cpu NAPI skbuff_head cache in order to save some cycles on freeing/allocating skbuff_heads on every new Rx or completed Tx element. e1000 driver runs Tx completion polling cycle right before the Rx one. Now that e1000 uses napi_consume_skb() to put skbuff_heads of completed entries into the cache, it will never be empty and will always be warm at that moment. Switch to the napi_build_skb() to relax mm pressure on heavy Rx and increase throughput. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-28  e1000: switch to napi_consume_skb()  (Alexander Lobakin)
In order to take the best from per-cpu NAPI skbuff_head caches and CPU cycles, let's switch from dev_kfree_skb_any(), which passes skb back to the mm layer, to napi_consume_skb(), which feeds those caches on non-zero budget instead (falls back to the former on 0). Do the replacement in e1000_unmap_and_free_tx_resource(). There are 4 call sites of this function throughout the driver:
 * e1000_clean_tx_ring(). Slowpath, process context, cleans the whole Tx ring on ifdown. Use budget of 0 here;
 * e1000_tx_map(). Hotpath, net Tx softirq, unmaps the buffers in case of error. Use 0 as well;
 * e1000_clean_tx_irq(). Hotpath, NAPI Tx completion polling cycle. As the driver doesn't count completed Tx entries towards the NAPI budget, just use the poll budget of 64 to utilize caches.
Apart from being a preparation for switching to napi_build_skb(), this is useful on its own as well, as napi_consume_skb() flushes skb caches by batches of 32 instead of one-at-a-time. Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
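A minimal sketch of the budget rule described above; the wrapper name is made up, and only napi_consume_skb() and its budget semantics are the point:

        #include <linux/skbuff.h>

        /* Illustrative wrapper only; shows the budget convention, not the real
         * e1000_unmap_and_free_tx_resource() body.
         */
        static void example_free_tx_skb(struct sk_buff *skb, int napi_budget)
        {
                /* budget == 0 (ring cleanup on ifdown, Tx map error unwind): behaves
                 * like dev_kfree_skb_any(); budget > 0 (NAPI Tx completion): the
                 * skbuff_head is recycled into the per-CPU NAPI cache, flushed in
                 * batches of 32.
                 */
                napi_consume_skb(skb, napi_budget);
        }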
2021-12-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)
include/net/sock.h
  commit 8f905c0e7354 ("inet: fully convert sk->sk_rx_dst to RCU rules")
  commit 43f51df41729 ("net: move early demux fields close to sk_refcnt")
  https://lore.kernel.org/all/20211222141641.0caa0ab3@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-22  ice: trivial: fix odd indenting  (Jesse Brandeburg)
Fix an odd indent where some code was left indented, which causes smatch to warn: "ice_log_pkg_init() warn: inconsistent indenting". While here, for consistency, add a break after the default case. This commit has a Fixes: tag, but we caught this while it was only in net-next. Fixes: 247dd97d713c ("ice: Refactor status flow for DDP load") Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Link: https://lore.kernel.org/r/20211221230538.2546315-1-jesse.brandeburg@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-22  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue  (Jakub Kicinski)
Tony Nguyen says:

====================
100GbE Intel Wired LAN Driver Updates 2021-12-21

This series contains updates to the ice driver only.

Karol modifies the reset flow to correct issues with PTP reset.

Jake extends PTP support for E822 based devices. This includes a few cleanup patches that fix some minor issues. In addition, there are some slight refactors to ease the addition of E822 support, followed by adding the new hardware implementation in ice_ptp_hw.c. There are a few major differences with E822 support compared to E810 support:

*) The E822 device has a Clock Generation Unit which must be initialized in order to generate proper clock frequencies on the output that drives the PTP hardware clock registers.

*) The E822 PHY is a bit different and requires a more complex initialization procedure which must be rerun any time the link configuration changes.

*) The E822 devices support enhanced timestamp calibration by making use of a process called Vernier offset measurement. This allows the hardware to measure phase offset related to the PHY clocks for Serdes and FEC, reducing the inaccuracy of the timestamp relative to the actual packet transmission and receipt. Making use of this requires data gathered from the first transmitted and received packets, and waiting for the PHY to complete the calibration measurements. This is done as part of a new kthread, ov_work. Note that to avoid delay in enabling timestamps, we start the PHY in 'bypass' mode which allows timestamps to be captured without the Vernier calibration measurement. Once the first packets have been sent and received, we then complete the calibration setup and exit bypass mode and begin using the more precise timestamps. According to the datasheet, timestamps without calibration data can be incorrect relative to actual receipt or transmission by up to 1 clock cycle (~1.25 nanoseconds), while calibrated timestamps should be correct to within 1/8th of a clock cycle (~0.15 nanoseconds).

*) E822 devices support crosstimestamping via PCIe PTM, which we enable when available on the platform.

There is a fair amount of logic required to perform PHY and CGU initialization, which is the vast majority of the new code, but it is fairly self contained within ice_ptp_hw.c, with the exception of monitoring for offset validity being handled by a kthread.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  ice: support crosstimestamping on E822 devices if supported
  ice: exit bypass mode once hardware finishes timestamp calibration
  ice: ensure the hardware Clock Generation Unit is configured
  ice: implement basic E822 PTP support
  ice: convert clk_freq capability into time_ref
  ice: introduce ice_ptp_init_phc function
  ice: use 'int err' instead of 'int status' in ice_ptp_hw.c
  ice: PTP: move setting of tstamp_config
  ice: introduce ice_base_incval function
  ice: Fix E810 PTP reset flow
====================

Link: https://lore.kernel.org/r/20211221174845.3063640-1-anthony.l.nguyen@intel.com
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-21  fm10k: Fix syntax errors in comments  (Xiang wangx)
Delete the redundant word 'by'. Signed-off-by: Xiang wangx <wangxiang@cdjrlc.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igbvf: Refactor trace  (Karen Sornek)
Refactoring "PF still resetting" message, because previous version looked like a bug - it informed about changes that worked as designed but might confuse users. Changes requested to make message more user-friendly. Signed-off-by: Karen Sornek <karen.sornek@intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igb: remove never changed variable `ret_val'  (Jason Wang)
The variable used for the return status in the `igb_write_xmdio_reg' function is never changed, and the function just needs to return 0. Thus, `ret_val' can be removed and 0 returned directly at the end of `igb_write_xmdio_reg'. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igc: Remove obsolete define  (Sasha Neftin)
The 'MII_CR_FULL_DUPLEX' define is not in use. This patch tidies up the obsolete define. Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igc: Remove obsolete mask  (Sasha Neftin)
'IGC_CTRL_EXT_LINK_MODE_MASK' is not in use. This patch tidies up the obsolete define. Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igc: Remove obsolete nvm type  (Sasha Neftin)
i225 devices use only the spi nvm type. This patch tidies up the obsolete nvm types. Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igc: Remove unused phy type  (Sasha Neftin)
The _phy_none type is not in use. Clean up the code accordingly and get rid of the unused enum line. Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  igc: Remove unused _I_PHY_ID define  (Sasha Neftin)
_I_PHY_ID is not in use. Clean up the code accordingly and get rid of the unused define. Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Tested-by: Nechama Kraus <nechamax.kraus@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  ice: support crosstimestamping on E822 devices if supported  (Jacob Keller)
E822 devices on supported platforms can generate a cross timestamp between the platform ART and the device time. This process allows for very precise measurement of the difference between the PTP hardware clock and the platform time. This is only supported if we know the TSC frequency relative to ART, so we do not enable this unless the boot CPU has a known TSC frequency (as required by convert_art_ns_to_tsc). Because PCIe PTM support is not available on all platforms, introduce CONFIG_ICE_HWTS and make it depend on X86 where we know the support exists. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  ice: exit bypass mode once hardware finishes timestamp calibration  (Jacob Keller)
Once the E822 device has sent and received one packet, the hardware computes the internal delay of the PHY using a process known as Vernier calibration. This calibration calculates a more accurate offset for the Tx and Rx timestamps. To make use of this offset, we need to exit bypass mode. This cannot be done until the PHY has completed offset calibration, as indicated by the offset valid bits. To handle this, introduce a kthread work item which polls the offset valid bits every few milliseconds to see whether it is safe to exit bypass mode. Once we have finished calibrating the offsets, we can program the total Tx and Rx offset registers and turn off the bypass bit. This allows the hardware to include the more precise Vernier calibration offset, and improves the timestamp precision. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
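The polling arrangement can be pictured with the generic kthread-worker API; everything below (worker, work item, and the offsets-valid check) is a hypothetical illustration of a self-requeueing delayed work item, not the driver's actual ov_work implementation:

        #include <linux/err.h>
        #include <linux/jiffies.h>
        #include <linux/kthread.h>

        static struct kthread_worker *example_worker;
        static struct kthread_delayed_work example_ov_work;

        static bool example_offsets_valid(void)
        {
                return false;   /* stand-in for reading the PHY offset-valid bits */
        }

        static void example_ov_work_fn(struct kthread_work *work)
        {
                if (example_offsets_valid()) {
                        /* program the total Tx/Rx offset registers and clear the bypass bit */
                        return;
                }

                /* not calibrated yet: poll again in a few milliseconds */
                kthread_queue_delayed_work(example_worker, &example_ov_work,
                                           msecs_to_jiffies(10));
        }

        static int example_ov_start(void)
        {
                example_worker = kthread_create_worker(0, "example_ov");
                if (IS_ERR(example_worker))
                        return PTR_ERR(example_worker);

                kthread_init_delayed_work(&example_ov_work, example_ov_work_fn);
                kthread_queue_delayed_work(example_worker, &example_ov_work, 0);
                return 0;
        }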
2021-12-21  ice: ensure the hardware Clock Generation Unit is configured  (Jacob Keller)
The E822 device has a Clock Generation Unit (CGU) responsible for determining the clock frequency that drives the timers. Ensure this function is initialized when bringing up the PTP support, so that the clock has a known frequency. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  ice: implement basic E822 PTP support  (Jacob Keller)
Implement support for the basic operations needed to enable the PTP hardware clock on E822 devices. This includes implementations for the various PHY access functions, as well as the ability to start and stop the PHY timers. This is different from the E810 device because the configuration depends on link speed, so we cannot just start the PHYs immediately. We must wait until the link is up to get proper values for the speed based initialization. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  ice: convert clk_freq capability into time_ref  (Jacob Keller)
Convert the clk_freq value into the associated time_ref frequency value for E822 devices. This simplifies determining the time reference value for the clock. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  ice: introduce ice_ptp_init_phc function  (Jacob Keller)
When we enable support for E822 devices, there are some additional steps required to initialize the PTP hardware clock. To make this easier to implement as device-specific behavior, refactor the register setups in ice_ptp_init_owner into a new ice_ptp_init_phc function defined in ice_ptp_hw.c. This function will have a common section and an E810-specific sub-implementation. This will enable easily extending the functionality to cover the E822-specific setup required to initialize the hardware clock generation unit. It also makes it clear which steps are E810 specific vs which ones are necessary for all ice devices. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
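The shape of that split is a common preamble followed by a per-family dispatch; a rough sketch with placeholder names (the real function operates on the driver's hw structure, not this example_hw):

        /* Placeholder types and helpers purely to illustrate the refactor's shape. */
        struct example_hw {
                bool is_e810;
        };

        static int example_init_phc_e810(struct example_hw *hw)
        {
                /* E810-only register setup would go here */
                return 0;
        }

        static int example_init_phc_e822(struct example_hw *hw)
        {
                /* E822-only setup, e.g. configuring the Clock Generation Unit */
                return 0;
        }

        static int example_ptp_init_phc(struct example_hw *hw)
        {
                /* common initialization shared by all ice devices goes first */

                return hw->is_e810 ? example_init_phc_e810(hw)
                                   : example_init_phc_e822(hw);
        }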
2021-12-21  ice: use 'int err' instead of 'int status' in ice_ptp_hw.c  (Jacob Keller)
The ice_ptp_hw.c file introduced a bunch of uses of "int status" instead of the more traditional "int err" or "int ret". These are actually traditional Linux error codes (as opposed to the recently removed ice_status enumeration values). We're about to add a bunch of new functions to ice_ptp_hw.c. It's normally preferred in the ice driver to use "int ret" or "int err" when dealing with error code values. Instead of making the new functions use "int status", let's just fix all of ice_ptp_hw.c to use "int err". This will match the new functions and ensures a consistent style across at least the PTP related files. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-21  ice: PTP: move setting of tstamp_config  (Jacob Keller)
The tstamp_config structure is being set inside of ice_ptp_cfg_timestamp, which is the function used to set Tx and Rx timestamping during initialization. This function is also used in order to set the PHY port timestamping status. However, it makes sense to always set the tstamp_config directly whenever the ice_set_tx_tstamp or ice_set_rx_tstamp functions are called. Move assignment of tstamp_config into the related functions and out of ice_ptp_cfg_timestamp. Now that we assign the timestamp mode in the relevant functions, we no longer modify the config value in ice_set_timestamp_mode. In turn, we no longer want to copy that config value into the PF cached structure. Instead, this is now the source of truth for actual configuration. On success of ice_set_timestamp_mode, copy the real configured mode back to report it out to userspace. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
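In effect the enable/disable helpers become the single writer of the cached configuration. A small sketch of that idea using the standard hwtstamp_config fields; the structure and function names are simplified stand-ins, not the driver's own:

        #include <linux/net_tstamp.h>

        /* Simplified stand-in; the real driver keeps this inside its PF/PTP structure. */
        struct example_ptp {
                struct hwtstamp_config tstamp_config;
        };

        static void example_set_tx_tstamp(struct example_ptp *ptp, bool on)
        {
                /* ... enable or disable Tx timestamping in hardware ... */

                /* keep the cached config in sync right where the mode is applied */
                ptp->tstamp_config.tx_type = on ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
        }

        static void example_set_rx_tstamp(struct example_ptp *ptp, bool on)
        {
                /* ... enable or disable Rx timestamping in hardware ... */

                ptp->tstamp_config.rx_filter = on ? HWTSTAMP_FILTER_ALL :
                                                    HWTSTAMP_FILTER_NONE;
        }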
2021-12-21  ice: introduce ice_base_incval function  (Jacob Keller)
A future change will add additional possible increment values for the E822 device support. To handle this, we want to look up the increment value to use instead of hard coding it to the nominal value for E810 devices. Introduce ice_base_incval as a function to get the best nominal increment value to use. For now, it just returns the E810 value, but will be refactored in the future to look up the value based on the device type and configured clock frequency. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
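A hedged sketch of the described helper shape; the constant, enum, and function names below are placeholders rather than the driver's actual symbols, and the value is illustrative only:

        #include <linux/types.h>

        /* Placeholder for the E810 nominal increment; the real driver defines its own macro. */
        #define EXAMPLE_NOMINAL_INCVAL_E810     0x100000000ULL

        enum example_clk_src { EXAMPLE_CLK_E810, EXAMPLE_CLK_E822 };

        /* Return the nominal increment for the device/clock in use. Today only the
         * E810 value exists; an E822 case can be added later based on the configured
         * time reference frequency.
         */
        static u64 example_base_incval(enum example_clk_src src)
        {
                switch (src) {
                case EXAMPLE_CLK_E810:
                default:
                        return EXAMPLE_NOMINAL_INCVAL_E810;
                }
        }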
2021-12-21  ice: Fix E810 PTP reset flow  (Karol Kolacinski)
The PF reset does not reset the PHC and PHY clocks, so it's unnecessary to stop them and reinitialize them after the reset. Configuring timestamping changes the VSI fields, so it needs to be performed after the VSIs are initialized, which was not done in the case of a reset. Suggested-by: Patrick Talbert <ptalbert@redhat.com> Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com> Tested-by: Pasi Vaananen <pvaanane@redhat.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-20  igb: fix deadlock caused by taking RTNL in RPM resume path  (Heiner Kallweit)
Recent net core changes caused an issue with a few Intel drivers (reportedly igb), where taking RTNL in the RPM resume path results in a deadlock. See [0] for a bug report. I don't think the core changes are wrong, but taking RTNL in the RPM resume path isn't needed; the Intel drivers are the only ones doing this. See [1] for a discussion of the issue. The following patch changes the RPM resume path to not take RTNL. [0] https://bugzilla.kernel.org/show_bug.cgi?id=215129 [1] https://lore.kernel.org/netdev/20211125074949.5f897431@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/t/ Fixes: bd869245a3dc ("net: core: try to runtime-resume detached device in __dev_open") Fixes: f32a21376573 ("ethtool: runtime-resume netdev parent before ethtool ioctl ops") Tested-by: Martin Stolpe <martin.stolpe@gmail.com> Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://lore.kernel.org/r/20211220201844.2714498-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-12-17  iavf: Restrict maximum VLAN filters for VIRTCHNL_VF_OFFLOAD_VLAN_V2  (Brett Creeley)
For VIRTCHNL_VF_OFFLOAD_VLAN, PFs would limit the number of VLAN filters a VF was allowed to add. However, by the time the opcode failed, the VLAN netdev had already been added. VIRTCHNL_VF_OFFLOAD_VLAN_V2 added the ability for a PF to tell the VF how many VLAN filters it's allowed to add. Make changes to support that functionality. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 offload enable/disable  (Brett Creeley)
The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability added support that allows the VF to support 802.1Q and 802.1ad VLAN insertion and stripping if successfully negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS. Multiple changes were needed to support this new functionality:
1. Added new aq_required flags to support any kind of VLAN stripping and insertion offload requests via virtchnl.
2. Added the new method iavf_set_vlan_offload_features() that's used during VF initialization, VF reset, and iavf_set_features() to set the aq_required bits based on the current VLAN offload configuration of the VF's netdev.
3. Added virtchnl handling for VIRTCHNL_OP_ENABLE_STRIPPING_V2, VIRTCHNL_OP_DISABLE_STRIPPING_V2, VIRTCHNL_OP_ENABLE_INSERTION_V2, and VIRTCHNL_OP_DISABLE_INSERTION_V2.
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 hotpath  (Brett Creeley)
The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability added support that allows the PF to set the location of the Tx and Rx VLAN tag for insertion and stripping offloads. In order to support this functionality a few changes are needed:
1. Add a new method to cache the VLAN tag location based on negotiated capabilities for the Tx and Rx ring flags. This needs to be called in the initialization and reset paths.
2. Refactor the transmit hotpath to account for the new Tx ring flags. When IAVF_TXR_FLAGS_VLAN_LOC_L2TAG2 is set, then the driver needs to insert the VLAN tag in the L2TAG2 field of the transmit descriptor. When the IAVF_TXRX_FLAGS_VLAN_LOC_L2TAG1 is set, then the driver needs to use the l2tag1 field of the data descriptor (same behavior as before).
3. Refactor the iavf_tx_prepare_vlan_flags() function to simplify transmit hardware VLAN offload functionality by only depending on the skb_vlan_tag_present() function. This can be done because the OS won't request transmit offload for a VLAN unless the driver told the OS it's supported and enabled.
4. Refactor the receive hotpath to account for the new Rx ring flags and VLAN ethertypes. This requires checking the Rx ring flags and descriptor status bits to determine the location of the VLAN tag. Also, since only a single ethertype can be supported at a time, check the enabled netdev features before specifying a VLAN ethertype in __vlan_hwaccel_put_tag().
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
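A hedged sketch of the hotpath checks described in points 2 and 4 above; the ring flag name follows the description, while the structures, values, and helpers are simplified placeholders, not the driver's code:

        #include <linux/if_vlan.h>
        #include <linux/skbuff.h>
        #include <linux/types.h>

        /* Flag name taken from the description above; value and ring structure are simplified. */
        #define IAVF_TXR_FLAGS_VLAN_LOC_L2TAG2  (1U << 1)

        struct example_tx_ring {
                u32 flags;
        };

        static void example_tx_vlan(struct example_tx_ring *ring, struct sk_buff *skb)
        {
                if (!skb_vlan_tag_present(skb))
                        return;         /* the OS only asks for offload when it is enabled */

                if (ring->flags & IAVF_TXR_FLAGS_VLAN_LOC_L2TAG2) {
                        /* negotiated location: L2TAG2 of the context descriptor,
                         * carrying skb_vlan_tag_get(skb)
                         */
                } else {
                        /* default location: l2tag1 of the data descriptor */
                }
        }

        /* Rx side: report the tag with the single ethertype that is currently enabled. */
        static void example_rx_vlan(struct sk_buff *skb, u16 vlan_tag, bool use_8021ad)
        {
                __vlan_hwaccel_put_tag(skb, use_8021ad ? htons(ETH_P_8021AD) :
                                                         htons(ETH_P_8021Q), vlan_tag);
        }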
2021-12-17  iavf: Add support VIRTCHNL_VF_OFFLOAD_VLAN_V2 during netdev config  (Brett Creeley)
Based on VIRTCHNL_VF_OFFLOAD_VLAN_V2, the VF can now support more VLAN capabilities (i.e. 802.1AD offloads and filtering). In order to communicate these capabilities to the netdev layer, the VF needs to parse its VLAN capabilities based on whether it was able to negotiate VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 or neither of these. In order to support this, add the following functionality:
iavf_get_netdev_vlan_hw_features() - This is used to determine the VLAN features that the underlying hardware supports and that can be toggled off/on based on the negotiated capabilities. For example, if VIRTCHNL_VF_OFFLOAD_VLAN_V2 was negotiated, then any capability marked with VIRTCHNL_VLAN_TOGGLE can be toggled on/off by the VF. If VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, then only VLAN insertion and/or stripping can be toggled on/off.
iavf_get_netdev_vlan_features() - This is used to determine the VLAN features that the underlying hardware supports and that should be enabled by default. For example, if VIRTCHNL_VF_OFFLOAD_VLAN_V2 was negotiated, then any supported capability that has its ethertype_init field set should be enabled by default. If VIRTCHNL_VF_OFFLOAD_VLAN was negotiated, then filtering, stripping, and insertion should be enabled by default.
Also, refactor iavf_fix_features() to take into account the new capabilities. To do this, query all the supported features (enabled by default and toggleable) and make sure the requested change is supported. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 is successfully negotiated, there is no need to check VIRTCHNL_VLAN_TOGGLE here because the driver already told the netdev layer which features can be toggled via netdev->hw_features during iavf_process_config(), so only those features will be requested to change.
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  iavf: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiation  (Brett Creeley)
In order to support the new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability the VF driver needs to rework its initialization state machine and reset flow. This has to be done because successful negotiation of VIRTCHNL_VF_OFFLOAD_VLAN_V2 requires the VF driver to perform a second capability request via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS before configuring the adapter and its netdev. Add the VIRTCHNL_VF_OFFLOAD_VLAN_V2 bit when sending the VIRTCHNL_OP_GET_VF_RESOURCES message. The underlying PF will either support VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2 or neither. Both of these offloads should never be supported together. Based on this, add 2 new states to the initialization state machine:
__IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS
__IAVF_INIT_CONFIG_ADAPTER
The __IAVF_INIT_GET_OFFLOAD_VLAN_V2_CAPS state is used to request/store the new VLAN capabilities if and only if VIRTCHNL_VF_OFFLOAD_VLAN_V2 was successfully negotiated in the __IAVF_INIT_GET_RESOURCES state. The __IAVF_INIT_CONFIG_ADAPTER state is used to configure the adapter/netdev after the resource requests have finished. The VF will move into this state regardless of whether it successfully negotiated VIRTCHNL_VF_OFFLOAD_VLAN or VIRTCHNL_VF_OFFLOAD_VLAN_V2. Also, add the new flag IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS and set it during VF reset. If VIRTCHNL_VF_OFFLOAD_VLAN_V2 was successfully negotiated then the VF will request its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS during the reset. This is needed because the PF may change/modify the VF's configuration during VF reset (i.e. modifying the VF's port VLAN configuration). This also requires the VF to call netdev_update_features() since its VLAN features may change during VF reset. Make sure to call this under rtnl_lock().
Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  ice: xsk: fix cleaned_count setting  (Maciej Fijalkowski)
Currently cleaned_count is initialized to ICE_DESC_UNUSED(rx_ring) and later on during the Rx processing it is incremented per each frame that the driver consumed. This can result in excessive buffers requested from the xsk pool based on that value. To address this, just drop cleaned_count and pass ICE_DESC_UNUSED(rx_ring) directly as a function argument to ice_alloc_rx_bufs_zc(). The idea is to ask for as many buffers as were consumed. Let us also call ice_alloc_rx_bufs_zc unconditionally at the end of ice_clean_rx_irq_zc. This was already changed this way for the corresponding ice_clean_rx_irq, but not here. Fixes: 2d4238f55697 ("ice: Add support for AF_XDP") Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  ice: xsk: allow empty Rx descriptors on XSK ZC data path  (Maciej Fijalkowski)
Commit ac6f733a7bd5 ("ice: allow empty Rx descriptors") stated that ice HW can produce empty descriptors that are valid and they should be processed. Add this support to xsk ZC path to avoid potential processing problems. Fixes: 2d4238f55697 ("ice: Add support for AF_XDP") Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  ice: xsk: do not clear status_error0 for ntu + nb_buffs descriptor  (Maciej Fijalkowski)
The descriptor that ntu is pointing at when we exit ice_alloc_rx_bufs_zc() should not have its corresponding DD bit cleared, as the descriptor is not allocated there and is not valid for HW usage. The allocation routine at the entry will fill the descriptor that ntu points to after it was set to ntu + nb_buffs on the previous call. Even the spec says: "The tail pointer should be set to one descriptor beyond the last empty descriptor in host descriptor ring." Therefore, step away from clearing status_error0 on the ntu + nb_buffs descriptor. Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface") Reported-by: Elza Mathew <elza.mathew@intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-12-17  ice: remove dead store on XSK hotpath  (Alexander Lobakin)
The 'if (ntu == rx_ring->count)' block in ice_alloc_rx_bufs_zc() was previously residing in the loop, but after introducing the batched interface it is used only to wrap around the NTU descriptor, so there is no longer a need to assign 'xdp'. Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface") Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>