author    Jacob Keller <jacob.e.keller@intel.com>    2017-04-13 04:45:52 -0400
committer Jeff Kirsher <jeffrey.t.kirsher@intel.com> 2017-04-19 17:45:07 -0700
commit    e4b433f4a74196476ccf226e450c4582428641c1 (patch)
tree      b66067907f2ab322aaf10b03935db556643bb6fe /drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
parent    9dc2e417383815bc6b8239ae2714d145c167b5c8 (diff)
i40e: reset all VFs in parallel when rebuilding PF
When there are a lot of active VFs, it can take multiple seconds to finish resetting all of them during certain flows, which can cause some VFs to fail to wait long enough for the reset to occur. The user might see messages like "Never saw reset" or "Reset never finished" and the VF driver will stop functioning properly.

The naive solution would be to simply increase the wait timer. We can get much more clever. Notice that i40e_reset_vf is run in a serialized fashion, and includes lots of delays.

There are two prominent delays which take most of the time. First, when we begin resetting VFs, we have multiple 10ms delays which accrue because we reset each VF in a serial fashion. These delays accumulate to almost 4 seconds when handling the maximum number of VFs (128). Second, there is a massive 50ms delay each time we disable queues on a VSI. This delay is necessary to allow HW to finish disabling queues before we restore functionality. However, just as in the first case, we pay this cost once per VF, rather than disabling all VFs and waiting once.

Both of these can be fixed, but doing so requires some refactoring to handle the special cases. First, we need the i40e_vsi_wait_queues_disabled function, which was previously DCB specific. Second, we need to implement our own i40e_vsi_stop_rings_no_wait function, which handles stopping the rings without the delays.

Finally, implement an i40e_reset_all_vfs function, which first starts the reset of all VFs and pays the wait cost all at once, rather than waiting serially for each VF before processing the next one. After the VFs have been reset, we disable all the VF queues and then wait for them to disable, again organizing the flow so that the wait cost is paid only once. Finally, after the queues are disabled, we begin restoring VF functionality.

The result is reducing the wait time by a large factor and ensuring that VFs do not time out when waiting in the VF driver.

Change-ID: Ia6e8cf8d98131b78aec89db78afb8d905c9b12be
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
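As an illustration, here is a minimal sketch of the batched flow described above, not the verbatim kernel implementation. i40e_trigger_vf_reset, i40e_vf_reset_wait, and i40e_cleanup_reset_vf are hypothetical stand-ins for the per-VF trigger/poll/restore steps; i40e_vsi_stop_rings_no_wait and i40e_vsi_wait_queues_disabled are the helpers named above.

/* Sketch of the parallelized reset flow (assumed helper names). */
void i40e_reset_all_vfs(struct i40e_pf *pf, bool flr)
{
	int v;

	/* Phase 1: kick off the reset for every VF up front, so the
	 * ~10ms settle delay is paid once rather than once per VF.
	 */
	for (v = 0; v < pf->num_alloc_vfs; v++)
		i40e_trigger_vf_reset(&pf->vf[v], flr);

	/* Phase 2: poll HW until each VF reports its reset complete
	 * (i40e_vf_reset_wait is a hypothetical polling helper).
	 */
	for (v = 0; v < pf->num_alloc_vfs; v++)
		i40e_vf_reset_wait(&pf->vf[v]);

	/* Phase 3: request queue disable on every VF VSI without
	 * waiting, then wait for all of them together, so the 50ms
	 * HW settle delay is also paid only once.
	 */
	for (v = 0; v < pf->num_alloc_vfs; v++)
		i40e_vsi_stop_rings_no_wait(pf->vsi[pf->vf[v].lan_vsi_idx]);

	for (v = 0; v < pf->num_alloc_vfs; v++)
		i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[v].lan_vsi_idx]);

	/* Phase 4: HW is quiesced; restore VF functionality. */
	for (v = 0; v < pf->num_alloc_vfs; v++)
		i40e_cleanup_reset_vf(&pf->vf[v]);
}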
Diffstat (limited to 'drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h')
 drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
index 37af437daa5d..9495f1422122 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
@@ -124,6 +124,7 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
 			   u32 v_retval, u8 *msg, u16 msglen);
 int i40e_vc_process_vflr_event(struct i40e_pf *pf);
 void i40e_reset_vf(struct i40e_vf *vf, bool flr);
+void i40e_reset_all_vfs(struct i40e_pf *pf, bool flr);
 void i40e_vc_notify_vf_reset(struct i40e_vf *vf);
 
 /* VF configuration related iplink handlers */
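For context, a hedged sketch of how a PF rebuild path could use the new entry point instead of resetting each VF serially; the actual call site in i40e_main.c may differ from this simplified fragment.

	/* Before: serial resets, every delay paid once per VF. */
	for (v = 0; v < pf->num_alloc_vfs; v++)
		i40e_reset_vf(&pf->vf[v], true);

	/* After: one call that batches trigger, wait, and restore. */
	i40e_reset_all_vfs(pf, true);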