author    Eli Cohen <elic@nvidia.com>    2022-05-16 11:47:35 +0300
committer Michael S. Tsirkin <mst@redhat.com>    2022-05-18 12:31:31 -0400
commit    acde3929492bcb9ceb0df1270230c422b1013798 (patch)
tree      319750d3f2f2179f8d99e7535098b8d7d04bc5ac /Documentation
parent    42226c989789d8da4af1de0c31070c96726d990c (diff)
vdpa/mlx5: Use consistent RQT size
The current code evaluates RQT size based on the configured number of
virtqueues. This can raise an issue in the following scenario:

Assume MQ was negotiated.
1. mlx5_vdpa_set_map() gets called.
2. handle_ctrl_mq() is called, setting cur_num_vqs to some value lower
   than the configured max VQs.
3. A second set_map gets called, but now a smaller number of VQs is used
   to evaluate the size of the RQT.
4. handle_ctrl_mq() is called with a value larger than what the RQT can
   hold. This emits errors and the driver state is compromised.

To fix this, we use a new field in struct mlx5_vdpa_net to hold the
required number of entries in the RQT. This value is evaluated in
mlx5_vdpa_set_driver_features(), where the negotiated features are all
set up.

In addition, we take the max capability of RQT entries into account
early, when the device is added, so we don't need to consider it when
creating the RQT.

Last, we remove the use of mlx5_vdpa_max_qps(), which just returns
max_vqs / 2, to make the code clearer.

Fixes: 52893733f2c5 ("vdpa/mlx5: Add multiqueue support")
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eli Cohen <elic@nvidia.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
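The idea behind the fix can be sketched as follows. This is a minimal,
self-contained illustration, not the actual patch: the struct layout and
the sketch_* names are assumptions, and only the general approach (derive
the RQT size once from the negotiated MQ feature and the configured max
queue pairs, with the device's RQT capability checked early) follows the
commit message above.

#include <stdint.h>

/* Illustrative sketch only -- names and layout are assumptions, not the
 * real mlx5 vDPA structures. */

#define SKETCH_VIRTIO_NET_F_MQ 22          /* virtio-net MQ feature bit */

struct sketch_vdpa_net {
	uint64_t negotiated_features;
	uint16_t max_virtqueue_pairs;       /* configured maximum */
	uint16_t rqt_size;                  /* entries the RQT must hold */
};

/* Called once when features are negotiated (the commit does this in
 * mlx5_vdpa_set_driver_features()), so later set_map/ctrl-MQ paths can
 * rely on a stable RQT size. */
static void sketch_set_rqt_size(struct sketch_vdpa_net *ndev,
				uint16_t max_rqt_entries_cap)
{
	if (ndev->negotiated_features & (1ULL << SKETCH_VIRTIO_NET_F_MQ))
		ndev->rqt_size = ndev->max_virtqueue_pairs;
	else
		ndev->rqt_size = 1;

	/* The commit checks the device's RQT capability when the device
	 * is added; clamping here stands in for that early check. */
	if (ndev->rqt_size > max_rqt_entries_cap)
		ndev->rqt_size = max_rqt_entries_cap;
}

With the size fixed at feature-negotiation time, a later set_map call
cannot shrink the RQT underneath a subsequent handle_ctrl_mq() request.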