author    Dragos Tatulea <dtatulea@nvidia.com>    2025-11-04 08:48:35 +0200
committer Jakub Kicinski <kuba@kernel.org>        2025-11-05 17:48:37 -0800
commit    d8a7ed9586c7579a99e9e2d90988c9eceeee61ff (patch)
tree      98c790b246448327f5995da58ca1b719386c1c6f
parent    bacd8d80181ebe34b599a39aa26bf73a44c91e55 (diff)
net/mlx5e: SHAMPO, Fix header formulas for higher MTUs and 64K pages
The MLX5E_SHAMPO_WQ_HEADER_PER_PAGE and MLX5E_SHAMPO_LOG_MAX_HEADER_ENTRY_SIZE macros are used directly in several places under the assumption that there will always be more headers per WQE than headers per page. However, this assumption doesn't hold for 64K page sizes and higher MTUs (> 4K). This can first be observed during header page allocation: ksm_entries will become 0 during alignment to MLX5E_SHAMPO_WQ_HEADER_PER_PAGE.

This patch introduces 2 additional members to the mlx5e_shampo_hd struct which are meant to be used instead of the macros mentioned above. When the number of headers per WQE goes below MLX5E_SHAMPO_WQ_HEADER_PER_PAGE, clamp the number of headers per page and expand the header size accordingly so that the headers for one WQE cover a full page.

All the formulas are adapted to use these two new members.

Fixes: 945ca432bfd0 ("net/mlx5e: SHAMPO, Drop info array")
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/1762238915-1027590-4-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
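The clamping described above can be illustrated with a minimal standalone sketch (not the driver code itself). The member names hd_per_page and log_hd_entry_size, and the assumed 256-byte maximum header entry size (log size 8), are illustrative stand-ins for the new struct members and macros referenced in the message:

    /* Minimal userspace sketch of the header-layout clamping, assuming a
     * 256 B max header entry (log size 8). Names are illustrative only. */
    #include <stdio.h>

    #define LOG_MAX_HEADER_ENTRY_SIZE 8 /* assumed: 256 B max header entry */

    static void shampo_header_layout(unsigned long page_size,
                                     unsigned int hd_per_wqe)
    {
            unsigned int log_hd_entry_size = LOG_MAX_HEADER_ENTRY_SIZE;
            unsigned int hd_per_page = page_size >> log_hd_entry_size;

            if (hd_per_wqe < hd_per_page) {
                    /* Fewer headers per WQE than fit in a page: clamp the
                     * per-page count and grow the entry size so that one
                     * WQE's headers still cover a full page. */
                    hd_per_page = hd_per_wqe;
                    while (((unsigned long)hd_per_page << log_hd_entry_size) < page_size)
                            log_hd_entry_size++;
            }

            printf("page=%lu hd_per_wqe=%u -> hd_per_page=%u entry=%lu B\n",
                   page_size, hd_per_wqe, hd_per_page,
                   1UL << log_hd_entry_size);
    }

    int main(void)
    {
            shampo_header_layout(4096, 256);   /* 4K pages: unaffected */
            shampo_header_layout(65536, 128);  /* 64K pages, high MTU: clamped */
            return 0;
    }

With 64K pages and 128 headers per WQE, the sketch clamps hd_per_page to 128 and expands the entry size to 512 B so 128 * 512 B covers the full 64K page, which is the behavior the patch describes; the 4K-page case is left untouched.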