author    Alexander Lobakin <aleksander.lobakin@intel.com>    2024-12-03 18:37:24 +0100
committer Jakub Kicinski <kuba@kernel.org>                    2024-12-05 18:41:06 -0800
commit    ca5c94949facce1f67a4a9a9528a27f635ff3e78 (patch)
tree      92336b8821f752fe42f28ff61b898a405393b761 /include/net/xsk_buff_pool.h
parent    1daa6591ab7d6893cbcd8d02c2dbe43af42d36de (diff)
xsk: align &xdp_buff_xsk harder
After the series "XSk buff on a diet" by Maciej, the largest power of 2
by which sizeof(struct xdp_buff_xsk) is divisible dropped from 16 to 8
on x86_64. In addition, sizeof(xdp_buff_xsk) is now 120 bytes, which,
combined with the 8-byte alignment, leaves 8 bytes free at the end of
the second cacheline, so the elements of an array of buffs end up
straddling cacheline boundaries at varying offsets.
Use __aligned_largest for this struct. This alignment is usually 16
bytes, which makes the struct fill two full cachelines and aligns an
array of them nicely. ___cacheline_aligned may be excessive here,
especially on arches with 128-256 byte cachelines, as well as on 32-bit
arches (76 -> 96 bytes on MIPS32R2), while doing no better than
_largest.
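
A minimal userspace sketch of the layout arithmetic above. The 120-byte
stand-in struct and the 64-byte cacheline are assumptions taken from the
figures in this message, not the real struct xdp_buff_xsk; in the
kernel, __aligned_largest expands to __attribute__((aligned)), whose
implicit alignment is 16 bytes on x86_64:

	#include <stdio.h>

	#define CACHELINE 64

	struct buf_plain {		/* stand-in: 120 bytes, 8-byte aligned */
		char data[120];
	} __attribute__((aligned(8)));

	struct buf_aligned {		/* same payload, __aligned_largest analogue */
		char data[120];
	} __attribute__((aligned));	/* sizeof rounds up to 128 on x86_64 */

	int main(void)
	{
		struct buf_plain p[4];
		struct buf_aligned a[4];

		/* Offset of element 1 relative to element 0, modulo a
		 * cacheline: 120 % 64 = 56, i.e. each element starts at a
		 * different phase within the cacheline. */
		printf("plain:   sizeof=%3zu, elem1 offset %% CL = %zu\n",
		       sizeof(*p), sizeof(*p) % CACHELINE);
		/* 128 % 64 = 0: every element spans exactly two cachelines. */
		printf("aligned: sizeof=%3zu, elem1 offset %% CL = %zu\n",
		       sizeof(*a), sizeof(*a) % CACHELINE);
		return 0;
	}

Built with gcc on x86_64, this should print sizeof=120 with phase 56 for
the plain variant and sizeof=128 with phase 0 for the aligned one.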
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://patch.msgid.link/20241203173733.3181246-2-aleksander.lobakin@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'include/net/xsk_buff_pool.h')
 include/net/xsk_buff_pool.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index bb03cee716b3..7637799b6c19 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -29,7 +29,7 @@ struct xdp_buff_xsk {
 	dma_addr_t frame_dma;
 	struct xsk_buff_pool *pool;
 	struct list_head list_node;
-};
+} __aligned_largest;
 
 #define XSK_CHECK_PRIV_TYPE(t) BUILD_BUG_ON(sizeof(t) > offsetofend(struct xdp_buff_xsk, cb))
 #define XSK_TX_COMPL_FITS(t) BUILD_BUG_ON(sizeof(struct xsk_tx_metadata_compl) > sizeof(t))
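
The two macros kept at the end of the hunk are compile-time size guards.
A rough userspace analogue of the XSK_CHECK_PRIV_TYPE() check, assuming
C11 static_assert in place of the kernel's BUILD_BUG_ON and an
illustrative layout rather than the real struct:

	#include <assert.h>
	#include <stddef.h>

	/* Userspace stand-in for the kernel's offsetofend(): the offset
	 * of the first byte past member m of type T. */
	#define offsetofend(T, m) (offsetof(T, m) + sizeof(((T *)0)->m))

	struct xsk_buff_like {		/* illustrative, not the real layout */
		char cb[24];		/* area a driver overlays with its own type */
		void *pool;
	};

	struct drv_priv {		/* hypothetical driver-private type */
		unsigned int magic;
		void *ring;
	};

	/* The XSK_CHECK_PRIV_TYPE() idea: the driver-private type must
	 * not extend past the end of cb[], or it would clobber the
	 * members that follow it. */
	static_assert(sizeof(struct drv_priv) <=
		      offsetofend(struct xsk_buff_like, cb),
		      "driver-private type does not fit in cb[]");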