author		Paolo Abeni <pabeni@redhat.com>	2023-01-17 10:27:31 +0100
committer	Paolo Abeni <pabeni@redhat.com>	2023-01-17 10:27:31 +0100
commit		05cb8b39ca59e7bc4e0810a66f59266d76e4671a (patch)
tree		15cc4283ae175c98748a18395edd6f06455ba8e5 /Makefile
parent		501543b4fff0ff70bde28a829eb8835081ccef2f (diff)
parent		eedade12f4cb7284555c4c0314485e9575c70ab7 (diff)
Merge branch 'net-use-kmem_cache_free_bulk-in-kfree_skb_list'
Jesper Dangaard Brouer says:
====================
net: use kmem_cache_free_bulk in kfree_skb_list
The kfree_skb_list function walks the SKB list (via skb->next) and frees
each SKB individually back to the SLUB/SLAB allocator (kmem_cache). It is
more efficient to bulk-free them via the kmem_cache_free_bulk API.
The netstack NAPI fastpath already uses the kmem_cache bulk alloc and
free APIs for SKBs.
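As a rough illustration of the bulk-free idea, here is a minimal sketch
(not the code from this series: the helper name free_skb_list_bulk(),
the batch size, and the direct skb_release_all() call are assumptions,
and the real code must additionally handle SKB reference counts and
fclones):

    /* Sketch: return SKB heads to the slab cache in batches instead
     * of one kmem_cache_free() call per SKB.
     */
    #define SKB_FREE_BULK_SIZE 16	/* illustrative batch size */

    static void free_skb_list_bulk(struct kmem_cache *cache,
                                   struct sk_buff *segs)
    {
        void *batch[SKB_FREE_BULK_SIZE];
        size_t count = 0;

        while (segs) {
            struct sk_buff *next = segs->next;

            /* release data, frags, dst, ... before freeing the head */
            skb_release_all(segs, SKB_DROP_REASON_QDISC_DROP);
            batch[count++] = segs;

            if (count == SKB_FREE_BULK_SIZE) {
                kmem_cache_free_bulk(cache, count, batch);
                count = 0;
            }
            segs = next;
        }
        if (count)
            kmem_cache_free_bulk(cache, count, batch);
    }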
The kfree_skb_list call got an interesting optimization in commit
520ac30f4551 ("net_sched: drop packets after root qdisc lock is
released"), which can build a "to_free" list of SKBs, e.g. when qdisc
enqueue fails or deliberately chooses to drop packets. This isn't a
normal data fastpath, but the situation is likely to occur when the
system/qdisc is under heavy load, so it makes sense to use a faster API
for freeing the SKBs.
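For reference, the deferred-drop pattern looks roughly like this
(condensed from include/net/sch_generic.h and net/core/dev.c; the
snippet is simplified and error/stats handling is omitted):

    /* Enqueue paths chain dropped SKBs onto a caller-provided list
     * instead of freeing them while the root qdisc lock is held.
     */
    static inline void __qdisc_drop(struct sk_buff *skb,
                                    struct sk_buff **to_free)
    {
        skb->next = *to_free;
        *to_free = skb;
    }

    /* The caller (simplified from __dev_xmit_skb) frees the whole
     * list only after dropping the root qdisc lock:
     */
    struct sk_buff *to_free = NULL;

    rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
    spin_unlock(root_lock);
    if (unlikely(to_free))
        kfree_skb_list(to_free);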
E.g. the qdisc fq_codel (often the distro default) will drop batches of
packets from the fattest elephant flow, capped at 64 packets by default
(adjustable via the tc drop_batch argument).
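As a usage example (eth0 is an illustrative interface name), the drop
batch size can be changed with:

    tc qdisc replace dev eth0 root fq_codel drop_batch 128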
Performance measurements are documented in [1]:
[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/kfree_skb_list01.org
====================
Link: https://lore.kernel.org/r/167361788585.531803.686364041841425360.stgit@firesoul
Signed-off-by: Paolo Abeni <pabeni@redhat.com>