author    | Florian Westphal <fw@strlen.de>         | 2016-11-22 14:44:19 +0100
committer | Pablo Neira Ayuso <pablo@netfilter.org> | 2016-12-06 21:42:19 +0100
commit    | ae0ac0ed6fcf5af3be0f63eb935f483f44a402d2 (patch)
tree      | bdc9adb5d4744f958630b4e56defc818c905dbe3 /include
parent    | f28e15bacedd444608e25421c72eb2cf4527c9ca (diff)
netfilter: x_tables: pack percpu counter allocations
Instead of allocating each xt_counter individually, allocate 4k chunks
and then use these to serve counter allocation requests.

This should speed up rule evaluation by increasing data locality; it
also speeds up ruleset loading because we reduce the number of calls to
the percpu allocator.

As Eric points out, we can't use PAGE_SIZE as the chunk size: the
allocation would fail on arches with a 64k page size.
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
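
The diffstat below is limited to include/, so the allocator body itself is
not part of this patch view. As a rough illustration only, here is a minimal
sketch of the packing scheme the message describes, assuming a 4k block
constant (XT_PCPU_BLOCK_SIZE, presumably defined on the x_tables.c side) and
the new two-argument signature from the header hunk:

	/* Sketch, not the patch itself: hand out xt_counter slots from a
	 * shared percpu chunk, falling back to a fresh chunk when the
	 * current one is exhausted.
	 */
	#define XT_PCPU_BLOCK_SIZE 4096

	bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
				     struct xt_counters *counter)
	{
		BUILD_BUG_ON(XT_PCPU_BLOCK_SIZE < (sizeof(*counter) * 2));

		if (nr_cpu_ids <= 1)
			return true;	/* no percpu counters on UP */

		if (!state->mem) {
			/* carve a fresh 4k percpu chunk; align == size */
			state->mem = __alloc_percpu(XT_PCPU_BLOCK_SIZE,
						    XT_PCPU_BLOCK_SIZE);
			if (!state->mem)
				return false;
		}

		/* hand out the next slot from the current chunk */
		counter->pcnt = (__force unsigned long)(state->mem + state->off);
		state->off += sizeof(*counter);

		/* chunk exhausted: start a new one on the next request */
		if (state->off > (XT_PCPU_BLOCK_SIZE - sizeof(*counter))) {
			state->mem = NULL;
			state->off = 0;
		}
		return true;
	}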
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/netfilter/x_tables.h | 7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
index 05a94bd32c55..5117e4d2ddfa 100644
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -403,8 +403,13 @@ static inline unsigned long ifname_compare_aligned(const char *_a,
 	return ret;
 }
 
+struct xt_percpu_counter_alloc_state {
+	unsigned int off;
+	const char __percpu *mem;
+};
 
-bool xt_percpu_counter_alloc(struct xt_counters *counters);
+bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
+			     struct xt_counters *counter);
 void xt_percpu_counter_free(struct xt_counters *cnt);
 
 static inline struct xt_counters *
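
With the new signature, a caller threads a single allocation state through
all of its per-rule allocations, so consecutive rules share the same 4k
chunk. A hypothetical loader loop (identifiers such as nrules and
rules[i].counters are invented for illustration, not taken from the patch)
might look like:

	/* Hypothetical usage sketch: one on-stack state for the whole
	 * ruleset; unwind already-allocated counters on failure.
	 */
	struct xt_percpu_counter_alloc_state state = {};
	unsigned int i;

	for (i = 0; i < nrules; i++) {
		if (!xt_percpu_counter_alloc(&state, &rules[i].counters)) {
			while (i > 0)
				xt_percpu_counter_free(&rules[--i].counters);
			return -ENOMEM;
		}
	}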