author     Daniel Borkmann <daniel@iogearbox.net>  2019-11-15 23:49:27 +0100
committer  Daniel Borkmann <daniel@iogearbox.net>  2019-11-15 23:49:34 +0100
commit     2893c996d8ae021611d02906b9e6fcd0c765fba4
tree       f22e546f59369fa17873ef1ec30cc40399460de7 /tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
parent     c3d6324f841bab2403be6419986e2b1d1068d423
parent     d6f39601ec5e708fb666a2ad437c7bef4cfab39b
Merge branch 'bpf-trampoline'
Alexei Starovoitov says:
====================
Introduce BPF trampoline that works as a bridge between kernel functions, BPF
programs and other BPF programs.
The first use case is fentry/fexit BPF programs that are roughly equivalent to
kprobe/kretprobe. Unlike k[ret]probes, there is practically zero overhead to call
a set of BPF programs before or after a kernel function.
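For illustration only, a minimal fexit program in the style of the selftest added
below might look like the following sketch; bpf_fentry_test1() is one of the kernel
test functions used by the selftests, and the exact context layout shown is an
assumption of this sketch:

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: observe a kernel function's return via fexit.
 * The context exposes the function's arguments followed by its
 * return value; the field types here are assumptions for the sketch.
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct test1_ctx {
	__u64 a;	/* first argument of bpf_fentry_test1() */
	__u64 ret;	/* return value, available in fexit programs */
};
static volatile __u64 seen_ret;

SEC("fexit/bpf_fentry_test1")
int trace_test1(struct test1_ctx *ctx)
{
	/* runs right after bpf_fentry_test1() returns, with no
	 * kprobe/int3 overhead involved
	 */
	seen_ret = ctx->ret;
	return 0;
}

char _license[] SEC("license") = "GPL";

Functionally this mirrors a kretprobe handler, but it is invoked directly from the
generated trampoline instead of going through the kprobe machinery.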
The second use case is heavily influenced by pain points in XDP development.
The BPF trampoline allows attaching a similar fentry/fexit BPF program to any
networking BPF program. It's now possible to see packets on input and output of
any XDP, TC, lwt, or cgroup program without disturbing it. This greatly helps
BPF-based network troubleshooting.
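As a rough sketch of that use case (the target program name "xdp_prog" is a
placeholder, and the fexit context layout for an XDP target is an assumption here,
mirroring the technique of the selftest below):

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: attach an fexit program to an already-loaded XDP
 * program to observe packets and verdicts without modifying it.
 * "xdp_prog" is a placeholder target name; the minimal struct
 * definition relies on the same preserve_access_index relocation
 * trick as the selftest below.
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct xdp_buff {
	void *data;
	void *data_end;
};

struct xdp_fexit_ctx {
	struct xdp_buff *xdp;	/* the XDP program's context */
	__u64 ret;		/* its verdict: XDP_PASS, XDP_DROP, ... */
};
static volatile __u64 pkts_seen;

SEC("fexit/xdp_prog")
int observe_xdp(struct xdp_fexit_ctx *ctx)
{
	struct xdp_buff *xdp = ctx->xdp;
	void *data, *data_end;

	__builtin_preserve_access_index(({
		data = xdp->data;
		data_end = xdp->data_end;
	}));
	if (data < data_end)	/* packet contents are visible here */
		pkts_seen++;
	return 0;
}

char _license[] SEC("license") = "GPL";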
The third use case of the BPF trampoline will be explored in follow-up patches.
The BPF trampoline will be used to dynamically link BPF programs. It's a more
generic mechanism than the array and linked list of programs used in tracing,
networking, and cgroups. In many cases it can be used as a replacement for
bpf_tail_call-based program chaining. See [1] for the long-term design discussion.
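For contrast, the bpf_tail_call()-based chaining that this could replace looks
roughly like the following sketch (map size and the jump index are arbitrary
placeholders):

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: classic program chaining via bpf_tail_call() and a
 * BPF_MAP_TYPE_PROG_ARRAY. The next program is looked up by index at
 * runtime; on success the tail call never returns to the caller.
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 8);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("xdp")
int chain_entry(struct xdp_md *ctx)
{
	/* jump to the program stored in slot 0; falls through if empty */
	bpf_tail_call(ctx, &jmp_table, 0);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

A trampoline-based link would replace the fixed prog-array slot with a direct call
patched in at attach time.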
v3 -> v4:
- Included Peter's
"86/alternatives: Teach text_poke_bp() to emulate instructions" as a first patch.
If it changes between now and merge window, I'll rebease to newer version.
The patch is necessary to do s/text_poke/text_poke_bp/ in patch 3 to fix the race.
- In patch 4 fixed bpf_trampoline creation race spotted by Andrii.
- Added patch 15 that annotates prog->kern bpf context types. It made patches 16
and 17 cleaner and more generic.
- Addressed Andrii's feedback in other patches.
v2 -> v3:
- Addressed Song's and Andrii's comments
- Fixed a few minor bugs discovered while testing
- Added one more libbpf patch
v1 -> v2:
- Addressed Andrii's comments
- Added more tests for fentry/fexit on kernel functions, including a stress test
for the maximum number of progs per trampoline.
- Fixed a race in btf_resolve_helper_id()
- Added a patch to compare BTF types of function arguments with actual types.
- Added support for attaching BPF program to another BPF program via trampoline
- Converted to use the text_poke() API. That's the only viable mechanism to
implement BPF-to-BPF attach. BPF-to-kernel attach can be refactored to use
register_ftrace_direct() whenever it's available.
[1] https://lore.kernel.org/bpf/20191112025112.bhzmrrh2pr76ssnh@ast-mbp.dhcp.thefacebook.com/
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Diffstat (limited to 'tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c')
-rw-r--r--   tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c   91
1 file changed, 91 insertions, 0 deletions
diff --git a/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c b/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
new file mode 100644
index 000000000000..981f0474da5a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+struct sk_buff {
+	unsigned int len;
+};
+
+struct args {
+	struct sk_buff *skb;
+	ks32 ret;
+};
+static volatile __u64 test_result;
+SEC("fexit/test_pkt_access")
+int test_main(struct args *ctx)
+{
+	struct sk_buff *skb = ctx->skb;
+	int len;
+
+	__builtin_preserve_access_index(({
+		len = skb->len;
+	}));
+	if (len != 74 || ctx->ret != 0)
+		return 0;
+	test_result = 1;
+	return 0;
+}
+
+struct args_subprog1 {
+	struct sk_buff *skb;
+	ks32 ret;
+};
+static volatile __u64 test_result_subprog1;
+SEC("fexit/test_pkt_access_subprog1")
+int test_subprog1(struct args_subprog1 *ctx)
+{
+	struct sk_buff *skb = ctx->skb;
+	int len;
+
+	__builtin_preserve_access_index(({
+		len = skb->len;
+	}));
+	if (len != 74 || ctx->ret != 148)
+		return 0;
+	test_result_subprog1 = 1;
+	return 0;
+}
+
+/* Though test_pkt_access_subprog2() is defined in C as:
+ * static __attribute__ ((noinline))
+ * int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
+ * {
+ *     return skb->len * val;
+ * }
+ * llvm optimizations remove 'int val' argument and generate BPF assembly:
+ *   r0 = *(u32 *)(r1 + 0)
+ *   w0 <<= 1
+ *   exit
+ * In such case the verifier falls back to conservative and
+ * tracing program can access arguments and return value as u64
+ * instead of accurate types.
+ */
+struct args_subprog2 {
+	ku64 args[5];
+	ku64 ret;
+};
+static volatile __u64 test_result_subprog2;
+SEC("fexit/test_pkt_access_subprog2")
+int test_subprog2(struct args_subprog2 *ctx)
+{
+	struct sk_buff *skb = (void *)ctx->args[0];
+	__u64 ret;
+	int len;
+
+	bpf_probe_read_kernel(&len, sizeof(len),
+			      __builtin_preserve_access_index(&skb->len));
+
+	ret = ctx->ret;
+	/* bpf_prog_load() loads "test_pkt_access.o" with BPF_F_TEST_RND_HI32
+	 * which randomizes upper 32 bits after BPF_ALU32 insns.
+	 * Hence after 'w0 <<= 1' upper bits of $rax are random.
+	 * That is expected and correct. Trim them.
+	 */
+	ret = (__u32) ret;
+	if (len != 74 || ret != 148)
+		return 0;
+	test_result_subprog2 = 1;
+	return 0;
+}
+char _license[] SEC("license") = "GPL";
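The userspace side of such a test (not part of this diff) would roughly load the
target object, open the fexit object against the target program's fd, and attach
each program via the tracing attach API. A hedged sketch with error handling
omitted, using libbpf calls as they existed around this series:

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch of the userspace harness: load the fexit target program,
 * then load fexit_bpf2bpf.o against it and attach its fexit programs
 * through BPF trampolines. Exact libbpf calls may differ by version.
 */
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(void)
{
	struct bpf_object *tgt_obj, *obj;
	struct bpf_program *prog;
	struct bpf_link *link;
	int tgt_prog_fd;

	/* load the program to be observed; type is inferred from its
	 * section name
	 */
	bpf_prog_load("test_pkt_access.o", BPF_PROG_TYPE_UNSPEC,
		      &tgt_obj, &tgt_prog_fd);

	/* tell libbpf which program the "fexit/..." sections refer to */
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
			    .attach_prog_fd = tgt_prog_fd);
	obj = bpf_object__open_file("fexit_bpf2bpf.o", &opts);
	bpf_object__load(obj);

	/* attach every fexit program via a BPF trampoline */
	bpf_object__for_each_program(prog, obj) {
		link = bpf_program__attach_trace(prog);
		(void)link;	/* error handling omitted in this sketch */
	}

	/* ... trigger test_pkt_access and read the test_result globals ... */
	return 0;
}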