2024-04-02selftests/bpf: Using llvm may_goto inline asm for cond_break macroYonghong Song
Currently, the cond_break macro uses hard-coded bytes to encode the may_goto insn. Patch [1] in llvm implemented the may_goto insn in the BPF backend. Replace the byte-level encoding with llvm inline asm for better usability. Use of the llvm may_goto insn is controlled by the macro __BPF_FEATURE_MAY_GOTO. [1] https://github.com/llvm/llvm-project/commit/0e0bfacff71859d1f9212205f8f873d47029d3fb Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20240402025446.3215182-1-yonghong.song@linux.dev
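For illustration, a minimal sketch of how such a macro could look when __BPF_FEATURE_MAY_GOTO is available (assumed names and asm constraints, not the literal selftest header):

#ifdef __BPF_FEATURE_MAY_GOTO
/* Sketch: emit the may_goto insn via inline asm; when the verifier's
 * iteration budget is exhausted, control jumps to l_break and the
 * surrounding loop is exited.
 */
#define cond_break					\
	({ __label__ l_break, l_continue;		\
	   asm volatile goto("may_goto %l[l_break]"	\
			     :::: l_break);		\
	   goto l_continue;				\
	   l_break: break;				\
	   l_continue:;					\
	})
#endif /* __BPF_FEATURE_MAY_GOTO */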
2024-04-02bpf: Add a verbose message if map limit is reachedAnton Protopopov
When more than 64 maps are used by a program and its subprograms, the verifier returns -E2BIG. Add a verbose message which highlights the source of the error and also prints the actual limit. Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20240402073347.195920-1-aspsk@isovalent.com
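Conceptually, the verifier check becomes something along these lines (a sketch only; the exact message wording is in the patch):

/* Sketch: report which limit was hit and its value instead of a bare -E2BIG. */
if (env->used_map_cnt >= MAX_USED_MAPS) {
	verbose(env, "The total number of maps per program has reached the limit of %u\n",
		MAX_USED_MAPS);
	return -E2BIG;
}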
2024-04-02bpf: Fix typo in uapi doc commentsDavid Lechner
In a few places in the bpf uapi headers, EOPNOTSUPP is missing a "P" in the doc comments. This adds the missing "P". Signed-off-by: David Lechner <dlechner@baylibre.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240329152900.398260-2-dlechner@baylibre.com
2024-04-02bpftool: Clean-up typos, punctuation, list formatting in docsRameez Rehman
Improve the formatting of the attach flags for cgroup programs in the relevant man page, and fix typos ("can be on of", "an userspace inet socket") when introducing that list. Also fix a couple of other trivial issues in docs. [ Quentin: Fixed trivial issues in bpftool-gen.rst and bpftool-iter.rst ] Signed-off-by: Rameez Rehman <rameezrehman408@hotmail.com> Signed-off-by: Quentin Monnet <qmo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240331200346.29118-4-qmo@kernel.org
2024-04-02bpftool: Remove useless emphasis on command description in man pagesRameez Rehman
As it turns out, the terms in definition lists in the rST file are already rendered with bold-ish formatting when generating the man pages; all double-star sequences we have in the commands for the command description are unnecessary, and can be removed to make the documentation easier to read. The rST files were automatically processed with: sed -i '/DESCRIPTION/,/OPTIONS/ { /^\*/ s/\*\*//g }' b*.rst Signed-off-by: Rameez Rehman <rameezrehman408@hotmail.com> Signed-off-by: Quentin Monnet <qmo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240331200346.29118-3-qmo@kernel.org
2024-04-02bpftool: Use simpler indentation in source rST for documentationRameez Rehman
The rST manual pages for bpftool would use a mix of tabs and spaces for indentation. While this is the norm in C code, this is rather unusual for rST documents, and over time we've seen many contributors use a wrong level of indentation for documentation update. Let's fix bpftool's indentation in docs once and for all: - Let's use spaces, that are more common in rST files. - Remove one level of indentation for the synopsis, the command description, and the "see also" section. As a result, all sections start with the same indentation level in the generated man page. - Rewrap the paragraphs after the changes. There is no content change in this patch, only indentation and rewrapping changes. The wrapping in the generated source files for the manual pages is changed, but the pages displayed with "man" remain the same, apart from the adjusted indentation level on relevant sections. [ Quentin: rebased on bpf-next, removed indent level for command description and options, updated synopsis, command summary, and "see also" sections. ] Signed-off-by: Rameez Rehman <rameezrehman408@hotmail.com> Signed-off-by: Quentin Monnet <qmo@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240331200346.29118-2-qmo@kernel.org
2024-03-29selftests/bpf: make multi-uprobe tests work in RELEASE=1 modeAndrii Nakryiko
When BPF selftests are built in RELEASE=1 mode with the -O2 optimization level, the uprobe_multi binary, called from multi-uprobe tests, is optimized to the point that all the thousands of target uprobe_multi_func_XXX functions are eliminated, breaking tests. So ensure they are preserved by using the weak attribute. But, actually, compiling the uprobe_multi binary with -O2 takes a really long time, and is quite useless (it's not a benchmark). So in addition to ensuring that uprobe_multi_func_XXX functions are preserved, opt out of -O2 explicitly in the Makefile and stick to -O0. This saves a lot of compilation time. With -O2, just recompiling uprobe_multi:

  $ touch uprobe_multi.c
  $ time make RELEASE=1 -j90
  make RELEASE=1 -j90  291.66s user 2.54s system 99% cpu 4:55.52 total

With -O0:

  $ touch uprobe_multi.c
  $ time make RELEASE=1 -j90
  make RELEASE=1 -j90  22.40s user 1.91s system 99% cpu 24.355 total

5 minutes vs (still slow, but...) 24 seconds. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240329190410.4191353-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
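The preservation part presumably boils down to marking the generated target functions weak, e.g. (an illustrative sketch, not the literal selftest source):

/* Sketch: a weak definition is kept by the compiler even at -O2 when
 * nothing in the binary calls it directly, so the uprobe attach
 * targets survive dead-code elimination.
 */
#define __weak __attribute__((weak))

__weak void uprobe_multi_func_00001(void)
{
	asm volatile ("");	/* keep the body from being folded away */
}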
2024-03-29bpf: Avoid kfree_rcu() under lock in bpf_lpm_trie.Alexei Starovoitov
syzbot reported the following lock sequence:

cpu 2: grabs timer_base lock
       spins on bpf_lpm lock

cpu 1: grabs rcu krcp lock
       spins on timer_base lock

cpu 0: grabs bpf_lpm lock
       spins on rcu krcp lock

The bpf_lpm lock can be the same. The timer_base lock can also be the same due to timer migration. But the rcu krcp lock is always per-cpu, so it cannot be the same lock. Hence it's a false positive. To avoid lockdep complaining, move kfree_rcu() after spin_unlock. Reported-by: syzbot+1fa663a2100308ab6eab@syzkaller.appspotmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20240329171439.37813-1-alexei.starovoitov@gmail.com
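The shape of the fix is roughly the following (a sketch assuming the usual lpm_trie locking pattern, not the exact diff):

/* Sketch: unlink the node while holding the trie lock, but defer the
 * RCU-deferred free until after the lock has been released, so that
 * kfree_rcu() never runs with the bpf_lpm lock held.
 */
struct lpm_trie_node *free_node = NULL;

spin_lock_irqsave(&trie->lock, flags);
/* ... detach 'node' from the trie ... */
free_node = node;
spin_unlock_irqrestore(&trie->lock, flags);

if (free_node)
	kfree_rcu(free_node, rcu);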
2024-03-28Merge branch 'Use start_server and connect_fd_to_fd'Martin KaFai Lau
Geliang Tang says: ==================== Simplify bpf_tcp_ca test by using connect_fd_to_fd and start_server helpers. v4: - Matt reminded me that I shouldn't send a squash-to patch to BPF (thanks), so I updated them into two patches in v4. v3: - split v2 as two patches as Daniel suggested. - The patch "selftests/bpf: Use start_server in bpf_tcp_ca" is merged by Daniel (thanks), but I forgot to drop 'settimeo(lfd, 0)' in it, so I send a squash-to patch to fix this. ==================== Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-28selftests/bpf: Drop settimeo in do_testGeliang Tang
settimeo is invoked in start_server() and in connect_fd_to_fd() already, so there is no need to invoke settimeo(lfd, 0) and settimeo(fd, 0) in do_test() anymore. This patch drops them. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Link: https://lore.kernel.org/r/dbc3613bee3b1c78f95ac9ff468bf47c92f106ea.1711447102.git.tanggeliang@kylinos.cn Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-28selftests/bpf: Use connect_fd_to_fd in bpf_tcp_caGeliang Tang
To simplify the code, use BPF selftests helper connect_fd_to_fd() in bpf_tcp_ca.c instead of open-coding it. This helper is defined in network_helpers.c, and exported in network_helpers.h, which is already included in bpf_tcp_ca.c. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Link: https://lore.kernel.org/r/e105d1f225c643bee838409378dd90fd9aabb6dc.1711447102.git.tanggeliang@kylinos.cn Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-28bpf: Mark bpf prog stack with kmsan_unpoison_memory in interpreter modeMartin KaFai Lau
syzbot reported uninit memory usages during map_{lookup,delete}_elem.

==========
BUG: KMSAN: uninit-value in __dev_map_lookup_elem kernel/bpf/devmap.c:441 [inline]
BUG: KMSAN: uninit-value in dev_map_lookup_elem+0xf3/0x170 kernel/bpf/devmap.c:796
 __dev_map_lookup_elem kernel/bpf/devmap.c:441 [inline]
 dev_map_lookup_elem+0xf3/0x170 kernel/bpf/devmap.c:796
 ____bpf_map_lookup_elem kernel/bpf/helpers.c:42 [inline]
 bpf_map_lookup_elem+0x5c/0x80 kernel/bpf/helpers.c:38
 ___bpf_prog_run+0x13fe/0xe0f0 kernel/bpf/core.c:1997
 __bpf_prog_run256+0xb5/0xe0 kernel/bpf/core.c:2237
==========

The reproducer should be in the interpreter mode. The C reproducer is trying to run the following bpf prog:

    0: (18) r0 = 0x0
    2: (18) r1 = map[id:49]
    4: (b7) r8 = 16777216
    5: (7b) *(u64 *)(r10 -8) = r8
    6: (bf) r2 = r10
    7: (07) r2 += -229
                ^^^^^^^^^^
    8: (b7) r3 = 8
    9: (b7) r4 = 0
   10: (85) call dev_map_lookup_elem#1543472
   11: (95) exit

It is due to the "void *key" (r2) passed to the helper. bpf allows uninit stack memory access for bpf prog with the right privileges. This patch uses kmsan_unpoison_memory() to mark the stack as initialized. This should address different syzbot reports on the uninit "void *key" argument during map_{lookup,delete}_elem. Reported-by: syzbot+603bcd9b0bf1d94dbb9b@syzkaller.appspotmail.com Closes: https://lore.kernel.org/bpf/000000000000f9ce6d061494e694@google.com/ Reported-by: syzbot+eb02dc7f03dce0ef39f3@syzkaller.appspotmail.com Closes: https://lore.kernel.org/bpf/000000000000a5c69c06147c2238@google.com/ Reported-by: syzbot+b4e65ca24fd4d0c734c3@syzkaller.appspotmail.com Closes: https://lore.kernel.org/bpf/000000000000ac56fb06143b6cfa@google.com/ Reported-by: syzbot+d2b113dc9fea5e1d2848@syzkaller.appspotmail.com Closes: https://lore.kernel.org/bpf/0000000000000d69b206142d1ff7@google.com/ Reported-by: syzbot+1a3cf6f08d68868f9db3@syzkaller.appspotmail.com Closes: https://lore.kernel.org/bpf/0000000000006f876b061478e878@google.com/ Tested-by: syzbot+1a3cf6f08d68868f9db3@syzkaller.appspotmail.com Suggested-by: Yonghong Song <yonghong.song@linux.dev> Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20240328185801.1843078-1-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
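The interpreter-side fix presumably amounts to unpoisoning the BPF stack area before the program runs, roughly (a sketch of the pattern in kernel/bpf/core.c, not the exact patch):

/* Sketch: in the interpreter entry, tell KMSAN the on-stack BPF stack
 * is to be treated as initialized, since privileged programs are
 * allowed to read not-yet-written stack slots.
 */
u64 stack[stack_size / sizeof(u64)];
u64 regs[MAX_BPF_EXT_REG];

kmsan_unpoison_memory(stack, sizeof(stack));
FP = (u64)(unsigned long)&stack[ARRAY_SIZE(stack)];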
2024-03-28Merge branch 'bpf-fix-a-couple-of-test-failures-with-lto-kernel'Alexei Starovoitov
Yonghong Song says: ==================== bpf: Fix a couple of test failures with LTO kernel With an LTO kernel built with clang, on an earlier version of the kernel, I encountered two test failures, ksyms and kprobe_multi_bench_attach/kernel. Now with latest bpf-next, only kprobe_multi_bench_attach/kernel failed. But it is possible that the ksyms selftest may fail again in the future. Both test failures are due to static variable/function renaming caused by cross-file inlining. For the ksyms failure, the solution is to strip .llvm.<hash> suffixes for symbols in /proc/kallsyms before comparing against the ksym in the bpf program. For the kprobe_multi_bench_attach/kernel failure, the solution is to either provide names in /proc/kallsyms to the kernel or ignore those names that have a .llvm.<hash> suffix, since the kernel sym name comparison is against /proc/kallsyms. Please see each individual patch for details. Changelogs: v2 -> v3: - no need to check config file, directly do strstr with '.llvm.'. - for kprobe_multi_bench with syms, instead of skipping the syms, consult /proc/kallsyms to find corresponding names. - add a test with populating addrs to the kernel for kprobe multi attach. v1 -> v2: - Let libbpf handle .llvm.<hash> suffixes since it may impact bpf program ksym. ==================== Link: https://lore.kernel.org/r/20240326041443.1197498-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Add a kprobe_multi subtest to use addrs instead of symsYonghong Song
Get addrs directly from available_filter_functions_addrs and send them to the kernel during kprobe_multi_attach. This avoids consultation of /proc/kallsyms. But available_filter_functions_addrs was only introduced in 6.5, i.e. fairly recently, so the test is skipped if the kernel does not support it. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041523.1200301-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Fix kprobe_multi_bench_attach test failure with LTO kernelYonghong Song
In my locally built clang LTO kernel (enabling CONFIG_LTO and CONFIG_LTO_CLANG_THIN), the kprobe_multi_bench_attach/kernel subtest failed like:

  test_kprobe_multi_bench_attach:PASS:get_syms 0 nsec
  test_kprobe_multi_bench_attach:PASS:kprobe_multi_empty__open_and_load 0 nsec
  libbpf: prog 'test_kprobe_empty': failed to attach: No such process
  test_kprobe_multi_bench_attach:FAIL:bpf_program__attach_kprobe_multi_opts unexpected error: -3
  #117/1 kprobe_multi_bench_attach/kernel:FAIL

Multiple symbols in /sys/kernel/debug/tracing/available_filter_functions are renamed in /proc/kallsyms due to cross-file inlining. One example is the static function __access_remote_vm in mm/memory.c. In a non-LTO kernel, we have the following call stack:

  ptrace_access_vm (global, kernel/ptrace.c)
    access_remote_vm (global, mm/memory.c)
      __access_remote_vm (static, mm/memory.c)

With an LTO kernel, it is possible that access_remote_vm() is inlined by ptrace_access_vm(). So we end up with the following call stack:

  ptrace_access_vm (global, kernel/ptrace.c)
    __access_remote_vm (static, mm/memory.c)

The compiler renames __access_remote_vm to __access_remote_vm.llvm.<hash> to prevent a potential name collision. The kernel bpf_kprobe_multi_link_attach() and ftrace_lookup_symbols() try to find addresses based on /proc/kallsyms, hence the current test failed with an LTO kernel. This patch consults /proc/kallsyms to find the corresponding entries for the ksyms, which solves the issue. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041518.1199758-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Add {load,search}_kallsyms_custom_local()Yonghong Song
These two functions allow selftests to load and search kallsyms based on their own specific compare functions. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041513.1199440-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Refactor trace helper func load_kallsyms_local()Yonghong Song
Refactor trace helper function load_kallsyms_local() such that it invokes a common function with a compare function as input. The common function will be used later for other local functions. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041508.1199239-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Refactor some functions for kprobe_multi_testYonghong Song
Refactor some functions in kprobe_multi_test.c to extract helper functions that will be used in later patches, in order to avoid code duplication. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041503.1198982-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28libbpf: Handle <orig_name>.llvm.<hash> symbol properlyYonghong Song
With CONFIG_LTO_CLANG_THIN enabled, with a previous version of the kernel code base ([1]), I hit the following error:

  test_ksyms:PASS:kallsyms_fopen 0 nsec
  test_ksyms:FAIL:ksym_find symbol 'bpf_link_fops' not found
  #118 ksyms:FAIL

The reason is that 'bpf_link_fops' is renamed to

  bpf_link_fops.llvm.8325593422554671469

Due to cross-file inlining, the static variable 'bpf_link_fops' in syscall.c is used by a function in another file. To avoid potential duplicated names, llvm adds the suffix '.llvm.<hash>' ([2]) to the 'bpf_link_fops' variable. Such renaming causes a problem in libbpf if 'bpf_link_fops' is used in a bpf prog as a ksym, because 'bpf_link_fops' then does not match any symbol in /proc/kallsyms. To fix this issue, libbpf needs to understand that the suffix '.llvm.<hash>' is caused by a clang LTO kernel and to process such symbols properly. With the latest bpf-next code base built with CONFIG_LTO_CLANG_THIN, I cannot reproduce the above failure any more. But such an issue could happen with other symbols, or in the future for the bpf_link_fops symbol. For example, with my current kernel, I got the following from /proc/kallsyms:

  ffffffff84782154 d __func__.net_ratelimit.llvm.6135436931166841955
  ffffffff85f0a500 d tk_core.llvm.726630847145216431
  ffffffff85fdb960 d __fs_reclaim_map.llvm.10487989720912350772
  ffffffff864c7300 d fake_dst_ops.llvm.54750082607048300

I could not easily create a selftest to test the newly-added libbpf functionality with a static C test since I do not know which symbol is cross-file inlined. But based on my particular kernel, the following test change can run successfully.

> diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms.c b/tools/testing/selftests/bpf/prog_tests/ksyms.c
> index 6a86d1f07800..904a103f7b1d 100644
> --- a/tools/testing/selftests/bpf/prog_tests/ksyms.c
> +++ b/tools/testing/selftests/bpf/prog_tests/ksyms.c
> @@ -42,6 +42,7 @@ void test_ksyms(void)
>         ASSERT_EQ(data->out__bpf_link_fops, link_fops_addr, "bpf_link_fops");
>         ASSERT_EQ(data->out__bpf_link_fops1, 0, "bpf_link_fops1");
>         ASSERT_EQ(data->out__btf_size, btf_size, "btf_size");
> +       ASSERT_NEQ(data->out__fake_dst_ops, 0, "fake_dst_ops");
>         ASSERT_EQ(data->out__per_cpu_start, per_cpu_start_addr, "__per_cpu_start");
>
>  cleanup:
> diff --git a/tools/testing/selftests/bpf/progs/test_ksyms.c b/tools/testing/selftests/bpf/progs/test_ksyms.c
> index 6c9cbb5a3bdf..fe91eef54b66 100644
> --- a/tools/testing/selftests/bpf/progs/test_ksyms.c
> +++ b/tools/testing/selftests/bpf/progs/test_ksyms.c
> @@ -9,11 +9,13 @@ __u64 out__bpf_link_fops = -1;
>  __u64 out__bpf_link_fops1 = -1;
>  __u64 out__btf_size = -1;
>  __u64 out__per_cpu_start = -1;
> +__u64 out__fake_dst_ops = -1;
>
>  extern const void bpf_link_fops __ksym;
>  extern const void __start_BTF __ksym;
>  extern const void __stop_BTF __ksym;
>  extern const void __per_cpu_start __ksym;
> +extern const void fake_dst_ops __ksym;
>  /* non-existing symbol, weak, default to zero */
>  extern const void bpf_link_fops1 __ksym __weak;
>
> @@ -23,6 +25,7 @@ int handler(const void *ctx)
>         out__bpf_link_fops = (__u64)&bpf_link_fops;
>         out__btf_size = (__u64)(&__stop_BTF - &__start_BTF);
>         out__per_cpu_start = (__u64)&__per_cpu_start;
> +       out__fake_dst_ops = (__u64)&fake_dst_ops;
>
>         out__bpf_link_fops1 = (__u64)&bpf_link_fops1;

This patch fixes the issue in libbpf by ignoring the '.llvm.<hash>' suffix when comparing a bpf prog ksym against the symbols in /proc/kallsyms.
Currently, only static variables in /proc/kallsyms are checked for the '.llvm.<hash>' suffix, since in bpf programs function ksyms with a '.llvm.<hash>' suffix are most likely kfuncs and unlikely to be cross-file inlined. Note that the kernel currently does not support gcc builds with LTO. [1] https://lore.kernel.org/bpf/20240302165017.1627295-1-yonghong.song@linux.dev/ [2] https://github.com/llvm/llvm-project/blob/release/18.x/llvm/include/llvm/IR/ModuleSummaryIndex.h#L1714-L1719 Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041458.1198161-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
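Conceptually, the comparison in libbpf reduces to ignoring the '.llvm.<hash>' part of a kallsyms name, something like the following sketch (an illustrative helper, not the actual libbpf code):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch: match a kallsyms symbol against a bpf prog ksym, treating a
 * clang-LTO '.llvm.<hash>' suffix on the kallsyms side as insignificant.
 */
static bool sym_matches_ksym(const char *kallsyms_name, const char *ksym_name)
{
	const char *suffix = strstr(kallsyms_name, ".llvm.");
	size_t len = suffix ? (size_t)(suffix - kallsyms_name)
			    : strlen(kallsyms_name);

	return strncmp(kallsyms_name, ksym_name, len) == 0 &&
	       ksym_name[len] == '\0';
}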
2024-03-28libbpf: Mark libbpf_kallsyms_parse static functionYonghong Song
Currently the libbpf_kallsyms_parse() function is declared as a global function, but it is not actually part of the API and there are no external users in bpftool/bpf-selftests. So let us mark the function as static. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041453.1197949-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Replace CHECK with ASSERT macros for ksyms testYonghong Song
Replace CHECK with ASSERT macros for the ksyms tests. This test failed earlier with a clang LTO kernel, but the issue is gone with the latest code base. Replacing CHECK with ASSERT still improves the code, as ASSERT macros are preferred in selftests. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240326041448.1197812-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: Test loading bpf-tcp-cc prog calling the kernel tcp-cc kfuncsMartin KaFai Lau
This patch adds a test to ensure all static tcp-cc kfuncs are visible to struct_ops bpf programs. This is checked by successfully loading struct_ops programs that call these tcp-cc kfuncs. The test needs CONFIG_TCP_CONG_DCTCP and CONFIG_TCP_CONG_BBR to be enabled. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20240322191433.4133280-2-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28bpf: Remove CONFIG_X86 and CONFIG_DYNAMIC_FTRACE guard from the tcp-cc kfuncsMartin KaFai Lau
The commit 7aae231ac93b ("bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE") added a CONFIG_DYNAMIC_FTRACE guard because pahole was only generating btf for ftrace-able functions. The ftrace filter has since been removed from pahole, so the CONFIG_DYNAMIC_FTRACE guard can be removed. The commit 569c484f9995 ("bpf: Limit static tcp-cc functions in the .BTF_ids list to x86") added a CONFIG_X86 guard because the build failed on the powerpc arch, which prepends a "." to local static functions, so "cubictcp_init" becomes ".cubictcp_init". "__bpf_kfunc" has been added to kfuncs since then and it uses the __unused compiler attribute. There is an existing "__bpf_kfunc static u32 bpf_kfunc_call_test_static_unused_arg(u32 arg, u32 unused)" test in bpf_testmod.c to cover the static kfunc case. Cross compiling on ppc64 with CONFIG_DYNAMIC_FTRACE disabled:

> readelf -s vmlinux | grep cubictcp_
56938: c00000000144fd00   184 FUNC  LOCAL DEFAULT  2 cubictcp_cwnd_event  [<localentry>: 8]
56939: c00000000144fdb8   200 FUNC  LOCAL DEFAULT  2 cubictcp_recalc_[...]  [<localentry>: 8]
56940: c00000000144fe80   296 FUNC  LOCAL DEFAULT  2 cubictcp_init  [<localentry>: 8]
56941: c00000000144ffa8   228 FUNC  LOCAL DEFAULT  2 cubictcp_state  [<localentry>: 8]
56942: c00000000145008c  1908 FUNC  LOCAL DEFAULT  2 cubictcp_cong_avoid  [<localentry>: 8]
56943: c000000001450800  1644 FUNC  LOCAL DEFAULT  2 cubictcp_acked  [<localentry>: 8]

> bpftool btf dump file vmlinux | grep cubictcp_
[51540] FUNC 'cubictcp_acked' type_id=38137 linkage=static
[51541] FUNC 'cubictcp_cong_avoid' type_id=38122 linkage=static
[51543] FUNC 'cubictcp_cwnd_event' type_id=51542 linkage=static
[51544] FUNC 'cubictcp_init' type_id=9186 linkage=static
[51545] FUNC 'cubictcp_recalc_ssthresh' type_id=35021 linkage=static
[51547] FUNC 'cubictcp_state' type_id=38141 linkage=static

This patch removes both config guards. Cc: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20240322191433.4133280-1-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28bpf: Mitigate latency spikes associated with freeing non-preallocated htabYafang Shao
Following the recent upgrade of one of our BPF programs, we encountered significant latency spikes affecting other applications running on the same host. After thorough investigation, we identified that these spikes were primarily caused by the prolonged duration required to free a non-preallocated htab with approximately 2 million keys. Notably, our kernel configuration lacks the presence of CONFIG_PREEMPT. In scenarios where kernel execution extends excessively, other threads might be starved of CPU time, resulting in latency issues across the system. To mitigate this, we've adopted a proactive approach by incorporating cond_resched() calls within the kernel code. This ensures that during lengthy kernel operations, the scheduler is invoked periodically to provide opportunities for other threads to execute. Signed-off-by: Yafang Shao <laoar.shao@gmail.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240327032022.78391-1-laoar.shao@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
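The mitigation is conceptually just periodic rescheduling points inside the bucket-walking free loop, along these lines (a sketch, not the literal diff):

/* Sketch: while tearing down a large non-preallocated htab, yield the
 * CPU periodically so other tasks are not starved on !CONFIG_PREEMPT.
 */
for (i = 0; i < htab->n_buckets; i++) {
	/* free every element hanging off bucket i ... */
	cond_resched();
}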
2024-03-28Merge branch 'bench-fast-in-kernel-triggering-benchmarks'Alexei Starovoitov
Andrii Nakryiko says: ==================== bench: fast in-kernel triggering benchmarks Remove "legacy" triggering benchmarks which rely on syscalls (and thus syscall overhead is a noticeable part of the benchmark, unfortunately). Replace them with faster versions that rely on triggering BPF programs in-kernel through another simple "driver" BPF program. See patch #2 with comparison results. raw_tp/tp/fmodret benchmarks required adding a simple kfunc in the kernel to be able to trigger a simple tracepoint from a BPF program (plus it is also allowed to be replaced by fmod_ret programs). This limits raw_tp/tp/fmodret benchmarks to new kernels only, but it keeps the bench tool itself very portable and most of the other benchmarks will still work on a wide variety of kernels without the need to worry about building and deploying a custom kernel module. See patches #5 and #6 for details. v1->v2: - move new TP closer to BPF test run code; - rename/move kfunc and register it for fmod_rets (Alexei); - limit --trig-batch-iters param to [1, 1000] (Alexei). ==================== Link: https://lore.kernel.org/r/20240326162151.3981687-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: add batched tp/raw_tp/fmodret testsAndrii Nakryiko
Utilize bpf_modify_return_test_tp() kfunc to have a fast way to trigger tp/raw_tp/fmodret programs from another BPF program, which gives us comparable batched benchmarks to (batched) kprobe/fentry benchmarks. We don't switch kprobe/fentry batched benchmarks to this kfunc to make bench tool usable on older kernels as well. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240326162151.3981687-7-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28bpf: add bpf_modify_return_test_tp() kfunc triggering tracepointAndrii Nakryiko
Add a simple bpf_modify_return_test_tp() kfunc, available to all program types, that is useful for various testing and benchmarking scenarios, as it allows triggering most tracing BPF program types from the BPF side, enabling complex testing and benchmarking scenarios. It can also be attached to by fmod_ret programs, making it a good and simple way to trigger an fmod_ret program under test/benchmark. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240326162151.3981687-6-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
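A plausible shape for such a kfunc (a sketch; the tracepoint name here is an assumption, see the patch for the real one):

/* Sketch: a kfunc that fires a test tracepoint. tp/raw_tp programs can
 * attach to the tracepoint, and fmod_ret programs can attach to the
 * kfunc itself, so all of them can be triggered from BPF without a
 * per-trigger syscall.
 */
__bpf_kfunc int bpf_modify_return_test_tp(int nonce)
{
	trace_bpf_trigger_tp(nonce);	/* assumed tracepoint name */

	return nonce;
}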
2024-03-28selftests/bpf: lazy-load trigger bench BPF programsAndrii Nakryiko
Instead of front-loading all possible benchmarking BPF programs for trigger benchmarks, explicitly specify which BPF program is used by a specific benchmark and load only that one. This makes it more flexible to support older kernels, where some program types might not be possible to load (e.g., those that rely on a newly added kfunc). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240326162151.3981687-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: remove syscall-driven benchs, keep syscall-count onlyAndrii Nakryiko
Remove "legacy" benchmarks triggered by syscalls in favor of newly added in-kernel/batched benchmarks. Drop -batched suffix now as well. Next patch will restore "feature parity" by adding back tp/raw_tp/fmodret benchmarks based on in-kernel kfunc approach. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240326162151.3981687-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28selftests/bpf: add batched, mostly in-kernel BPF triggering benchmarksAndrii Nakryiko
Existing kprobe/fentry triggering benchmarks have a 1-to-1 mapping between one syscall execution and one BPF program run. While we use a fast get_pgid() syscall, syscall overhead can still be non-trivial. This patch adds a kprobe/fentry set of benchmarks significantly amortizing the cost of the syscall vs the actual BPF triggering overhead. We do this by employing the BPF_PROG_TEST_RUN command to trigger a "driver" raw_tp program which does a tight parameterized loop calling a cheap BPF helper (bpf_get_numa_node_id()), to which kprobe/fentry programs are attached for benchmarking. This way 1 bpf() syscall causes N executions of the BPF program being benchmarked. N defaults to 100, but can be adjusted with the --trig-batch-iters CLI argument. For comparison we also implement a new baseline program that instead of triggering another BPF program just does N atomic per-CPU counter increments, establishing the limit for all other types of program within this batched benchmarking setup. Taking the final set of benchmarks added in this patch set (including tp/raw_tp/fmodret, added in a later patch), and keeping for now the "legacy" syscall-driven benchmarks, we can capture all triggering benchmarks in one place for comparison, before we remove the legacy ones (and rename xxx-batched into just xxx).

$ benchs/run_bench_trigger.sh
usermode-count : 79.500 ± 0.024M/s
kernel-count : 49.949 ± 0.081M/s
syscall-count : 9.009 ± 0.007M/s
fentry-batch : 31.002 ± 0.015M/s
fexit-batch : 20.372 ± 0.028M/s
fmodret-batch : 21.651 ± 0.659M/s
rawtp-batch : 36.775 ± 0.264M/s
tp-batch : 19.411 ± 0.248M/s
kprobe-batch : 12.949 ± 0.220M/s
kprobe-multi-batch : 15.400 ± 0.007M/s
kretprobe-batch : 5.559 ± 0.011M/s
kretprobe-multi-batch: 5.861 ± 0.003M/s
fentry-legacy : 8.329 ± 0.004M/s
fexit-legacy : 6.239 ± 0.003M/s
fmodret-legacy : 6.595 ± 0.001M/s
rawtp-legacy : 8.305 ± 0.004M/s
tp-legacy : 6.382 ± 0.001M/s
kprobe-legacy : 5.528 ± 0.003M/s
kprobe-multi-legacy : 5.864 ± 0.022M/s
kretprobe-legacy : 3.081 ± 0.001M/s
kretprobe-multi-legacy: 3.193 ± 0.001M/s

Note how xxx-batch variants are measured with significantly higher throughput, even though it's exactly the same in-kernel overhead. As such, results can be compared only between benchmarks of the same kind (syscall vs batched):

fentry-legacy : 8.329 ± 0.004M/s
fentry-batch : 31.002 ± 0.015M/s
kprobe-multi-legacy : 5.864 ± 0.022M/s
kprobe-multi-batch : 15.400 ± 0.007M/s

Note also that syscall-count sets a theoretical limit for syscall-triggered benchmarks, while kernel-count sets similar limits for batch variants. usermode-count is a happy and unachievable case of user space counting without doing any syscalls, and is mostly a measure of CPU speed for such a trivial benchmark. As was mentioned, tp/raw_tp/fmodret require a kernel-side kfunc to produce a similar benchmark, which we address in a separate patch. Note that run_bench_trigger.sh allows overriding the list of benchmarks to run, which is very useful for performance work. Cc: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240326162151.3981687-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
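The "driver" program is conceptually a raw_tp program run via BPF_PROG_TEST_RUN that spins on a cheap helper, roughly like this sketch (assumed program and variable names, not the literal trigger_bench source):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Set from the --trig-batch-iters CLI argument by the bench tool. */
const volatile int batch_iters = 100;

SEC("raw_tp")
int trigger_driver(void *ctx)
{
	int i;

	/* The benchmarked kprobe/fentry program is attached to
	 * bpf_get_numa_node_id(), so one test-run syscall yields
	 * batch_iters executions of it.
	 */
	for (i = 0; i < batch_iters; i++)
		(void)bpf_get_numa_node_id();

	return 0;
}

char _license[] SEC("license") = "GPL";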
2024-03-28selftests/bpf: rename and clean up userspace-triggered benchmarksAndrii Nakryiko
Rename uprobe-base to more precise usermode-count (it will match other baseline-like benchmarks, kernel-count and syscall-count). Also use BENCH_TRIG_USERMODE() macro to define all usermode-based triggering benchmarks, which include usermode-count and uprobe/uretprobe benchmarks. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240326162151.3981687-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28bpf,arena: Use helper sizeof_field in struct accessorsHaiyue Wang
Use the well defined helper sizeof_field() to calculate the size of a struct member, instead of doing custom calculations. Signed-off-by: Haiyue Wang <haiyue.wang@intel.com> Link: https://lore.kernel.org/r/20240327065334.8140-1-haiyue.wang@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
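For reference, sizeof_field() is the standard kernel helper from include/linux/stddef.h; the before/after below is illustrative of the kind of change, not the literal arena diff:

/* include/linux/stddef.h */
#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))

/* before: open-coded member size via a casted NULL pointer (illustrative) */
size_t sz = sizeof(((struct bpf_arena *)0)->user_vm_start);

/* after: the intent is explicit and harder to get wrong */
size_t sz2 = sizeof_field(struct bpf_arena, user_vm_start);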
2024-03-28bpf: improve error message for unsupported helperMykyta Yatsenko
BPF verifier emits "unknown func" message when given BPF program type does not support BPF helper. This message may be confusing for users, as important context that helper is unknown only to current program type is not provided. This patch changes message to "program of this type cannot use helper " and aligns dependent code in libbpf and tests. Any suggestions on improving/changing this message are welcome. Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <qmo@kernel.org> Link: https://lore.kernel.org/r/20240325152210.377548-1-yatsenko@meta.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28bpf: Add a check for struct bpf_fib_lookup sizeAnton Protopopov
The struct bpf_fib_lookup should not grow outside of its 64 bytes. Add a static assert to validate this. Suggested-by: David Ahern <dsahern@kernel.org> Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240326101742.17421-4-aspsk@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
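The check itself is presumably a one-line build-time assertion, e.g. (a sketch):

/* Sketch: fail the build if the UAPI struct ever grows past 64 bytes. */
_Static_assert(sizeof(struct bpf_fib_lookup) == 64,
	       "struct bpf_fib_lookup size check");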
2024-03-28selftests/bpf: Add BPF_FIB_LOOKUP_MARK testsAnton Protopopov
This patch extends the fib_lookup test suite by adding a few test cases for each IP family to test the new BPF_FIB_LOOKUP_MARK flag to the bpf_fib_lookup: * Test destination IP address selection with and without a mark and/or the BPF_FIB_LOOKUP_MARK flag set Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240326101742.17421-3-aspsk@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-03-28bpf: Add support for passing mark with bpf_fib_lookupAnton Protopopov
Extend the bpf_fib_lookup() helper by making it utilize the mark if the BPF_FIB_LOOKUP_MARK flag is set. In order to pass the mark, four bytes of struct bpf_fib_lookup are used, shared with the output-only smac/dmac fields. Signed-off-by: Anton Protopopov <aspsk@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: David Ahern <dsahern@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240326101742.17421-2-aspsk@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
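Conceptually, the UAPI change overlays the input mark on the output-only smac/dmac bytes, along these lines (a sketch of the layout, not necessarily the exact uapi diff):

/* Sketch: 'mark' (input, honored when BPF_FIB_LOOKUP_MARK is set in
 * flags) reuses four of the bytes that are otherwise only written as
 * output smac/dmac, so struct bpf_fib_lookup stays at 64 bytes.
 */
union {
	struct {
		__u8	smac[6];	/* ETH_ALEN, output */
		__u8	dmac[6];	/* ETH_ALEN, output */
	};
	__u32	mark;			/* input: policy routing mark */
};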
2024-03-28Merge branch 'ravb-support-describing-the-mdio-bus'Jakub Kicinski
Niklas Söderlund says: ==================== ravb: Support describing the MDIO bus This series adds support to the binding and driver of the Renesas Ethernet AVB to describe the MDIO bus. Currently the driver uses the OF node of the device itself when registering the MDIO bus. This forces any MDIO bus properties the MDIO core should react on to be set on the device OF node. This is confusing and none of the MDIO bus properties are described in the Ethernet AVB bindings. Patch 1/2 extends the bindings with an optional mdio child-node of the device that can be used to contain the MDIO bus settings, while patch 2/2 changes the driver to use this node (if present) when registering the MDIO bus. If the new optional mdio child-node is not present, the driver falls back to the old behavior and uses the device OF node like before. This change is fully backward compatible with existing usage of the bindings. ==================== Link: https://lore.kernel.org/r/20240325153451.2366083-1-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28ravb: Add support for an optional MDIO nodeNiklas Söderlund
The driver used the DT node of the device itself when registering the MDIO bus. While this works, it creates a problem: it forces any MDIO bus properties to also be set on the device's DT node. This mixes the properties of two distinctly different things and is confusing. This change adds support for an optional mdio node to be defined as a child of the device DT node. The child node can then be used to describe MDIO bus properties that the MDIO core can act on when registering the bus. If no mdio child node is found, the driver falls back to the old behavior and registers the MDIO bus using the device DT node. This change is backward compatible with old bindings in use. Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20240325153451.2366083-3-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28dt-bindings: net: renesas,etheravb: Add optional MDIO bus nodeNiklas Söderlund
The Renesas Ethernet AVB bindings do not allow the MDIO bus to be described. This has not been needed as only a single PHY is supported and no MDIO bus properties have been needed. Add an optional mdio node to the binding which allows the MDIO bus to be described and allow bus properties to be set. Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20240325153451.2366083-2-niklas.soderlund+renesas@ragnatech.se Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28Merge branch 'doc-netlink-specs-add-vlan-support'Jakub Kicinski
Hangbin Liu says: ==================== doc/netlink/specs: Add vlan support Add vlan support in rt_link spec. ==================== Link: https://lore.kernel.org/r/20240327123130.1322921-1-liuhangbin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28doc/netlink/specs: Add vlan attr in rt_link specHangbin Liu
With command:

# ./tools/net/ynl/cli.py \
    --spec Documentation/netlink/specs/rt_link.yaml \
    --do getlink --json '{"ifname": "eno1.2"}' --output-json | \
    jq -C '.linkinfo'

Before:

Exception: No message format for 'vlan' in sub-message spec 'linkinfo-data-msg'

After:

{
  "kind": "vlan",
  "data": {
    "protocol": "8021q",
    "id": 2,
    "flag": {
      "flags": [ "reorder-hdr" ],
      "mask": "0xffffffff"
    },
    "egress-qos": {
      "mapping": [
        { "from": 1, "to": 2 },
        { "from": 4, "to": 4 }
      ]
    }
  }
}

Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://lore.kernel.org/r/20240327123130.1322921-3-liuhangbin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28ynl: support hex display_hint for integerHangbin Liu
Sometimes it is convenient to read an integer as hex, such as for mask values. Suggested-by: Donald Hunter <donald.hunter@gmail.com> Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Link: https://lore.kernel.org/r/20240327123130.1322921-2-liuhangbin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28Merge branch 'selftests-fixes-for-kernel-ci'Jakub Kicinski
Petr Machata says: ==================== selftests: Fixes for kernel CI As discussed on the bi-weekly call on Jan 30, and in mailing around kernel CI effort, some changes are desirable in the suite of forwarding selftests the better to work with the CI tooling. Namely: - The forwarding selftests use a configuration file where names of interfaces are defined and various variables can be overridden. There is also forwarding.config.sample that users can use as a template to refer to when creating the config file. What happens a fair bit is that users either do not know about this at all, or simply forget, and are confused by cryptic failures about interfaces that cannot be created. In patches #1 - #3 have lib.sh just be the single source of truth with regards to which variables exist. That includes the topology variables which were previously only in the sample file, and any "tweak variables", such as what tools to use, sleep times, etc. forwarding.config.sample then becomes just a placeholder with a couple examples. Unless specific HW should be exercised, or specific tools used, the defaults are usually just fine. - Several net/forwarding/ selftests (and one net/ one) cannot be run on veth pairs, they need an actual HW interface to run on. They are generic in the sense that any capable HW should pass them, which is why they have been put to net/forwarding/ as opposed to drivers/net/, but they do not generalize to veth. The fact that these tests are in net/forwarding/, but still complaining when run, is confusing. In patches #4 - #6 move these tests to a new directory drivers/net/hw. - The following patches extend the codebase to handle well test results other than pass and fail. Patch #7 is preparatory. It converts several log_test_skip to XFAIL, so that tests do not spuriously end up returning non-0 when they are not supposed to. In patches #8 - #10, introduce some missing ksft constants, then support having those constants in RET, and then finally in EXIT_STATUS. - The traffic scheduler tests generate a large amount of network traffic to test the behavior of the scheduler. This demands a relatively high-performance computer. On slow machines, such as with a debugging kernel, the test would spuriously fail. It can still be useful to "go through the motions" though, to possibly catch bugs in setup of the scheduler graph and passing packets around. Thus we still want to run the tests, just with lowered demands. To that end, in patches #11 - #12, introduce an environment variable KSFT_MACHINE_SLOW, with obvious meaning. Tests can then make checks more lenient, such as mark failures as XFAIL. A helper, xfail_on_slow, is provided to mark performance-sensitive parts of the selftest. - In patch #13, use a similar mechanism to mark a NH group stats selftest to XFAIL HW stats tests when run on VETH pairs. - All these changes complicate the hitherto straightforward logging and checking logic, so in patch #14, add a selftest that checks this functionality in lib.sh. v1 (vs. an RFC circulated through linux-kselftest): - Patch #9: - Clarify intended usage by s/set_ret/ret_set_ksft_status/, s/nret/ksft_status/ ==================== Link: https://lore.kernel.org/r/cover.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: forwarding: Add a test for testing lib.sh functionalityPetr Machata
Rerunning various scenarios to make sure lib.sh changes do not impact the observable behavior is no fun. Add a selftest at least for the bare basics -- the mechanics of setting RET, retmsg, and EXIT_STATUS. Since the selftest itself uses lib.sh, it would be possible to break lib.sh in such a way that invalidates the result of the selftest. Since the metatest only uses the bare basics (just pass/fail), hopefully such fundamental breakages would be noticed. Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/6d25cedbf2d4b83614944809a34fe023fbe8db38.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: forwarding: router_mpath_nh_lib: Don't skip, xfail on vethPetr Machata
When the NH group stats tests are currently run on a veth topology, the HW-stats leg of each test is SKIP'ped. But kernel networking CI interprets skips as a sign that tooling is missing, and prompts maintainer investigation. Lack of capability to pass a test should be expressed as XFAIL. Selftests that require HW should normally be put in drivers/net/hw, but doing so for the NH counter selftests would just lead to a lot of duplication. So instead, introduce a helper, xfail_on_veth(), which can be used to mark selftests that should XFAIL instead of FAILing when run on a veth topology. On a non-veth topology, they don't do anything. Use the helper in the HW-stats part of the router_mpath_nh_lib selftest. Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/15f0ab9637aa0497f164ec30e83c1c8f53d53719.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: forwarding: Mark performance-sensitive testsPetr Machata
When run on a slow machine, the scheduler traffic tests can be expected to fail, and should be reported as XFAIL in that case. Therefore run these tests through the perf_sensitive wrapper. Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/9a357f8cf34f5ececac08d43a3eb023008996035.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: forwarding: Support for performance sensitive testsPetr Machata
Several tests in the suite use large amounts of traffic to e.g. cause congestion and evaluate RED or shaper performance. These tests will not run well on a slow machine, be it one with heavy debug kernel, or a VM, or e.g. a single-board computer. Allow users to specify an environment variable, KSFT_MACHINE_SLOW=yes, to indicate that the tests are being run on one such machine. Performance sensitive tests can then use a new helper, xfail_on_slow(), to mark parts of the test that are sensitive to low-performance machines. The helper can be used to just mark the whole suite, like so: xfail_on_slow tests_run ... or, on the other side of the granularity spectrum, to override individual checks: xfail_on_slow check_err $? "Expected much, got little." Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/99a376a2d2ffdaeee7752b1910cb0c3ea5d80fbe.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: forwarding: Convert log_test() to recognize RET valuesPetr Machata
In a previous patch, the interpretation of the RET value was changed to mean the kselftest framework constant with the test outcome: $ksft_pass, $ksft_xfail, etc. Update log_test() to recognize the various possible RET values. Then have EXIT_STATUS track the RET value of the current test. This differs subtly from the way RET tracks the value: while for RET we want to recognize XFAIL as a separate status, for purposes of exit code, we want to conflate XFAIL and PASS, because they both communicate non-failure. Thus add a new helper, ksft_exit_status_merge(). With this, log_test_skip() and log_test_xfail() can be reexpressed as thin wrappers around log_test(). Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/e5f807cb5476ab795fd14ac74da53a731a9fc432.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: forwarding: Have RET track kselftest framework constantsPetr Machata
The variable RET keeps track of whether the test under execution has so far failed or not. Currently it works in binary fashion: zero means everything is fine, non-zero means something failed. log_test() then uses the value to give a human-readable message. In order to allow log_test() to report skips and xfails, the semantics of RET need to be more fine-grained. Therefore have the RET value be one of the kselftest framework constants: $ksft_fail, $ksft_xfail, etc. The current logic in check_err() is such that the first non-zero value of RET trumps all those that follow. But that is not right when RET has more fine-grained value semantics. Different outcomes have different weights. The results of PASS and XFAIL are mostly the same: they both communicate a test that did not go wrong. SKIP communicates lack of tooling, which the user should go and try to fix, and as such should not be overridden by the passes. So far, the higher-numbered statuses can be considered weightier. But FAIL should be the weightiest. Add a helper, ksft_status_merge(), which merges two statuses in a way that respects the above conditions. Express it in a generic manner, because exit status merge is subtly different, and we want to reuse the same logic. Use the new helper when setting RET in check_err(). Re-express check_fail() in terms of check_err() to avoid duplication. Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/7dfff51cc925c7a3ac879b9050a0d6a327c8d21f.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-03-28selftests: lib: Define more kselftest exit codesPetr Machata
The following patches will operate with more exit codes besides ksft_skip. Add them here. Additionally, move a duplicated skip exit code definition from forwarding/tc_tunnel_key.sh. Keep a similar duplicate in forwarding/devlink_lib.sh, because even though lib.sh will have been sourced in all cases where devlink_lib is, the inclusion is not visible in the file itself, and relying on it would be confusing. Cc: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Link: https://lore.kernel.org/r/545a03046c7aca0628a51a389a9b81949ab288ce.1711464583.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>