path: root/tools/lib/bpf/bpf.c
Age | Commit message | Author
2020-08-06  tools/bpf: Support new uapi for map element bpf iterator  (Yonghong Song)
The previous commit adjusted the kernel uapi for the map element bpf iterator. This patch adjusts the libbpf API to match the uapi change. bpftool and the bpf_iter selftests are also changed accordingly. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200805055058.1457623-1-yhs@fb.com
2020-08-01  libbpf: Add bpf_link detach APIs  (Andrii Nakryiko)
Add a low-level bpf_link_detach() API, as well as a higher-level bpf_link__detach() counterpart. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200731182830.286260-3-andriin@fb.com
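A rough usage sketch of the two variants (hypothetical FDs and program, error handling abbreviated):

    /* low-level: operates on a raw link FD */
    int link_fd = bpf_link_get_fd_by_id(link_id);
    if (bpf_link_detach(link_fd))
        fprintf(stderr, "detach failed: %s\n", strerror(errno));

    /* high-level: operates on a struct bpf_link */
    struct bpf_link *link = bpf_program__attach(prog);
    if (bpf_link__detach(link))
        fprintf(stderr, "detach failed\n");
    bpf_link__destroy(link); /* still needed to free and close the link */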
2020-07-25  tools/libbpf: Add support for bpf map element iterator  (Yonghong Song)
Add map_fd to bpf_iter_attach_opts and flags to bpf_link_create_opts. Later on, bpftool or selftest will be able to create a bpf map element iterator by passing map_fd to the kernel during link creation time. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723184117.590673-1-yhs@fb.com
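A minimal sketch of the new option (hypothetical prog and map_fd; note that the 2020-08-06 entry above later reworks this interface for the new uapi):

    DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts,
        .map_fd = map_fd); /* the field added by this patch */
    struct bpf_link *link = bpf_program__attach_iter(prog, &opts);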
2020-05-09  tools/libbpf: Add bpf_iter support  (Yonghong Song)
Two new libbpf APIs are added to support bpf_iter:
 - bpf_program__attach_iter: given a bpf program and additional parameters (none for now), returns a bpf_link.
 - bpf_iter_create: a syscall-level API to create a bpf iterator.
The macro BPF_SEQ_PRINTF is also introduced. The format looks like: BPF_SEQ_PRINTF(seq, "task id %d\n", pid); This macro gives bpf program writers a nicer bpf_seq_printf syntax, similar to the kernel one. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200509175917.2476936-1-yhs@fb.com
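Putting the two APIs together, a sketch of attaching an iterator program and reading its output (error handling elided):

    struct bpf_link *link = bpf_program__attach_iter(prog, NULL);
    int iter_fd = bpf_iter_create(bpf_link__fd(link));
    char buf[4096];
    ssize_t n;
    while ((n = read(iter_fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout); /* BPF_SEQ_PRINTF output lands here */
    close(iter_fd);
    bpf_link__destroy(link);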
2020-05-01  libbpf: Add support for command BPF_ENABLE_STATS  (Song Liu)
bpf_enable_stats() is added to enable given stats. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200430071506.1408910-3-songliubraving@fb.com
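A usage sketch, assuming a kernel with BPF_ENABLE_STATS support:

    /* stats stay enabled for as long as the returned FD is held open */
    int stats_fd = bpf_enable_stats(BPF_STATS_RUN_TIME);
    if (stats_fd < 0)
        fprintf(stderr, "BPF_ENABLE_STATS failed: %s\n", strerror(errno));
    /* ... run workloads, read run_time_ns/run_cnt from prog info ... */
    close(stats_fd); /* disables the stats again */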
2020-04-28  libbpf: Add low-level APIs for new bpf_link commands  (Andrii Nakryiko)
Add low-level API calls for bpf_link_get_next_id() and bpf_link_get_fd_by_id(). Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200429001614.1544-6-andriin@fb.com
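A sketch of the resulting iteration pattern, mirroring the existing prog/map/btf get_next_id variants:

    __u32 id = 0;
    while (!bpf_link_get_next_id(id, &id)) {
        int fd = bpf_link_get_fd_by_id(id);
        if (fd < 0)
            continue; /* link went away, or insufficient permissions */
        /* ... inspect the link via bpf_obj_get_info_by_fd() ... */
        close(fd);
    }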
2020-03-30  libbpf: Add support for bpf_link-based cgroup attachment  (Andrii Nakryiko)
Add bpf_program__attach_cgroup(), which uses BPF_LINK_CREATE subcommand to create an FD-based kernel bpf_link. Also add low-level bpf_link_create() API. If expected_attach_type is not specified explicitly with bpf_program__set_expected_attach_type(), libbpf will try to determine proper attach type from BPF program's section definition. Also add support for bpf_link's underlying BPF program replacement: - unconditional through high-level bpf_link__update_program() API; - cmpxchg-like with specifying expected current BPF program through low-level bpf_link_update() API. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200330030001.2312810-4-andriin@fb.com
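A sketch of the high-level flow (cgroup path and program names hypothetical):

    int cg_fd = open("/sys/fs/cgroup/unified/app", O_RDONLY);
    struct bpf_link *link = bpf_program__attach_cgroup(prog, cg_fd);
    /* ... later, unconditionally swap in another program on the same link ... */
    int err = bpf_link__update_program(link, new_prog);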
2020-03-30  tools/libbpf: Add support for BPF_PROG_TYPE_LSM  (KP Singh)
Since BPF_PROG_TYPE_LSM uses the same attaching mechanism as BPF_PROG_TYPE_TRACING, the common logic is refactored into a static function bpf_program__attach_btf_id. A new API call bpf_program__attach_lsm is still added to avoid userspace conflicts if this ever changes in the future. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Brendan Jackman <jackmanb@google.com> Reviewed-by: Florent Revest <revest@google.com> Reviewed-by: James Morris <jamorris@linux.microsoft.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200329004356.27286-7-kpsingh@chromium.org
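A sketch of what an LSM program and its attachment could look like (hook and program names illustrative, not from this patch):

    /* BPF side */
    SEC("lsm/bprm_check_security")
    int BPF_PROG(check_exec, struct linux_binprm *bprm)
    {
        return 0; /* 0 == allow */
    }

    /* userspace side */
    struct bpf_link *link = bpf_program__attach_lsm(prog);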
2020-01-22  libbpf: Add support for program extensions  (Alexei Starovoitov)
Add minimal support for program extensions. struct bpf_object_open_opts needs to carry attach_prog_fd = target_prog_fd, and the BPF program extension needs a section definition in its .c file like SEC("freplace/func_to_be_replaced"). libbpf will search for "func_to_be_replaced" in the target_prog_fd's BTF and will pass it in attach_btf_id to the kernel. This approach works for tests, but more complex use cases may need to request the function name (and the attach_btf_id the kernel sees) more dynamically. Such an API will be added in future patches. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20200121005348.2769920-3-ast@kernel.org
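A sketch of opening an extension object against a loaded target program (file and FD names hypothetical):

    DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
        .attach_prog_fd = target_prog_fd);
    /* ext.o contains a program with SEC("freplace/func_to_be_replaced") */
    struct bpf_object *obj = bpf_object__open_file("ext.o", &opts);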
2020-01-16  libbpf: Fix unneeded extra initialization in bpf_map_batch_common  (Brian Vazquez)
bpf_attr doesn't need to be declared with '= {}', as memset is used in the code. Fixes: 2ab3d86ea1859 ("libbpf: Add libbpf support to batch ops") Reported-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Brian Vazquez <brianvv@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200116045918.75597-1-brianvv@google.com
2020-01-15  libbpf: Add libbpf support to batch ops  (Yonghong Song)
Added four libbpf API functions to support map batch operations:
 . int bpf_map_delete_batch( ... )
 . int bpf_map_lookup_batch( ... )
 . int bpf_map_lookup_and_delete_batch( ... )
 . int bpf_map_update_batch( ... )
Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200115184308.162644-8-brianvv@google.com
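A lookup sketch under assumed usage (4-byte keys/values, batches of 100; error handling elided):

    __u32 out_batch, count = 100;
    __u32 keys[100], values[100];
    DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts);
    int err = bpf_map_lookup_batch(map_fd, NULL /* start from the beginning */,
                                   &out_batch, keys, values, &count, &opts);
    /* on return, count holds the number of elements fetched; pass &out_batch
     * back as in_batch to continue from where this batch stopped */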
2020-01-10  libbpf: Poison kernel-only integer types  (Andrii Nakryiko)
It's been a recurring issue with types like u32 slipping into libbpf source code accidentally. This is not detected during builds inside the kernel source tree, but becomes a compilation error in libbpf's Github repo. Libbpf is supposed to use only __{s,u}{8,16,32,64} typedefs, so poison {s,u}{8,16,32,64} explicitly in every .c file. Doing that in a bit more centralized way, e.g., inside libbpf_internal.h, breaks selftests, which are using both kernel u32 and libbpf_internal.h. This patch also fixes a new u32 occurrence in libbpf.c, added recently. Fixes: 590a00888250 ("bpf: libbpf: Add STRUCT_OPS support") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20200110181916.271446-1-andriin@fb.com
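The per-file guard looks along these lines (a sketch of the pattern, not necessarily the verbatim diff):

    /* make sure libbpf doesn't use kernel-only integer typedefs */
    #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64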
2020-01-09  bpf: libbpf: Add STRUCT_OPS support  (Martin KaFai Lau)
This patch adds BPF STRUCT_OPS support to libbpf. The only sec_name convention is SEC(".struct_ops") to identify the struct_ops implemented in BPF, e.g. to implement a tcp_congestion_ops:

    SEC(".struct_ops")
    struct tcp_congestion_ops dctcp = {
        .init = (void *)dctcp_init, /* <-- a bpf_prog */
        /* ... some more func ptrs ... */
        .name = "bpf_dctcp",
    };

Each struct_ops is defined as a global variable under SEC(".struct_ops") as above. libbpf creates a map for each variable, and the variable name is the map's name. Multiple struct_ops are supported under SEC(".struct_ops"). In the bpf_object__open phase, libbpf will look for the SEC(".struct_ops") section and find out which btf-type the struct_ops is implementing. Note that the btf-type here refers to a type in the bpf_prog.o's btf. A "struct bpf_map" is added by bpf_object__add_map() as for other maps. It will then collect (through SHT_REL) the locations of the bpf progs that the func ptrs refer to. No btf_vmlinux is needed in the open phase. In the bpf_object__load phase, the map-fields, which depend on the btf_vmlinux, are initialized (in bpf_map__init_kern_struct_ops()). It will also set the prog->type, prog->attach_btf_id, and prog->expected_attach_type. Thus, the prog's properties do not rely on its section name. [ Currently, the bpf_prog's btf-type ==> btf_vmlinux's btf-type matching process is as simple as: member-name match + btf-kind match + size match. If these matching conditions fail, libbpf will reject. The current targeting support is "struct tcp_congestion_ops", most of whose members are function pointers. The member ordering of the bpf_prog's btf-type can be different from the btf_vmlinux's btf-type. ] Then, all obj->maps are created as usual (in bpf_object__create_maps()). Once the maps are created and the prog's properties are all set, libbpf will proceed to load all the progs. bpf_map__attach_struct_ops() is added to register a struct_ops map with a kernel subsystem. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200109003514.3856730-1-kafai@fb.com
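A registration sketch for the dctcp example above (object file name hypothetical, error handling elided):

    struct bpf_object *obj = bpf_object__open_file("bpf_dctcp.o", NULL);
    bpf_object__load(obj);
    struct bpf_map *map = bpf_object__find_map_by_name(obj, "dctcp");
    struct bpf_link *link = bpf_map__attach_struct_ops(map);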
2019-12-19  libbpf: Introduce bpf_prog_attach_xattr  (Andrey Ignatov)
Introduce a new bpf_prog_attach_xattr function that, in addition to program fd, target fd and attach type, accepts an extendable struct bpf_prog_attach_opts. bpf_prog_attach_opts relies on the DECLARE_LIBBPF_OPTS macro to maintain backward and forward compatibility and has the following "optional" attach attributes:
 * the existing attach_flags, since it's not required when attaching in NONE mode. Even though it's quite often used in MULTI and OVERRIDE mode, it seems to be a good idea to reduce the number of arguments to bpf_prog_attach_xattr;
 * the newly introduced attribute of the BPF_PROG_ATTACH command: replace_prog_fd, which is the fd of a previously attached cgroup-bpf program to replace if the BPF_F_REPLACE flag is used.
The new function is named to be consistent with other xattr-functions (bpf_prog_test_run_xattr, bpf_create_map_xattr, bpf_load_program_xattr). The struct bpf_prog_attach_opts is supposed to be used with the DECLARE_LIBBPF_OPTS macro. Signed-off-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/bd6e0732303eb14e4b79cb128268d9e9ad6db208.1576741281.git.rdna@fb.com
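A replacement sketch (FDs hypothetical; option fields as described in this patch):

    DECLARE_LIBBPF_OPTS(bpf_prog_attach_opts, opts,
        .flags = BPF_F_ALLOW_MULTI | BPF_F_REPLACE,
        .replace_prog_fd = old_prog_fd);
    int err = bpf_prog_attach_xattr(new_prog_fd, cgroup_fd,
                                    BPF_CGROUP_INET_INGRESS, &opts);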
2019-11-15  libbpf: Add support for attaching BPF programs to other BPF programs  (Alexei Starovoitov)
Extend libbpf api to pass attach_prog_fd into bpf_object__open. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191114185720.1641606-19-ast@kernel.org
2019-11-07  libbpf: Fix potential overflow issue  (Andrii Nakryiko)
Fix a potential overflow issue found by LGTM analysis, based on Github libbpf source code. Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com
2019-10-31  libbpf: Add support for prog_tracing  (Alexei Starovoitov)
Clean up libbpf's expected_attach_type == attach_btf_id hack and introduce BPF_PROG_TYPE_TRACING. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191030223212.953010-3-ast@kernel.org
2019-10-17  libbpf: Auto-detect btf_id of BTF-based raw_tracepoints  (Alexei Starovoitov)
It's the responsibility of the bpf program author to annotate the program with SEC("tp_btf/name") where "name" is a valid raw tracepoint. libbpf will try to find "name" in vmlinux BTF and error out in case vmlinux BTF is not available or "name" is not found. If "name" is indeed a valid raw tracepoint, then the in-kernel BTF will have a "btf_trace_##name" typedef that points to the function prototype of that raw tracepoint. The BTF description captures the exact arguments the kernel C code is passing into the raw tracepoint. The kernel verifier will check the types while loading the bpf program. libbpf keeps the BTF type id in expected_attach_type, but since the kernel ignores this attribute for tracing programs, it is copied into the attach_btf_id attribute before loading. Later the kernel will use prog->attach_btf_id to select the raw tracepoint during the bpf_raw_tracepoint_open syscall command. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191016032505.2089704-6-ast@kernel.org
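A sketch of such a program (tracepoint and function names illustrative):

    SEC("tp_btf/sched_switch")
    int handle_switch(__u64 *ctx)
    {
        /* ctx[] carries the exact typed args described by the
         * btf_trace_sched_switch typedef in vmlinux BTF */
        return 0;
    }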
2019-08-20  libbpf: add bpf_btf_get_next_id() to cycle through BTF objects  (Quentin Monnet)
Add an API function taking a BTF object id and providing the id of the next BTF object in the kernel. This can be used to list all BTF objects loaded on the system. v2: - Rebase on top of Andrii's changes regarding libbpf versioning. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-08-20  libbpf: refactor bpf_*_get_next_id() functions  (Quentin Monnet)
In preparation for the introduction of a similar function for retrieving the id of the next BTF object, consolidate the code from bpf_prog_get_next_id() and bpf_map_get_next_id() in libbpf. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-06-18  libbpf: add common min/max macro to libbpf_internal.h  (Andrii Nakryiko)
Multiple files in libbpf redefine their own versions of min/max. Let's define them once in libbpf_internal.h and use those everywhere. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-24  libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attr  (Jiong Wang)
libbpf doesn't allow passing "prog_flags" during bpf program load in a couple of load related APIs, "bpf_load_program_xattr", "load_program" and "bpf_prog_load_xattr". It makes sense to allow passing "prog_flags" which is useful for customizing program loading. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
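A sketch of passing a flag through the new field, e.g. the BPF_F_ANY_ALIGNMENT flag from the 2018-11-30 entry below (insns buffer hypothetical):

    struct bpf_load_program_attr attr = {
        .prog_type  = BPF_PROG_TYPE_SOCKET_FILTER,
        .insns      = insns,
        .insns_cnt  = insns_cnt,
        .license    = "GPL",
        .prog_flags = BPF_F_ANY_ALIGNMENT,
    };
    int prog_fd = bpf_load_program_xattr(&attr, NULL, 0);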
2019-05-05  tools/bpf: fix perf build error with uClibc (seen on ARC)  (Vineet Gupta)
When building perf for ARC recently, there was a build failure due to the lack of __NR_bpf:

  | Auto-detecting system features:
  |
  | ... get_cpuid: [ OFF ]
  | ... bpf: [ on ]
  |
  | # error __NR_bpf not defined. libbpf does not support your arch.
  |   ^~~~~
  | bpf.c: In function 'sys_bpf':
  | bpf.c:66:17: error: '__NR_bpf' undeclared (first use in this function)
  |   return syscall(__NR_bpf, cmd, attr, size);
  |                  ^~~~~~~~
  |                  sys_bpf

Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-04-11  libbpf: add support for ctx_{size, }_{in, out} in BPF_PROG_TEST_RUN  (Stanislav Fomichev)
Support recently introduced input/output context for test runs. We extend only bpf_prog_test_run_xattr. bpf_prog_test_run is unextendable and left as is. Signed-off-by: Stanislav Fomichev <sdf@google.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
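A sketch of the extended attr usage (ctx buffers hypothetical; the ctx type depends on the program type):

    struct bpf_prog_test_run_attr attr = {
        .prog_fd      = prog_fd,
        .ctx_in       = &skb_ctx,
        .ctx_size_in  = sizeof(skb_ctx),
        .ctx_out      = &skb_ctx_out,
        .ctx_size_out = sizeof(skb_ctx_out),
    };
    int err = bpf_prog_test_run_xattr(&attr);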
2019-04-10  bpf, bpftool: fix a few ubsan warnings  (Yonghong Song)
The issue is reported at https://github.com/libbpf/libbpf/issues/28. Basically, per the C standard, for void *memcpy(void *dest, const void *src, size_t n), if "dest" or "src" is NULL, regardless of whether "n" is 0 or not, the result of memcpy is undefined. clang ubsan reported three such instances in bpf.c with the following pattern: memcpy(dest, 0, 0). Although in practice no known compiler will cause issues when the copy size is 0, let us still fix the issue to silence the ubsan warnings. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
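The fix boils down to guarding such copies (a sketch of the pattern, not the exact diff):

    if (src) /* avoid memcpy(dest, NULL, 0), which is UB per the C standard */
        memcpy(dest, src, size);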
2019-04-09  bpf, libbpf: support global data/bss/rodata sections  (Daniel Borkmann)
This work adds BPF loader support for global data sections to libbpf. This allows writing BPF programs in a more natural C-like way by being able to define global variables and const data.

Back at LPC 2018 [0] we presented a first prototype which implemented support for global data sections by extending the BPF syscall, where union bpf_attr would get an additional memory/size pair for each section passed during prog load, in order to later add this base address into the ldimm64 instruction along with the user-provided offset when accessing a variable. Consensus from LPC was that for proper upstream support, it would be more desirable to use maps instead of a bpf_attr extension, as this would allow for introspection of these sections as well as potential live updates of their content. This work follows this path by taking the following steps from the loader side:

1) In the bpf_object__elf_collect() step we pick up ".data", ".rodata", and ".bss" section information.

2) If present, in bpf_object__init_internal_map() we add maps to the obj's map array that correspond to each of the present sections. Given that section size and access properties can differ, a single-entry array map is created with a value size corresponding to the ELF section size of .data, .bss or .rodata. These internal maps are integrated into the normal map handling of libbpf such that when the user traverses all obj maps, they can be differentiated from user-created ones via bpf_map__is_internal(). In later steps when we actually create these maps in the kernel via bpf_object__create_maps(), then for the .data and .rodata sections their content is copied into the map through bpf_map_update_elem(). For .bss this is not necessary since the array map is already zero-initialized by default. Additionally, for .rodata the map is frozen as read-only after setup, such that neither program-side nor syscall-side writes would be possible.

3) In the bpf_program__collect_reloc() step, we record the corresponding map, insn index, and relocation type for the global data.

4) And last but not least, in the actual relocation step in bpf_program__relocate(), we mark the ldimm64 instruction with src_reg = BPF_PSEUDO_MAP_VALUE, where in the first imm field the map's file descriptor is stored, similarly to BPF_PSEUDO_MAP_FD, and in the second imm field (as ldimm64 is 2-insn wide) we store the access offset into the section. Given these maps have only a single element, ldimm64's off remains zero in both parts.

5) On the kernel side, this specially marked BPF_PSEUDO_MAP_VALUE load will then store the actual target address in order to have 'map-lookup'-free access. That is, the actual map value base address + offset. The destination register in the verifier will then be marked as PTR_TO_MAP_VALUE, containing the fixed offset as reg->off and the backing BPF map as reg->map_ptr. Meaning, it's treated as any other normal map value from the verification side, only with efficient, direct value access instead of an actual call to the map lookup helper as in the typical case.

Currently, only support for static global variables has been added, and libbpf rejects non-static global variables from loading. This restriction can be lifted once we have proper semantics for how BPF will treat multi-object BPF loads. From the BTF side, libbpf will set the value type id of the types corresponding to the ".bss", ".data" and ".rodata" names which LLVM will emit without the object name prefix. The key type will be left as zero, thus making use of the key-less BTF option in array maps.
Simple example dump of a program using global vars in each section:

  # bpftool prog
  [...]
  6784: sched_cls name load_static_dat tag a7e1291567277844 gpl
        loaded_at 2019-03-11T15:39:34+0000 uid 0
        xlated 1776B jited 993B memlock 4096B map_ids 2238,2237,2235,2236,2239,2240
  # bpftool map show id 2237
  2237: array name test_glo.bss flags 0x0
        key 4B value 64B max_entries 1 memlock 4096B
  # bpftool map show id 2235
  2235: array name test_glo.data flags 0x0
        key 4B value 64B max_entries 1 memlock 4096B
  # bpftool map show id 2236
  2236: array name test_glo.rodata flags 0x80
        key 4B value 96B max_entries 1 memlock 4096B

  # bpftool prog dump xlated id 6784
  int load_static_data(struct __sk_buff * skb):
  ; int load_static_data(struct __sk_buff *skb)
     0: (b7) r6 = 0
  ; test_reloc(number, 0, &num0);
     1: (63) *(u32 *)(r10 -4) = r6
     2: (bf) r2 = r10
  ; int load_static_data(struct __sk_buff *skb)
     3: (07) r2 += -4
  ; test_reloc(number, 0, &num0);
     4: (18) r1 = map[id:2238]
     6: (18) r3 = map[id:2237][0]+0    <-- direct addr in .bss area
     8: (b7) r4 = 0
     9: (85) call array_map_update_elem#100464
    10: (b7) r1 = 1
  ; test_reloc(number, 1, &num1);
  [...]
  ; test_reloc(string, 2, str2);
   120: (18) r8 = map[id:2237][0]+16   <-- same here at offset +16
   122: (18) r1 = map[id:2239]
   124: (18) r3 = map[id:2237][0]+16
   126: (b7) r4 = 0
   127: (85) call array_map_update_elem#100464
   128: (b7) r1 = 120
  ; str1[5] = 'x';
   129: (73) *(u8 *)(r9 +5) = r1
  ; test_reloc(string, 3, str1);
   130: (b7) r1 = 3
   131: (63) *(u32 *)(r10 -4) = r1
   132: (b7) r9 = 3
   133: (bf) r2 = r10
  ; int load_static_data(struct __sk_buff *skb)
   134: (07) r2 += -4
  ; test_reloc(string, 3, str1);
   135: (18) r1 = map[id:2239]
   137: (18) r3 = map[id:2235][0]+16   <-- direct addr in .data area
   139: (b7) r4 = 0
   140: (85) call array_map_update_elem#100464
   141: (b7) r1 = 111
  ; __builtin_memcpy(&str2[2], "hello", sizeof("hello"));
   142: (73) *(u8 *)(r8 +6) = r1       <-- further access based on .bss data
   143: (b7) r1 = 108
   144: (73) *(u8 *)(r8 +5) = r1
  [...]

For the Cilium use-case in particular, this enables migrating configuration constants from the Cilium daemon's generated header defines into global data sections such that expensive runtime recompilations with LLVM can be avoided altogether. Instead, the ELF file becomes effectively a "template", meaning it is compiled only once (!) and the Cilium daemon will then rewrite relevant configuration data from the ELF's .data or .rodata sections directly instead of recompiling the program. The updated ELF is then loaded into the kernel and atomically replaces the existing program in the networking datapath. More info in [0]. Based upon a recent fix in LLVM, commit c0db6b6bd444 ("[BPF] Don't fail for static variables"). [0] LPC 2018, BPF track, "ELF relocation for static data in BPF", http://vger.kernel.org/lpc-bpf2018.html#session-3 Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
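For reference, globals of this kind are plain C definitions in the BPF object; a minimal sketch (not the exact selftest source) might be:

    static __u32 num0;                    /* placed in .bss    */
    static __u32 num1 = 42;               /* placed in .data   */
    static const char banner[] = "hello"; /* placed in .rodata */

    SEC("classifier")
    int load_static_data(struct __sk_buff *skb)
    {
        return num0 + num1 + banner[0];
    }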
2019-04-04  libbpf: teach libbpf about log_level bit 2  (Alexei Starovoitov)
Allow bpf_prog_load_xattr() to specify log_level for program loading. Teach libbpf to accept log_level with bit 2 set. Increase the default BPF_LOG_BUF_SIZE from 256k to 16M; there is no downside to increasing it to the maximum allowed by old kernels. The existing 256k limit caused ENOSPC errors, and users were not able to see the verifier error, which is printed at the end of the verifier log. If ENOSPC is hit, double the verifier log buffer and try again to capture the verifier error. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
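The retry, roughly (a sketch of the internal logic, not the verbatim code):

    size_t log_buf_size = BPF_LOG_BUF_SIZE;
    char *log_buf = malloc(log_buf_size);
    int fd = bpf_load_program_xattr(&attr, log_buf, log_buf_size);
    while (fd < 0 && errno == ENOSPC) {
        log_buf_size *= 2; /* double until the verifier output fits */
        log_buf = realloc(log_buf, log_buf_size);
        fd = bpf_load_program_xattr(&attr, log_buf, log_buf_size);
    }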
2019-02-14  tools/bpf: replace bzero with memset  (Andrii Nakryiko)
bzero() call is deprecated and superseded by memset(). Signed-off-by: Andrii Nakryiko <andriin@fb.com> Reported-by: David Laight <david.laight@aculab.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-07  tools/bpf: add log_level to bpf_load_program_attr  (Yonghong Song)
The kernel verifier has three levels of logs:
  0: no logs
  1: logs mostly useful
  > 1: verbose
The current libbpf API functions bpf_load_program_xattr() and bpf_load_program() cannot specify log_level. bcc, however, provides an interface for the user to specify log_level 2 for verbose output. This patch adds log_level to the structure bpf_load_program_attr, so users, including bcc, can use bpf_load_program_xattr() to change the log_level. The supported log_levels are 0, 1, and 2. The bpf selftest test_sock.c is modified to enable log_level = 2. If "verbose" in test_sock.c is changed to true, the test will output logs like below:

  $ ./test_sock
  func#0 @0
  0: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  0: (bf) r6 = r1
  1: R1=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  1: (61) r7 = *(u32 *)(r6 +28)
  invalid bpf_context access off=28 size=4
  Test case: bind4 load with invalid access: src_ip6 .. [PASS]
  ...
  Test case: bind6 allow all .. [PASS]
  Summary: 16 PASSED, 0 FAILED

Some test_sock tests are negative tests, and the verbose verifier log will be printed out as shown above. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-07  tools/bpf: add missing strings.h include  (Andrii Nakryiko)
A few files in libbpf use the bzero() function (defined in the strings.h header) but don't include the corresponding header. When libbpf is added as a dependency to pahole, this non-deterministically causes warnings on some machines:

  bpf.c:225:2: warning: implicit declaration of function 'bzero' [-Wimplicit-function-declaration]
    bzero(&attr, sizeof(attr));
    ^~~~~

Signed-off-by: Andrii Nakryiko <andriin@fb.com> Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-01  libbpf: introduce bpf_map_lookup_elem_flags()  (Alexei Starovoitov)
Introduce the int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags) helper to look up array/hash/cgroup_local_storage elements with the BPF_F_LOCK flag. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
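Usage follows directly from the signature above (key and map_fd hypothetical):

    __u64 value;
    int err = bpf_map_lookup_elem_flags(map_fd, &key, &value, BPF_F_LOCK);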
2019-01-15  bpf: libbpf: retry loading program on EAGAIN  (Lorenz Bauer)
Commit c3494801cd17 ("bpf: check pending signals while verifying programs") makes it possible for the BPF_PROG_LOAD to fail with EAGAIN. Retry unconditionally in this case. Fixes: c3494801cd17 ("bpf: check pending signals while verifying programs") Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
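The retry, in essence (a sketch; sys_bpf() is libbpf's internal syscall wrapper):

    do {
        fd = sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
    } while (fd < 0 && errno == EAGAIN);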
2018-12-09  bpf: libbpf: Add btf_line_info support to libbpf  (Martin KaFai Lau)
This patch adds bpf_line_info support to libbpf: 1) Parsing the line_info sec from ".BTF.ext" 2) Relocating the line_info. If the main prog *_info relocation fails, it will ignore the remaining subprog line_info and continue. If the subprog *_info relocation fails, it will bail out. 3) BPF_PROG_LOAD a prog with line_info Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-09  bpf: libbpf: Refactor and bug fix on the bpf_func_info loading logic  (Martin KaFai Lau)
This patch refactors and fixes a bug in libbpf's bpf_func_info loading logic. The bug fix and refactoring both target commit 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections") which is in the bpf-next branch.
1) In bpf_load_program_xattr(), it should retry when errno == E2BIG regardless of log_buf and log_buf_sz. This patch fixes it.
2) btf_ext__reloc_init() and btf_ext__reloc() are essentially the same except btf_ext__reloc_init() always has insns_cnt == 0. Hence, btf_ext__reloc_init() is removed. btf_ext__reloc() is also renamed to btf_ext__reloc_func_info() to get ready for the line_info support in the next patch.
3) Consolidate the func_info section logic from "btf_ext_parse_hdr()", "btf_ext_validate_func_info()" and "btf_ext__new()" into a new function "btf_ext_copy_func_info()" such that similar logic can be reused by the later libbpf line_info patch.
4) The next line_info patch will store line_info_cnt instead of line_info_len in the bpf_program because the kernel takes line_info_cnt as well. It will save a few "len" to "cnt" conversions and will also save some function args. Hence, this patch also makes bpf_program store func_info_cnt instead of func_info_len.
5) btf_ext depends on btf. e.g. the func_info's type_id in ".BTF.ext" is not useful when ".BTF" is absent. This patch only initializes the obj->btf_ext pointer after it has successfully initialized the obj->btf pointer. This avoids always checking "obj->btf && obj->btf_ext" together for accessing ".BTF.ext"; checking "obj->btf_ext" alone will do.
6) Move "struct btf_sec_func_info" from btf.h to btf.c. There is no external usage outside btf.c.
Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections") Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-04  libbpf: add bpf_prog_test_run_xattr  (Lorenz Bauer)
Add a new function, which encourages safe usage of the test interface. bpf_prog_test_run continues to work as before, but should be considered unsafe. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
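A sketch of the safer interface (buffers hypothetical; the explicit output capacity is what plain bpf_prog_test_run() lacks):

    struct bpf_prog_test_run_attr attr = {
        .prog_fd       = prog_fd,
        .data_in       = pkt,
        .data_size_in  = sizeof(pkt),
        .data_out      = out,
        .data_size_out = sizeof(out), /* capacity, checked against the
                                       * size the kernel reports back */
    };
    int err = bpf_prog_test_run_xattr(&attr);
    /* on success, attr.retval, attr.duration and attr.data_size_out are set */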
2018-11-30  bpf: Add BPF_F_ANY_ALIGNMENT.  (David Miller)
Often we want to write test cases that check things like bad context offset accesses. And one way to do this is to use an odd offset on, for example, a 32-bit load. This unfortunately triggers the alignment checks first on platforms that do not set CONFIG_EFFICIENT_UNALIGNED_ACCESS. So the test case sees the alignment failure rather than what it was testing for. It is often not completely possible to respect the original intention of the test, or even test the same exact thing, while solving the alignment issue. Another option could have been to check the alignment after the context and other validations are performed by the verifier, but that is a non-trivial change to the verifier. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-21  bpf: libbpf: remove map name retry from bpf_create_map_xattr  (Stanislav Fomichev)
Instead, check for a newly created caps.name bpf_object capability. If kernel doesn't support names, don't specify the attribute. See commit 23499442c319 ("bpf: libbpf: retry map creation without the name") for rationale. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-11-20  tools/bpf: add support to read .BTF.ext sections  (Yonghong Song)
The .BTF section is already available to encode types. These types can be used for map pretty-printing. The whole .BTF will be passed to the kernel as well, which the kernel can verify and return to user space for pretty-printing etc. The llvm patch at https://reviews.llvm.org/D53736 will generate the .BTF section and one more section, .BTF.ext. The .BTF.ext section encodes function type information and line information. Note that this patch set only supports function type info. The functionality is implemented in libbpf. The .BTF section can be directly loaded into the kernel, but the .BTF.ext section cannot. The loader may need to do some relocation and merging, similar to merging multiple code sections, before loading into the kernel. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-20  tools/bpf: add new fields for program load in lib/bpf  (Yonghong Song)
The new fields are added for program load in lib/bpf so an application using the api bpf_load_program_xattr() is able to load a program with btf and func_info data. This functionality will be used in the next patch by the bpf selftest test_btf. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-11-20  bpf: libbpf: retry map creation without the name  (Stanislav Fomichev)
Since commit 88cda1c9da02 ("bpf: libbpf: Provide basic API support to specify BPF obj name"), libbpf unconditionally sets bpf_attr->name for maps. Pre v4.14 kernels don't know about map names and return an error about unexpected non-zero data. Retry sys_bpf without a map name to cover older kernels. v2 changes: * check for errno == EINVAL as suggested by Daniel Borkmann Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
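The retry, approximately (a sketch of the libbpf-internal logic; attr is the union bpf_attr being passed to the syscall):

    fd = sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
    if (fd < 0 && errno == EINVAL && attr.map_name[0]) {
        /* pre-v4.14 kernel: retry the same request without a map name */
        memset(attr.map_name, 0, sizeof(attr.map_name));
        fd = sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
    }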
2018-10-19  selftests/bpf: add test cases for queue and stack maps  (Mauricio Vasquez B)
test_maps: Tests that queue/stack maps are behaving correctly even in corner cases
test_progs: Tests new ebpf helpers
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-10-08  libbpf: relicense libbpf as LGPL-2.1 OR BSD-2-Clause  (Alexei Starovoitov)
libbpf is maturing as a library and gaining features that no other bpf libraries support (BPF Type Format, bpf to bpf calls, etc). Many Apache2-licensed projects (like bcc, bpftrace, gobpf, cilium, etc) would like to use libbpf, but cannot do so yet, since the Apache Foundation explicitly states that LGPL is incompatible with Apache2. Hence let's relicense libbpf as dual license LGPL-2.1 or BSD-2-Clause, since BSD-2 is compatible with Apache2. Dual LGPL or Apache2 is an invalid combination. Fix a license mistake in the Makefile as well. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrey Ignatov <rdna@fb.com> Acked-by: Arnaldo Carvalho de Melo <acme@kernel.org> Acked-by: Björn Töpel <bjorn.topel@intel.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: David Beckett <david.beckett@netronome.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Joe Stringer <joe@ovn.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Thomas Graf <tgraf@suug.ch> Acked-by: Roman Gushchin <guro@fb.com> Acked-by: Wang Nan <wangnan0@huawei.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-09-06  tools/bpf: move bpf/lib netlink related functions into a new file  (Yonghong Song)
There is no functionality change in this patch. In the subsequent patches, more netlink related library functions will be added, and a separate file is better than cluttering bpf.c. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-08-11  bpf: Test BPF_PROG_TYPE_SK_REUSEPORT  (Martin KaFai Lau)
This patch adds tests for the new BPF_PROG_TYPE_SK_REUSEPORT. The tests cover:
 - IPv4/IPv6 + TCP/UDP
 - TCP syncookie
 - TCP fastopen
 - Cases when bpf_sk_select_reuseport() returns errors
 - Cases when the bpf prog returns SK_DROP
 - Values from sk_reuseport_md
 - outer_map => reuseport_array
The test depends on commit 3eee1f75f2b9 ("bpf: fix bpf_skb_load_bytes_relative pkt length check") Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-05-24  tools/bpf: sync kernel header bpf.h and add bpf_task_fd_query in libbpf  (Yonghong Song)
Sync kernel header bpf.h to tools/include/uapi/linux/bpf.h and implement bpf_task_fd_query() in libbpf. The test programs in samples/bpf and tools/testing/selftests/bpf, and later bpftool, will use this libbpf function to query the kernel. Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
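A query sketch (pid and target_fd hypothetical; argument order assumed to follow the libbpf prototype):

    char buf[256];
    __u32 buf_len = sizeof(buf), prog_id, fd_type;
    __u64 probe_offset, probe_addr;
    int err = bpf_task_fd_query(pid, target_fd, 0 /* flags */, buf, &buf_len,
                                &prog_id, &fd_type, &probe_offset, &probe_addr);
    /* buf receives e.g. the kprobe/tracepoint name, fd_type its kind */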
2018-05-23  bpf: btf: Add tests for the btf uapi changes  (Martin KaFai Lau)
This patch does the following:
 1. Modify libbpf and test_btf to reflect the uapi changes in btf
 2. Add test for the btf_header changes
 3. Add tests for array->index_type
 4. Add err_str check to the tests
 5. Fix a 4-byte hole in "struct test #1" by swapping "m" and "n"
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-05-17  libbpf: add ifindex to enable offload support  (David Beckett)
BPF programs currently can only be offloaded using iproute2. This patch will allow programs to be offloaded using libbpf calls. Signed-off-by: David Beckett <david.beckett@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-05-09  bpf: btf: Tests for BPF_OBJ_GET_INFO_BY_FD and BPF_BTF_GET_FD_BY_ID  (Martin KaFai Lau)
This patch adds test for BPF_BTF_GET_FD_BY_ID and the new btf_id/btf_key_id/btf_value_id in the "struct bpf_map_info". It also modifies the existing BPF_OBJ_GET_INFO_BY_FD test to reflect the new "struct bpf_btf_info". Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Alexei Starovoitov <ast@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-04-19  bpf: btf: Add BTF support to libbpf  (Martin KaFai Lau)
If the ".BTF" elf section exists, libbpf will try to create a btf_fd (through BPF_BTF_LOAD). If that fails, it will still continue loading the bpf prog/map without the BTF. If the bpf_object has a BTF loaded, it will create a map with the btf_fd. libbpf will try to figure out the btf_key_id and btf_value_id of a map by finding the BTF type with name "<map_name>_key" and "<map_name>_value". If they cannot be found, it will continue without using the BTF. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Alexei Starovoitov <ast@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-03-31  libbpf: Support expected_attach_type at prog load  (Andrey Ignatov)
Support setting `expected_attach_type` at prog load time in both `bpf/bpf.h` and `bpf/libbpf.h`. Since both headers already have APIs to load programs, new functions are added so as not to break backward compatibility for the existing ones:
 * `bpf_load_program_xattr()` is added to `bpf/bpf.h`;
 * `bpf_prog_load_xattr()` is added to `bpf/libbpf.h`.
Both new functions accept structures, `struct bpf_load_program_attr` and `struct bpf_prog_load_attr` respectively, where new fields can be added in the future w/o changing the API. The standard `_xattr` suffix is used to name the new API functions. Since `bpf_load_program_name()` is not used as heavily as `bpf_load_program()`, it was removed in favor of the more generic `bpf_load_program_xattr()`. Signed-off-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
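A sketch of the new load-time attribute (insns buffer hypothetical):

    struct bpf_load_program_attr attr = {
        .prog_type            = BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
        .expected_attach_type = BPF_CGROUP_INET4_BIND,
        .insns                = insns,
        .insns_cnt            = insns_cnt,
        .license              = "GPL",
    };
    int prog_fd = bpf_load_program_xattr(&attr, NULL, 0);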