path: root/tools/lib
Age | Commit message | Author
2019-12-04  selftests/bpf: Bring back c++ include/link test  (Stanislav Fomichev)

Commit 5c26f9a78358 ("libbpf: Don't use cxx to test_libpf target") converted the existing c++ test to c. We still want to include and link against libbpf from c++ code, so reinstate this test, this time in the form of a selftest with a clear comment about its purpose.

v2:
* -lelf -> $(LDLIBS) (Andrii Nakryiko)

Fixes: 5c26f9a78358 ("libbpf: Don't use cxx to test_libpf target")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191202215931.248178-1-sdf@google.com

2019-12-02  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)

Daniel Borkmann says:

====================
pull-request: bpf 2019-12-02

The following pull-request contains BPF updates for your *net* tree.

We've added 10 non-merge commits during the last 6 day(s) which contain a total of 10 files changed, 60 insertions(+), 51 deletions(-).

The main changes are:

1) Fix vmlinux BTF generation for binutils pre v2.25, from Stanislav Fomichev.

2) Fix libbpf global variable relocation to take symbol's st_value offset into account, from Andrii Nakryiko.

3) Fix libbpf build on powerpc where check_abi target fails due to different readelf output format, from Aurelien Jarno.

4) Don't set BPF insns RO for the case when they are JITed in order to avoid fragmenting the direct map, from Daniel Borkmann.

5) Fix static checker warning in btf_distill_func_proto() as well as a build error due to empty enum when BPF is compiled out, from Alexei Starovoitov.

6) Fix up generation of bpf_helper_defs.h for perf, from Arnaldo Carvalho de Melo.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

2019-12-02  libbpf: Fix readelf output parsing on powerpc with recent binutils  (Aurelien Jarno)

On powerpc with recent versions of binutils, readelf outputs an extra field when dumping the symbols of an object file. For example:

  35: 0000000000000838    96 FUNC    LOCAL  DEFAULT [<localentry>: 8]     1 btf_is_struct

The extra "[<localentry>: 8]" prevents the GLOBAL_SYM_COUNT variable from being computed correctly and causes the check_abi target to fail. Fix that by looking for the symbol name in the last field instead of the 8th one. This way it should also cope with future extra fields.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/bpf/20191201195728.4161537-1-aurelien@aurel32.net

2019-11-27  libbpf: Fix sym->st_value print on 32-bit arches  (Alexei Starovoitov)

The st_value field is a 64-bit value and causes this error on 32-bit arches:

  In file included from libbpf.c:52:
  libbpf.c: In function 'bpf_program__record_reloc':
  libbpf_internal.h:59:22: error: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'Elf64_Addr' {aka 'const long long unsigned int'} [-Werror=format=]

Fix it with a (__u64) cast.

Fixes: 1f8e2bcb2cd5 ("libbpf: Refactor relocation handling")
Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

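A minimal sketch of the idea behind the fix (illustrative code, not the libbpf diff itself): Elf64_Addr is 64 bits even on 32-bit hosts, so cast it to a fixed 64-bit type and use a matching format specifier.

  #include <stdio.h>
  #include <elf.h>

  /* %lu would trigger -Werror=format on 32-bit arches; casting to
   * unsigned long long (or __u64) with %llu compiles cleanly everywhere. */
  static void print_sym_value(const Elf64_Sym *sym)
  {
          printf("sym value: %llu\n", (unsigned long long)sym->st_value);
  }
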
2019-11-27  libbpf: Fix up generation of bpf_helper_defs.h  (Arnaldo Carvalho de Melo)

Running:

  $ make -C tools/perf build-test

ends up with these two problems:

  make[3]: *** No rule to make target '/tmp/tmp.zq13cHILGB/perf-5.3.0/include/uapi/linux/bpf.h', needed by 'bpf_helper_defs.h'.  Stop.
  make[3]: *** Waiting for unfinished jobs....
  make[2]: *** [Makefile.perf:757: /tmp/tmp.zq13cHILGB/perf-5.3.0/tools/lib/bpf/libbpf.a] Error 2
  make[2]: *** Waiting for unfinished jobs....

This happens because $(srcdir) points to the /tmp/tmp.zq13cHILGB/perf-5.3.0 directory and we need '/tools/' after that variable. After fixing this we get to another problem:

  /bin/sh: /home/acme/git/perf/tools/scripts/bpf_helpers_doc.py: No such file or directory
  make[3]: *** [Makefile:184: bpf_helper_defs.h] Error 127
  make[3]: *** Deleting file 'bpf_helper_defs.h'
    LD       /tmp/build/perf/libapi-in.o
  make[2]: *** [Makefile.perf:778: /tmp/build/perf/libbpf.a] Error 2
  make[2]: *** Waiting for unfinished jobs....

This requires something outside the tools/ directories that gets collected into perf's detached tarballs; to fix it just add it to tools/perf/MANIFEST, which this patch does. Now it works for that case and also for all these other cases.

Fixes: e01a75c15969 ("libbpf: Move bpf_{helpers, helper_defs, endian, tracing}.h into libbpf")
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andriin@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lkml.kernel.org/n/tip-4pnkg2vmdvq5u6eivc887wen@git.kernel.org
Link: https://lore.kernel.org/bpf/20191126151045.GB19483@kernel.org

2019-11-27  libbpf: Fix global variable relocation  (Andrii Nakryiko)

Similarly to a0d7da26ce86 ("libbpf: Fix call relocation offset calculation bug"), relocations against global variables need to take into account the referenced symbol's st_value, which holds the offset into a corresponding data section (and, subsequently, the offset into the internal backing map). For static variables this offset is always zero and the data offset is completely described by the respective instruction's imm field.

Convert a bunch of selftests to global variables. Previously they were relying on the `static volatile` trick to ensure Clang doesn't inline static variables, which is not necessary anymore with global variables.

Fixes: 393cdfbee809 ("libbpf: Support initialized global variables")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191127200651.1381348-1-andriin@fb.com

2019-11-27  libbpf: Fix Makefile's libbpf symbol mismatch diagnostic  (Andrii Nakryiko)

Fix the Makefile's diagnostic diff output when there is a LIBBPF_API-versioned symbol mismatch.

Fixes: 1bd63524593b ("libbpf: handle symbol versioning properly for libbpf.a")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191127200134.1360660-1-andriin@fb.com

2019-11-26  Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull locking updates from Ingo Molnar:
 "The main changes in this cycle were:

  - A comprehensive rewrite of the robust/PI futex code's exit handling to fix various exit races. (Thomas Gleixner et al)

  - Rework the generic REFCOUNT_FULL implementation using atomic_fetch_* operations so that the performance impact of the cmpxchg() loops is mitigated for common refcount operations.

    With these performance improvements the generic implementation of refcount_t should be good enough for everybody - and this got confirmed by performance testing, so remove ARCH_HAS_REFCOUNT and REFCOUNT_FULL entirely, leaving the generic implementation enabled unconditionally. (Will Deacon)

  - Other misc changes, fixes, cleanups"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
  lkdtm: Remove references to CONFIG_REFCOUNT_FULL
  locking/refcount: Remove unused 'refcount_error_report()' function
  locking/refcount: Consolidate implementations of refcount_t
  locking/refcount: Consolidate REFCOUNT_{MAX,SATURATED} definitions
  locking/refcount: Move saturation warnings out of line
  locking/refcount: Improve performance of generic REFCOUNT_FULL code
  locking/refcount: Move the bulk of the REFCOUNT_FULL implementation into the <linux/refcount.h> header
  locking/refcount: Remove unused refcount_*_checked() variants
  locking/refcount: Ensure integer operands are treated as signed
  locking/refcount: Define constants for saturation and max refcount values
  futex: Prevent exit livelock
  futex: Provide distinct return value when owner is exiting
  futex: Add mutex around futex exit
  futex: Provide state handling for exec() as well
  futex: Sanitize exit state handling
  futex: Mark the begin of futex exit explicitly
  futex: Set task::futex_state to DEAD right after handling futex exit
  futex: Split futex_mm_release() for exit/exec
  exit/exec: Seperate mm_release()
  futex: Replace PF_EXITPIDONE with a state
  ...

2019-11-26  Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull perf updates from Ingo Molnar:
 "The main kernel side changes in this cycle were:

  - Various Intel-PT updates and optimizations (Alexander Shishkin)

  - Prohibit kprobes on Xen/KVM emulate prefixes (Masami Hiramatsu)

  - Add support for LSM and SELinux checks to control access to the perf syscall (Joel Fernandes)

  - Misc other changes, optimizations, fixes and cleanups - see the shortlog for details.

  There were numerous tooling changes as well - 254 non-merge commits. Here are the main changes - too many to list in detail:

  - Enhancements to core tooling infrastructure, perf.data, libperf, libtraceevent, event parsing, vendor events, Intel PT, callchains, BPF support and instruction decoding.

  - There were updates to the following tools:

      perf annotate
      perf diff
      perf inject
      perf kvm
      perf list
      perf maps
      perf parse
      perf probe
      perf record
      perf report
      perf script
      perf stat
      perf test
      perf trace

  - And a lot of other changes: please see the shortlog and Git log for more details"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (279 commits)
  perf parse: Fix potential memory leak when handling tracepoint errors
  perf probe: Fix spelling mistake "addrees" -> "address"
  libtraceevent: Fix memory leakage in copy_filter_type
  libtraceevent: Fix header installation
  perf intel-bts: Does not support AUX area sampling
  perf intel-pt: Add support for decoding AUX area samples
  perf intel-pt: Add support for recording AUX area samples
  perf pmu: When using default config, record which bits of config were changed by the user
  perf auxtrace: Add support for queuing AUX area samples
  perf session: Add facility to peek at all events
  perf auxtrace: Add support for dumping AUX area samples
  perf inject: Cut AUX area samples
  perf record: Add aux-sample-size config term
  perf record: Add support for AUX area sampling
  perf auxtrace: Add support for AUX area sample recording
  perf auxtrace: Move perf_evsel__find_pmu()
  perf record: Add a function to test for kernel support for AUX area sampling
  perf tools: Add kernel AUX area sampling definitions
  perf/core: Make the mlock accounting simple again
  perf report: Jump to symbol source view from total cycles view
  ...

2019-11-25  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next  (Linus Torvalds)

Pull networking updates from David Miller:
 "Another merge window, another pull full of stuff:

   1) Support alternative names for network devices, from Jiri Pirko.

   2) Introduce per-netns netdev notifiers, also from Jiri Pirko.

   3) Support MSG_PEEK in vsock/virtio, from Matias Ezequiel Vara Larsen.

   4) Allow compiling out the TLS TOE code, from Jakub Kicinski.

   5) Add several new tracepoints to the kTLS code, also from Jakub.

   6) Support set channels ethtool callback in ena driver, from Sameeh Jubran.

   7) New SCTP events SCTP_ADDR_ADDED, SCTP_ADDR_REMOVED, SCTP_ADDR_MADE_PRIM, and SCTP_SEND_FAILED_EVENT. From Xin Long.

   8) Add XDP support to mvneta driver, from Lorenzo Bianconi.

   9) Lots of netfilter hw offload fixes, cleanups and enhancements, from Pablo Neira Ayuso.

  10) PTP support for aquantia chips, from Egor Pomozov.

  11) Add UDP segmentation offload support to igb, ixgbe, and i40e. From Josh Hunt.

  12) Add smart nagle to tipc, from Jon Maloy.

  13) Support L2 field rewrite by TC offloads in bnxt_en, from Venkat Duvvuru.

  14) Add a flow mask cache to OVS, from Tonghao Zhang.

  15) Add XDP support to ice driver, from Maciej Fijalkowski.

  16) Add AF_XDP support to ice driver, from Krzysztof Kazimierczak.

  17) Support UDP GSO offload in atlantic driver, from Igor Russkikh.

  18) Support it in stmmac driver too, from Jose Abreu.

  19) Support TIPC encryption and auth, from Tuong Lien.

  20) Introduce BPF trampolines, from Alexei Starovoitov.

  21) Make page_pool API more numa friendly, from Saeed Mahameed.

  22) Introduce route hints to ipv4 and ipv6, from Paolo Abeni.

  23) Add UDP segmentation offload to cxgb4, Rahul Lakkireddy"

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1857 commits)
  libbpf: Fix usage of u32 in userspace code
  mm: Implement no-MMU variant of vmalloc_user_node_flags
  slip: Fix use-after-free Read in slip_open
  net: dsa: sja1105: fix sja1105_parse_rgmii_delays()
  macvlan: schedule bc_work even if error
  enetc: add support Credit Based Shaper(CBS) for hardware offload
  net: phy: add helpers phy_(un)lock_mdio_bus
  mdio_bus: don't use managed reset-controller
  ax88179_178a: add ethtool_op_get_ts_info()
  mlxsw: spectrum_router: Fix use of uninitialized adjacency index
  mlxsw: spectrum_router: After underlay moves, demote conflicting tunnels
  bpf: Simplify __bpf_arch_text_poke poke type handling
  bpf: Introduce BPF_TRACE_x helper for the tracing tests
  bpf: Add bpf_jit_blinding_enabled for !CONFIG_BPF_JIT
  bpf, testing: Add various tail call test cases
  bpf, x86: Emit patchable direct jump as tail call
  bpf: Constant map key tracking for prog array pokes
  bpf: Add poke dependency tracking for prog array maps
  bpf: Add initial poke descriptor table for jit images
  bpf: Move owner type, jited info into array auxiliary data
  ...

2019-11-25  Merge tag 'printk-for-5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/pmladek/printk  (Linus Torvalds)

Pull printk updates from Petr Mladek:

 - Allow to print symbolic error names via new %pe modifier.

 - Use pr_warn() instead of the remaining pr_warning() calls. Fix formatting of the related lines.

 - Add VSPRINTF entry to MAINTAINERS.

* tag 'printk-for-5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/pmladek/printk: (32 commits)
  checkpatch: don't warn about new vsprintf pointer extension '%pe'
  MAINTAINERS: Add VSPRINTF
  tools lib api: Renaming pr_warning to pr_warn
  ASoC: samsung: Use pr_warn instead of pr_warning
  lib: cpu_rmap: Use pr_warn instead of pr_warning
  trace: Use pr_warn instead of pr_warning
  dma-debug: Use pr_warn instead of pr_warning
  vgacon: Use pr_warn instead of pr_warning
  fs: afs: Use pr_warn instead of pr_warning
  sh/intc: Use pr_warn instead of pr_warning
  scsi: Use pr_warn instead of pr_warning
  platform/x86: intel_oaktrail: Use pr_warn instead of pr_warning
  platform/x86: asus-laptop: Use pr_warn instead of pr_warning
  platform/x86: eeepc-laptop: Use pr_warn instead of pr_warning
  oprofile: Use pr_warn instead of pr_warning
  of: Use pr_warn instead of pr_warning
  macintosh: Use pr_warn instead of pr_warning
  idsn: Use pr_warn instead of pr_warning
  ide: Use pr_warn instead of pr_warning
  crypto: n2: Use pr_warn instead of pr_warning
  ...

2019-11-25  libbpf: Fix usage of u32 in userspace code  (Andrii Nakryiko)

u32 is not defined for libbpf when compiled outside of kernel sources (e.g., in the Github projection). Use __u32 instead.

Fixes: b8c54ea455dc ("libbpf: Add support to attach to fentry/fexit tracing progs")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191125212948.1163343-1-andriin@fb.com

2019-11-24  bpf: Introduce BPF_TRACE_x helper for the tracing tests  (Martin KaFai Lau)

For BPF_PROG_TYPE_TRACING, the bpf_prog's ctx is an array of u64. This patch borrows the idea from BPF_CALL_x in filter.h to convert a u64 to the arg type of the traced function.

The new BPF_TRACE_x has an arg to specify the return type of a bpf_prog. It will be used in the future TCP-ops bpf_prog that may return "void".

The new macros are defined in the new header file "bpf_trace_helpers.h". It is under selftests/bpf/ for now. It could be moved to libbpf later after seeing more upcoming non-tracing use cases.

The tests are changed to use these new macros also. Hence, the k[s]u8/16/32/64 are no longer needed and they are removed from bpf_helpers.h.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191123202504.1502696-1-kafai@fb.com

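A hedged sketch of the idea the macro automates (the fentry target, argument types and include paths are assumptions, not the actual bpf_trace_helpers.h code): a tracing program sees its context as an array of u64, and each element must be cast back to the traced function's parameter type.

  #include <linux/types.h>
  #include <bpf/bpf_helpers.h>

  struct sk_buff;
  struct net_device;

  /* Doing by hand what BPF_TRACE_x generates: pull typed arguments out of
   * the raw u64 context array. */
  SEC("fentry/eth_type_trans")
  int trace_eth_type_trans(__u64 *ctx)
  {
          struct sk_buff *skb = (struct sk_buff *)ctx[0];
          struct net_device *dev = (struct net_device *)ctx[1];

          /* skb and dev can now be passed to helpers or probe-read. */
          (void)skb;
          (void)dev;
          return 0;
  }

  char _license[] SEC("license") = "GPL";
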
2019-11-24  libbpf: Fix bpf_object name determination for bpf_object__open_file()  (Andrii Nakryiko)

If bpf_object__open_file() gets a path like "some/dir/obj.o", it should derive the BPF object's name as "obj" (unless overridden through opts->object_name). Instead, due to using `path` as a fallback value for opts->obj_name, the path is used as-is for the object name, so for the above example the BPF object's name will be verbatim "some/dir/obj", which leads to all sorts of troubles, especially where internal maps are concerned (they use up to 8 characters of the object name). Fix that by ensuring object_name stays NULL, unless overridden.

Fixes: 291ee02b5e40 ("libbpf: Refactor bpf_object__open APIs to use common opts")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191122003527.551556-1-andriin@fb.com

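A hedged sketch of the intended derivation (a hypothetical helper, not the libbpf function): strip the directory part and the ".o" suffix so "some/dir/obj.o" becomes "obj".

  #include <stdio.h>
  #include <string.h>

  /* "some/dir/obj.o" -> "obj": take the basename, drop a trailing ".o". */
  static void derive_object_name(const char *path, char *name, size_t size)
  {
          const char *base = strrchr(path, '/');
          size_t len;

          base = base ? base + 1 : path;
          len = strlen(base);
          if (len > 2 && strcmp(base + len - 2, ".o") == 0)
                  len -= 2;
          snprintf(name, size, "%.*s", (int)len, base);
  }
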
2019-11-24  libbpf: Support initialized global variables  (Andrii Nakryiko)

Initialized global variables are no different in ELF from static variables, and don't require any extra support from libbpf. But they match the semantics of global data (backed by BPF maps) more closely, preventing LLVM/Clang from aggressively inlining constant values and not requiring volatile incantations to prevent that. This patch enables global variables. It still disallows uninitialized variables, which would be put into the special COM (common) ELF section, because BPF doesn't allow uninitialized data to be accessed.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121070743.1309473-5-andriin@fb.com

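A hedged BPF C example of what this enables (section name and include path are assumptions): an initialized global can now replace the `static volatile` workaround and is backed by the object's global-data map.

  #include <linux/types.h>
  #include <bpf/bpf_helpers.h>

  /* Initialized global: lands in .data and is backed by a BPF map, so Clang
   * cannot constant-fold it away. Previously this had to be written as
   * "static volatile __u32 drop_threshold = 100;" to survive optimization. */
  __u32 drop_threshold = 100;

  SEC("tp/syscalls/sys_enter_write")
  int handle_write(void *ctx)
  {
          if (drop_threshold > 0)
                  drop_threshold--;
          return 0;
  }

  char _license[] SEC("license") = "GPL";
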
2019-11-24  libbpf: Fix various errors and warnings reported by checkpatch.pl  (Andrii Nakryiko)

Fix a bunch of warnings and errors reported by checkpatch.pl, to make it easier to spot new problems.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121070743.1309473-4-andriin@fb.com

2019-11-24  libbpf: Refactor relocation handling  (Andrii Nakryiko)

Relocation handling code is convoluted and unnecessarily deeply nested. Split out the per-relocation logic into a separate function. Also refactor the logic to be more a sequence of per-relocation-type checks and processing steps, making it simpler to follow the control flow. This makes it easier to further extend it to new kinds of relocations (e.g., support for extern variables).

This patch also makes relocation section verification more robust. Previously, relocations against not-yet-supported externs were silently ignored because obj->efile.text_shndx was zero when all BPF programs had custom section names and there was no .text section. Also, invalid LDIMM64 relocations against non-map sections were passed through if they were pointing to a .text section (or 0, which is an invalid section). All these bugs are fixed within this refactoring and the checks are made more appropriate for each type of relocation.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191121070743.1309473-3-andriin@fb.com

2019-11-22  libtraceevent: Fix memory leakage in copy_filter_type  (Hewenliang)

It is necessary to free the memory that we have allocated when an error occurs.

Fixes: ef3072cd1d5c ("tools lib traceevent: Get rid of die in add_filter_type()")
Signed-off-by: Hewenliang <hewenliang4@huawei.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Link: http://lore.kernel.org/lkml/20191119014415.57210-1-hewenliang4@huawei.com
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2019-11-22  libtraceevent: Fix header installation  (Sudip Mukherjee)

When we passed some location in DESTDIR, install_headers called do_install with DESTDIR as part of the second argument. But do_install again uses '$(DESTDIR_SQ)$2', so as a result the headers were installed in the location $DESTDIR/$DESTDIR.

In my testing I passed DESTDIR=/home/sudip/test and the headers were installed in: /home/sudip/test/home/sudip/test/usr/include/traceevent.

Let's remove DESTDIR from the second argument of do_install so that the headers are installed in the correct location.

Signed-off-by: Sudipm Mukherjee <sudipm.mukherjee@gmail.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Sudipm Mukherjee <sudipm.mukherjee@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/lkml/20191114133719.309-1-sudipm.mukherjee@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2019-11-19  libbpf: Fix call relocation offset calculation bug  (Andrii Nakryiko)

When relocating a subprogram call, libbpf doesn't take into account relo->text_off, which comes from the symbol's value. This generally works fine for subprograms implemented as static functions, but breaks for global functions.

Taking a simplified test_pkt_access.c as an example:

  __attribute__ ((noinline))
  static int test_pkt_access_subprog1(volatile struct __sk_buff *skb)
  {
          return skb->len * 2;
  }

  __attribute__ ((noinline))
  static int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
  {
          return skb->len + val;
  }

  SEC("classifier/test_pkt_access")
  int test_pkt_access(struct __sk_buff *skb)
  {
          if (test_pkt_access_subprog1(skb) != skb->len * 2)
                  return TC_ACT_SHOT;
          if (test_pkt_access_subprog2(2, skb) != skb->len + 2)
                  return TC_ACT_SHOT;
          return TC_ACT_UNSPEC;
  }

When compiled, we get two relocations, pointing to the '.text' symbol. .text has st_value set to 0 (it points to the beginning of the .text section):

  0000000000000008  000000050000000a R_BPF_64_32  0000000000000000 .text
  0000000000000040  000000050000000a R_BPF_64_32  0000000000000000 .text

test_pkt_access_subprog1 and test_pkt_access_subprog2 offsets (targets of the two calls) are encoded within the call instruction's imm32 part as -1 and 2, respectively:

  0000000000000000 test_pkt_access_subprog1:
         0:  61 10 00 00 00 00 00 00  r0 = *(u32 *)(r1 + 0)
         1:  64 00 00 00 01 00 00 00  w0 <<= 1
         2:  95 00 00 00 00 00 00 00  exit

  0000000000000018 test_pkt_access_subprog2:
         3:  61 10 00 00 00 00 00 00  r0 = *(u32 *)(r1 + 0)
         4:  04 00 00 00 02 00 00 00  w0 += 2
         5:  95 00 00 00 00 00 00 00  exit

  0000000000000000 test_pkt_access:
         0:  bf 16 00 00 00 00 00 00  r6 = r1
  ===>   1:  85 10 00 00 ff ff ff ff  call -1
         2:  bc 01 00 00 00 00 00 00  w1 = w0
         3:  b4 00 00 00 02 00 00 00  w0 = 2
         4:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
         5:  64 02 00 00 01 00 00 00  w2 <<= 1
         6:  5e 21 08 00 00 00 00 00  if w1 != w2 goto +8 <LBB0_3>
         7:  bf 61 00 00 00 00 00 00  r1 = r6
  ===>   8:  85 10 00 00 02 00 00 00  call 2
         9:  bc 01 00 00 00 00 00 00  w1 = w0
        10:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
        11:  04 02 00 00 02 00 00 00  w2 += 2
        12:  b4 00 00 00 ff ff ff ff  w0 = -1
        13:  1e 21 01 00 00 00 00 00  if w1 == w2 goto +1 <LBB0_3>
        14:  b4 00 00 00 02 00 00 00  w0 = 2

  0000000000000078 LBB0_3:
        15:  95 00 00 00 00 00 00 00  exit

Now, if we compile the example with global functions, the setup changes. Relocations are now against specifically the test_pkt_access_subprog1 and test_pkt_access_subprog2 symbols, with test_pkt_access_subprog2 pointing 24 bytes into its respective section (.text), i.e., 3 instructions in:

  0000000000000008  000000070000000a R_BPF_64_32  0000000000000000 test_pkt_access_subprog1
  0000000000000048  000000080000000a R_BPF_64_32  0000000000000018 test_pkt_access_subprog2

Call instructions now encode offsets relative to function symbols and are both set to -1:

  0000000000000000 test_pkt_access_subprog1:
         0:  61 10 00 00 00 00 00 00  r0 = *(u32 *)(r1 + 0)
         1:  64 00 00 00 01 00 00 00  w0 <<= 1
         2:  95 00 00 00 00 00 00 00  exit

  0000000000000018 test_pkt_access_subprog2:
         3:  61 20 00 00 00 00 00 00  r0 = *(u32 *)(r2 + 0)
         4:  0c 10 00 00 00 00 00 00  w0 += w1
         5:  95 00 00 00 00 00 00 00  exit

  0000000000000000 test_pkt_access:
         0:  bf 16 00 00 00 00 00 00  r6 = r1
  ===>   1:  85 10 00 00 ff ff ff ff  call -1
         2:  bc 01 00 00 00 00 00 00  w1 = w0
         3:  b4 00 00 00 02 00 00 00  w0 = 2
         4:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
         5:  64 02 00 00 01 00 00 00  w2 <<= 1
         6:  5e 21 09 00 00 00 00 00  if w1 != w2 goto +9 <LBB2_3>
         7:  b4 01 00 00 02 00 00 00  w1 = 2
         8:  bf 62 00 00 00 00 00 00  r2 = r6
  ===>   9:  85 10 00 00 ff ff ff ff  call -1
        10:  bc 01 00 00 00 00 00 00  w1 = w0
        11:  61 62 00 00 00 00 00 00  r2 = *(u32 *)(r6 + 0)
        12:  04 02 00 00 02 00 00 00  w2 += 2
        13:  b4 00 00 00 ff ff ff ff  w0 = -1
        14:  1e 21 01 00 00 00 00 00  if w1 == w2 goto +1 <LBB2_3>
        15:  b4 00 00 00 02 00 00 00  w0 = 2

  0000000000000080 LBB2_3:
        16:  95 00 00 00 00 00 00 00  exit

Thus, the right formula to calculate the target call offset after relocation should take into account the relocation's target symbol value (offset within section), the call instruction's imm32 offset, and (subtracting, to get a relative instruction offset) the instruction index of the call instruction itself. All that is shifted by the number of instructions in the main program, given all sub-programs are copied over after the main program.

Convert a few selftests relying on bpf-to-bpf calls to use global functions instead of static ones.

Fixes: 48cca7e44f9f ("libbpf: add support for bpf_call")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191119224447.3781271-1-andriin@fb.com

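A hedged restatement of that formula in code (variable names are hypothetical, not libbpf's): the symbol's byte offset within .text is converted to an instruction index and combined with the call's original imm, its own instruction index, and the size of the main program that subprograms are appended to.

  /* All quantities are in BPF instructions (8 bytes each), per the
   * description above; names are illustrative only. */
  static int relocated_call_imm(unsigned long long sym_off_bytes, int orig_imm,
                                int call_insn_idx, int main_prog_insn_cnt)
  {
          int target_idx_in_text = sym_off_bytes / 8;

          return target_idx_in_text + orig_imm - call_insn_idx + main_prog_insn_cnt;
  }

Checking it against the second call above: 0x48/8 is not needed, the symbol offset is 0x18 (3 insns), so 3 + (-1) - 9 + 17 = 10, and with call semantics the target becomes insn 9 + 10 + 1 = 20, which is exactly where test_pkt_access_subprog2 lands once both subprograms are appended after the 17-instruction main program.
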
2019-11-19  net-af_xdp: Use correct number of channels from ethtool  (Luigi Rizzo)

Drivers use different fields to report the number of channels, so take the maximum of all data channels (rx, tx, combined) when determining the size of the xsk map. The current code used only 'combined', which is set to 0 in some drivers, e.g. mlx4.

Tested: compiled and ran xdpsock -q 3 -r -S on mlx4

Signed-off-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20191119001951.92930-1-lrizzo@google.com

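A hedged sketch of the idea (not the exact xsk.c change; the fallback to 1 is an assumption): once ETHTOOL_GCHANNELS has filled in struct ethtool_channels, size the map from the largest of the rx, tx and combined counts rather than from 'combined' alone.

  #include <linux/ethtool.h>

  /* Pick the largest data-channel count a driver reports, so drivers that
   * leave 'combined' at 0 still get a usable map size. */
  static unsigned int max_data_channels(const struct ethtool_channels *ch)
  {
          unsigned int max = ch->max_rx;

          if (ch->max_tx > max)
                  max = ch->max_tx;
          if (ch->max_combined > max)
                  max = ch->max_combined;
          return max ? max : 1;
  }
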
2019-11-18  libtraceevent: Fix parsing of event %o and %X argument types  (Konstantin Khlebnikov)

Add missing "%o" and "%X". Ext4 events use "%o" for printing i_mode.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Link: http://lore.kernel.org/lkml/157338066113.6548.11461421296091086041.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2019-11-18  libbpf: Make global data internal arrays mmap()-able, if possible  (Andrii Nakryiko)

Add detection of BPF_F_MMAPABLE flag support for arrays and add it as an extra flag to internal global data maps, if supported by the kernel. This allows users to memory-map global data and use it without BPF map operations, greatly simplifying the user experience.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191117172806.2195367-5-andriin@fb.com

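A hedged userspace sketch of what this enables (how the global-data map is located, and the mapping length, are left to the caller): when the kernel supports BPF_F_MMAPABLE, the .data/.bss maps can simply be mmap()ed.

  #include <sys/mman.h>
  #include <bpf/libbpf.h>

  /* Map the object's global-data map read-write so globals can be read and
   * written without bpf_map_update_elem(); len should be the map's value
   * size rounded up to a page. */
  static void *map_global_data(const struct bpf_map *data_map, size_t len)
  {
          return mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                      bpf_map__fd(data_map), 0);
  }
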
2019-11-15  libbpf: Add support for attaching BPF programs to other BPF programs  (Alexei Starovoitov)

Extend the libbpf API to pass attach_prog_fd into bpf_object__open.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-19-ast@kernel.org

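A hedged usage sketch (treating attach_prog_fd as a bpf_object_open_opts field is my assumption about the commit; the target FD comes from elsewhere): the attach target is supplied at open time.

  #include <bpf/libbpf.h>

  /* Open an object whose fentry/fexit programs should attach to the BPF
   * program behind target_prog_fd. */
  static struct bpf_object *open_for_attach(const char *path, int target_prog_fd)
  {
          DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
                  .attach_prog_fd = target_prog_fd,
          );

          return bpf_object__open_file(path, &opts);
  }
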
2019-11-15  selftests/bpf: Add test for BPF trampoline  (Alexei Starovoitov)

Add a sanity test for the BPF trampoline that checks kernel functions with up to 6 arguments of different sizes.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-10-ast@kernel.org

2019-11-15  libbpf: Add support to attach to fentry/fexit tracing progs  (Alexei Starovoitov)

Teach libbpf to recognize tracing program types and attach them to fentry/fexit.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-7-ast@kernel.org

2019-11-15  libbpf: Introduce btf__find_by_name_kind()  (Alexei Starovoitov)

Introduce the btf__find_by_name_kind() helper to search BTF by name and kind, since name alone can be ambiguous.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191114185720.1641606-6-ast@kernel.org

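A hedged usage sketch (the looked-up function name is arbitrary): resolve a BTF type ID by name plus kind, so a FUNC is not confused with a same-named typedef or struct.

  #include <bpf/btf.h>
  #include <linux/btf.h>

  /* Returns the BTF type ID of the named function, or a negative error if
   * it is not found. */
  static __s32 find_func_btf_id(const struct btf *btf, const char *name)
  {
          return btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
  }
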
2019-11-10  libbpf: Allow for creating Rx or Tx only AF_XDP sockets  (Magnus Karlsson)

The libbpf AF_XDP code is extended to allow for the creation of Rx-only or Tx-only sockets. Previously it returned an error if the socket was not initialized for both Rx and Tx.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-4-git-send-email-magnus.karlsson@intel.com

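A hedged sketch (umem, ring and config setup omitted): passing NULL for the ring you don't need now yields an Rx-only or Tx-only socket instead of an error.

  #include <bpf/xsk.h>

  /* Rx-only: supply the rx ring and pass NULL for the tx ring. */
  static int create_rx_only_socket(struct xsk_socket **xsk, const char *ifname,
                                   __u32 queue_id, struct xsk_umem *umem,
                                   struct xsk_ring_cons *rx,
                                   const struct xsk_socket_config *cfg)
  {
          return xsk_socket__create(xsk, ifname, queue_id, umem, rx, NULL, cfg);
  }
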
2019-11-10  libbpf: Support XDP_SHARED_UMEM with external XDP program  (Magnus Karlsson)

Add support in libbpf to create multiple sockets that share a single umem. Note that an external XDP program needs to be supplied that routes the incoming traffic to the desired sockets. So you need to supply the libbpf_flag XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD and load your own XDP program.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: William Tu <u9012063@gmail.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Link: https://lore.kernel.org/bpf/1573148860-30254-2-git-send-email-magnus.karlsson@intel.com

2019-11-10  libbpf: Add getter for program size  (Toke Høiland-Jørgensen)

This adds a new getter for the BPF program size (in bytes). This is useful for a caller that is trying to predict how much memory will be locked by loading a BPF object into the kernel.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333185272.88376.10996937115395724683.stgit@toke.dk

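A hedged usage sketch (assuming the getter follows the bpf_program__* naming used elsewhere in libbpf): sum the per-program sizes to estimate how much memory a load will lock.

  #include <bpf/libbpf.h>

  /* Rough lower bound on the memory the kernel will lock for the object's
   * programs. */
  static size_t object_prog_bytes(struct bpf_object *obj)
  {
          struct bpf_program *prog;
          size_t total = 0;

          bpf_object__for_each_program(prog, obj)
                  total += bpf_program__size(prog);
          return total;
  }
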
2019-11-10  libbpf: Add bpf_get_link_xdp_info() function to get more XDP information  (Toke Høiland-Jørgensen)

Currently, libbpf only provides a function to get a single ID for the XDP program attached to the interface. However, it can be useful to get the full set of program IDs attached, along with the attachment mode, in one go. Add a new getter function to support this, using an extendible structure to carry the information. Express the old bpf_get_link_xdp_id() function in terms of the new function.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157333185164.88376.7520653040667637246.stgit@toke.dk

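A hedged usage sketch (the field names of the extendible structure are my assumption): fetch the attached XDP program IDs and the attach mode in one call.

  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Query the XDP state of an interface; the info struct is zeroed first so
   * newer fields default to 0 with older libbpf versions. */
  static int show_xdp_info(int ifindex)
  {
          struct xdp_link_info info = {};
          int err;

          err = bpf_get_link_xdp_info(ifindex, &info, sizeof(info), 0);
          if (err)
                  return err;
          printf("prog id %u, attach mode %u\n", info.prog_id, info.attach_mode);
          return 0;
  }
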
2019-11-10  libbpf: Use pr_warn() when printing netlink errors  (Toke Høiland-Jørgensen)

The netlink functions were using fprintf(stderr, ) directly to print out error messages, instead of going through the usual logging macros. This makes it impossible for the calling application to silence or redirect those error messages. Fix this by switching to pr_warn() in nlattr.c and netlink.c.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333185055.88376.15999360127117901443.stgit@toke.dk

2019-11-10  libbpf: Propagate EPERM to caller on program load  (Toke Høiland-Jørgensen)

When loading an eBPF program, libbpf overrides the return code for EPERM errors instead of returning it to the caller. This makes it hard to figure out what went wrong on load.

In particular, EPERM is returned when the system rlimit is too low to lock the memory required for the BPF program. Previously, this was somewhat obscured because the rlimit error would be hit on map creation (which does return it correctly). However, since maps can now be reused, object load can proceed all the way to loading programs without hitting the error; propagating it even in this case makes it possible for the caller to react appropriately (and, e.g., attempt to raise the rlimit before retrying).

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333184946.88376.11768171652794234561.stgit@toke.dk

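A hedged sketch of the caller-side reaction this enables (assuming the usual libbpf convention of negative errno return codes): raise RLIMIT_MEMLOCK and retry when the load reports EPERM.

  #include <errno.h>
  #include <sys/resource.h>
  #include <bpf/libbpf.h>

  /* With EPERM propagated, a caller can bump the memlock rlimit and try
   * the load again instead of failing opaquely. */
  static int load_with_rlimit_retry(struct bpf_object *obj)
  {
          struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };
          int err = bpf_object__load(obj);

          if (err == -EPERM && setrlimit(RLIMIT_MEMLOCK, &r) == 0)
                  err = bpf_object__load(obj);
          return err;
  }
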
2019-11-10  libbpf: Unpin auto-pinned maps if loading fails  (Toke Høiland-Jørgensen)

Since the automatic map-pinning happens during load, it will leave pinned maps around if the load fails at a later stage. Fix this by unpinning any pinned maps on cleanup. To avoid unpinning pinned maps that were reused rather than newly pinned, add a new boolean property on struct bpf_map to keep track of whether that map was reused or not; and only unpin those maps that were not reused.

Fixes: 57a00f41644f ("libbpf: Add auto-pinning of maps when loading BPF objects")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/157333184731.88376.9992935027056165873.stgit@toke.dk

2019-11-07  libbpf: Improve handling of corrupted ELF during map initialization  (Andrii Nakryiko)

If we get an ELF file with a "maps" section but no symbols pointing to it, we'll end up with a division by zero. Add a check against this situation and exit early with an error. Found by Coverity scan against the Github libbpf sources.

Fixes: bf82927125dd ("libbpf: refactor map initialization")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-6-andriin@fb.com

2019-11-07  libbpf: Make btf__resolve_size logic always check size error condition  (Andrii Nakryiko)

Always perform the size check in btf__resolve_size. This makes the logic a bit more robust against corrupted BTF and silences LGTM/Coverity complaints about an always-true (size < 0) check.

Fixes: 69eaab04c675 ("btf: extract BTF type size calculation")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-5-andriin@fb.com

2019-11-07  libbpf: Fix another potential overflow issue in bpf_prog_linfo  (Andrii Nakryiko)

Fix a few issues found by Coverity and LGTM.

Fixes: b053b439b72a ("bpf: libbpf: bpftool: Print bpf_line_info during prog dump")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-4-andriin@fb.com

2019-11-07  libbpf: Fix potential overflow issue  (Andrii Nakryiko)

Fix a potential overflow issue found by LGTM analysis, based on the Github libbpf source code.

Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com

2019-11-07  libbpf: Fix memory leak/double free issue  (Andrii Nakryiko)

A Coverity scan against the Github libbpf code found the issue of not freeing memory and leaving already-freed memory still referenced from bpf_program. Fix it by re-assigning successfully reallocated memory sooner.

Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107020855.3834758-2-andriin@fb.com

2019-11-07  libbpf: Fix negative FD close() in xsk_setup_xdp_prog()  (Andrii Nakryiko)

Fix an issue reported by static analysis (Coverity). If bpf_prog_get_fd_by_id() fails, xsk_lookup_bpf_maps() will fail as well and the clean-up code will attempt close() with fd=-1. Fix by checking the bpf_prog_get_fd_by_id() return result and exiting early.

Fixes: 10a13bb40e54 ("libbpf: remove qidconf and better support external bpf programs.")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191107054059.313884-1-andriin@fb.com

2019-11-06  libbpf: Simplify BPF_CORE_READ_BITFIELD_PROBED usage  (Andrii Nakryiko)

Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ, in general, i.e., just return the read result or 0, if the underlying bpf_probe_read() failed.

In practice, real applications rarely check the bpf_probe_read() result, because it has to always work or otherwise it's a bug. So propagating the internal bpf_probe_read() error from this macro hurts usability without providing real benefits in practice. This patch fixes the issue and simplifies usage, noticeable even in the selftest itself.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20191106201500.2582438-1-andriin@fb.com

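A hedged sketch of the simplified usage (the struct, field, section and source pointer are hypothetical placeholders): the macro now evaluates directly to the read value, 0 on failure.

  #include <linux/types.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  /* Hypothetical CO-RE target type with a bitfield. */
  struct pkt_meta {
          __u32 flags: 3;
  } __attribute__((preserve_access_index));

  SEC("kprobe/some_kernel_func")
  int read_flags(void *ctx)
  {
          struct pkt_meta *meta = ctx;   /* placeholder pointer for the sketch */

          /* After the simplification: just take the value; no output argument
           * or error code to check. */
          __u64 flags = BPF_CORE_READ_BITFIELD_PROBED(meta, flags);

          return flags ? 1 : 0;
  }

  char _license[] SEC("license") = "GPL";
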
2019-11-06  libsubcmd: Use -O0 with DEBUG=1  (James Clark)

When a 'make DEBUG=1' build is done, the command parser is still built with -O6 and is hard to step through; fix it by using -O0 in that case.

Signed-off-by: James Clark <james.clark@arm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: nd <nd@arm.com>
Link: http://lore.kernel.org/lkml/20191028113340.4282-1-james.clark@arm.com
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2019-11-06  libsubcmd: Move EXTRA_FLAGS to the end to allow overriding existing flags  (James Clark)

Move EXTRA_WARNINGS and EXTRA_FLAGS to the end of the compilation line, otherwise they cannot be used to override the default values.

Signed-off-by: James Clark <james.clark@arm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: nd <nd@arm.com>
Link: http://lore.kernel.org/lkml/20191028113340.4282-1-james.clark@arm.com
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

2019-11-04  libbpf: Add support for field size relocations  (Andrii Nakryiko)

Add the bpf_core_field_size() macro, capturing a relocation against field size. Adjust bits of internal libbpf relocation logic to allow capturing size relocations of various field types: arrays, structs/unions, enums, etc.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-4-andriin@fb.com

2019-11-04  libbpf: Add support for relocatable bitfields  (Andrii Nakryiko)

Add support for the new field relocation kinds, necessary to support relocatable bitfield reads. Provide a macro for abstracting the necessary code doing full relocatable bitfield extraction into a u64 value. Two separate macros are provided:

- BPF_CORE_READ_BITFIELD macro for direct memory read-enabled BPF programs (e.g., typed raw tracepoints). It uses direct memory dereference to extract the bitfield backing integer value.

- BPF_CORE_READ_BITFIELD_PROBED macro for cases where bpf_probe_read() needs to be used to extract the same backing integer value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191101222810.1246166-3-andriin@fb.com

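A hedged sketch of the direct-read variant (struct, field, section and source pointer are hypothetical): in programs with direct memory access the bitfield is extracted without bpf_probe_read().

  #include <linux/types.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_core_read.h>

  /* Hypothetical CO-RE target type with a bitfield. */
  struct task_meta {
          __u32 in_flight: 5;
  } __attribute__((preserve_access_index));

  SEC("tp_btf/sched_switch")
  int read_in_flight(void *ctx)
  {
          struct task_meta *m = ctx;   /* placeholder pointer for the sketch */

          /* Direct-memory variant: the relocated bitfield is read and shifted
           * out of its backing integer without a helper call. */
          __u64 in_flight = BPF_CORE_READ_BITFIELD(m, in_flight);

          return in_flight > 0;
  }

  char _license[] SEC("license") = "GPL";
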
2019-11-02  libbpf: Add auto-pinning of maps when loading BPF objects  (Toke Høiland-Jørgensen)

This adds support to libbpf for setting map pinning information as part of the BTF map declaration, to get automatic map pinning (and reuse) on load. The pinning type currently only supports a single PIN_BY_NAME mode, where each map will be pinned by its name in a path that can be overridden, but defaults to /sys/fs/bpf.

Since auto-pinning only does something if any maps actually have a 'pinning' BTF attribute set, we default the new option to enabled, on the assumption that seamless pinning is what most callers want.

When a map has a pin_path set at load time, libbpf will compare the map pinned at that location (if any), and if the attributes match, will re-use that map instead of creating a new one. If no existing map is found, the newly created map will instead be pinned at the location.

Programs wanting to customise the pinning can override the pinning paths using bpf_map__set_pin_path() before calling bpf_object__load() (including setting it to NULL to disable pinning of a particular map).

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269298092.394725.3966306029218559681.stgit@toke.dk

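A hedged example of such a BTF map declaration (map name and sizes are arbitrary; LIBBPF_PIN_BY_NAME is assumed to be the enum value that selects PIN_BY_NAME): with this in the object, load pins or reuses /sys/fs/bpf/counters by default.

  #include <linux/types.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 1024);
          __type(key, __u32);
          __type(value, __u64);
          __uint(pinning, LIBBPF_PIN_BY_NAME);   /* pinned as /sys/fs/bpf/counters */
  } counters SEC(".maps");
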
2019-11-02  libbpf: Move directory creation into _pin() functions  (Toke Høiland-Jørgensen)

The existing pin_*() functions all try to create the parent directory before pinning. Move this check into the per-object _pin() functions instead. This ensures consistent behaviour when auto-pinning is added (which doesn't go through the top-level pin_maps() function), at the cost of a few more calls to mkdir().

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269297985.394725.5882630952992598610.stgit@toke.dk

2019-11-02  libbpf: Store map pin path and status in struct bpf_map  (Toke Høiland-Jørgensen)

Support storing and setting a pin path in struct bpf_map, which can be used for automatic pinning. Also store the pin status so we can avoid attempts to re-pin a map that has already been pinned (or reused from a previous pinning).

The behaviour of bpf_object__{un,}pin_maps() is changed so that if it is called with a NULL path argument (which was previously illegal), it will (un)pin only those maps that have a pin_path set.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269297876.394725.14782206533681896279.stgit@toke.dk

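A hedged usage sketch (map name and path are arbitrary): overriding, or disabling, a map's pin path before load.

  #include <bpf/libbpf.h>

  /* Point "counters" at a custom pin location; passing NULL instead would
   * disable pinning for that map. */
  static int set_custom_pin(struct bpf_object *obj)
  {
          struct bpf_map *map = bpf_object__find_map_by_name(obj, "counters");

          if (!map)
                  return -1;
          return bpf_map__set_pin_path(map, "/sys/fs/bpf/myapp/counters");
  }
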
2019-11-02  libbpf: Fix error handling in bpf_map__reuse_fd()  (Toke Høiland-Jørgensen)

bpf_map__reuse_fd() was calling close() in the error path before returning an error value based on errno. However, close() can change errno, so that can lead to potentially misleading error messages. Instead, explicitly store errno in the err variable before each goto.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157269297769.394725.12634985106772698611.stgit@toke.dk

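A hedged illustration of the pattern the fix adopts (generic file-descriptor code, not the libbpf function itself): capture errno before any cleanup call that might overwrite it.

  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>

  static int dup_file_fd(const char *path, int target_fd)
  {
          int fd, err;

          fd = open(path, O_RDONLY);
          if (fd < 0)
                  return -errno;

          if (dup2(fd, target_fd) < 0) {
                  err = -errno;   /* save it: close() below may change errno */
                  close(fd);
                  return err;
          }

          close(fd);
          return 0;
  }
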
2019-10-31  libbpf: Add support for prog_tracing  (Alexei Starovoitov)

Clean up libbpf from the expected_attach_type == attach_btf_id hack and introduce BPF_PROG_TYPE_TRACING.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191030223212.953010-3-ast@kernel.org