path: root/tools
Age  Commit message  Author
2023-09-18  kselftest: rtnetlink: add pause and pause on fail flag  (Daniel Mendes)
'Pause' prompts the user to press Enter to continue running tests once one test has finished. 'Pause on fail' prompts the user to press Enter only when a test fails. Modifications to kci_test_addrlft() and kci_test_ipsec_offload() ensure that whenever end_test is called, [ $ret -ne 0 ] indicates failure. This allows end_test to implement the pause-on-fail functionality very easily. Signed-off-by: Daniel Mendes <dmendes@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-18  kselftest: rtnetlink.sh: add verbose flag  (Daniel Mendes)
Use a run_cmd helper function, similar to other selftests, to add verbose functionality, i.e. print executed commands and their output. Many commands silence or redirect output; this can be removed since the verbose helper function captures the output anyway and only prints it if VERBOSE is true. Similarly, the helper variant that pipes to grep searches both stderr and stdout, which makes output redirection unnecessary in those cases. Signed-off-by: Daniel Mendes <dmendes@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-17  vsock/test: track bytes in sk_buff merging test for SOCK_SEQPACKET  (Stefano Garzarella)
The test was a bit complicated to read. Added variables to keep track of the bytes read and to be read in each step. Also some comments. The test is unchanged. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Arseniy Krasnov <avkrasnov@salutedevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-17  vsock/test: use send_buf() in vsock_test.c  (Stefano Garzarella)
We have a very common pattern used in vsock_test that we can now replace with the new send_buf(). This allows us to reuse the code we already had to check the actual return value and wait for all the bytes to be sent with an appropriate timeout. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Arseniy Krasnov <avkrasnov@salutedevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-17  vsock/test: add send_buf() utility function  (Stefano Garzarella)
Move the code of send_byte() out into a new utility function that can be used to send a generic buffer. This new function can be used when we need to send a custom buffer and not just a single 'A' byte. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Arseniy Krasnov <avkrasnov@salutedevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
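For illustration only, a minimal sketch of what a "send a whole buffer" helper can look like; the name and signature are assumptions for the example, and the timeout handling of the real helper is omitted:

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/socket.h>
  #include <sys/types.h>

  /* Send all 'len' bytes of 'buf', retrying on short writes and EINTR. */
  static void send_whole_buf(int fd, const void *buf, size_t len, int flags)
  {
          size_t off = 0;

          while (off < len) {
                  ssize_t n = send(fd, (const char *)buf + off, len - off, flags);

                  if (n < 0) {
                          if (errno == EINTR)
                                  continue;
                          perror("send");
                          exit(EXIT_FAILURE);
                  }
                  off += n;
          }
  }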
2023-09-17  vsock/test: use recv_buf() in vsock_test.c  (Stefano Garzarella)
We have a very common pattern used in vsock_test that we can now replace with the new recv_buf(). This allows us to reuse the code we already had to check the actual return value and wait for all bytes to be received with an appropriate timeout. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Arseniy Krasnov <avkrasnov@salutedevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-17  vsock/test: add recv_buf() utility function  (Stefano Garzarella)
Move the code of recv_byte() out into a new utility function that can be used to receive a generic buffer. This new function can be used when we need to receive a custom buffer and not just a single 'A' byte. Signed-off-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Arseniy Krasnov <avkrasnov@salutedevices.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-17  Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Alexei Starovoitov says:

====================
The following pull-request contains BPF updates for your *net-next* tree. We've added 73 non-merge commits during the last 9 day(s) which contain a total of 79 files changed, 5275 insertions(+), 600 deletions(-).

The main changes are:

1) Basic BTF validation in libbpf, from Andrii Nakryiko.
2) bpf_assert(), bpf_throw(), exceptions in bpf progs, from Kumar Kartikeya Dwivedi.
3) next_thread cleanups, from Oleg Nesterov.
4) Add mcpu=v4 support to arm32, from Puranjay Mohan.
5) Add support for __percpu pointers in bpf progs, from Yonghong Song.
6) Fix bpf tailcall interaction with bpf trampoline, from Leon Hwang.
7) Raise irq_work in bpf_mem_alloc while irqs are disabled to improve refill probability, from Hou Tao.

Please consider pulling these changes from:

  git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git

Thanks a lot! Also thanks to reporters, reviewers and testers of commits in this pull-request:

Alan Maguire, Andrey Konovalov, Dave Marchevsky, "Eric W. Biederman", Jiri Olsa, Maciej Fijalkowski, Quentin Monnet, Russell King (Oracle), Song Liu, Stanislav Fomichev, Yonghong Song
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-16  selftests/bpf: Add tests for BPF exceptions  (Kumar Kartikeya Dwivedi)
Add selftests to cover success and failure cases of API usage, runtime behavior and invariants that need to be maintained for implementation correctness. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20230912233214.1518551-18-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-16  selftests/bpf: Add BPF assertion macros  (Kumar Kartikeya Dwivedi)
Add macros implementing an 'assert' statement primitive, built on top of the BPF exceptions support introduced in previous patches. The bpf_assert_*_with variants allow supplying a value which can then be inspected within the exception handler to signify the assert statement that led to the program being terminated abruptly, or be returned by the default exception handler. Note that only 64-bit scalar values are supported with these assertion macros, as during testing I found other cases quite unreliable in the presence of compiler shifts/manipulations extracting the value of the right width from registers, scrubbing the verifier's bounds information and knowledge about the value in the register. Thus, it is easier to reliably support this feature with only the full register width, and support both signed and unsigned variants. The bpf_assert_range is interesting in particular, which clamps the value in the [begin, end] (both inclusive) range within verifier state, and emits a check for the same at runtime. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20230912233214.1518551-17-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
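A rough sketch of the idea, for illustration only; the macro name below is made up for the example (the actual helpers live in the selftests' bpf_experimental.h), while bpf_throw() is the kfunc introduced by this series:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  extern void bpf_throw(u64 cookie) __ksym;

  /* Simplified assert: throw with a cookie when the condition does not hold. */
  #define my_bpf_assert_with(cond, value)        \
          ({                                     \
                  if (!(cond))                   \
                          bpf_throw(value);      \
          })

  SEC("tc")
  int assert_example(struct __sk_buff *ctx)
  {
          /* Abort the program with cookie 42 on an unexpected length. */
          my_bpf_assert_with(ctx->len < 0x20000, 42);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";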
2023-09-16  libbpf: Add support for custom exception callbacks  (Kumar Kartikeya Dwivedi)
Add support to libbpf to append exception callbacks when loading a program. The exception callback is found by discovering the declaration tag 'exception_callback:<value>' and finding the callback in the value of the tag. The process is done in two steps. First, for each main program, the bpf_object__sanitize_and_load_btf function finds and marks its corresponding exception callback as defined by the declaration tag on it. Second, bpf_object__reloc_code is modified to append the indicated exception callback at the end of the instruction iteration (since exception callback will never be appended in that loop, as it is not directly referenced). Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20230912233214.1518551-16-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-16  libbpf: Refactor bpf_object__reloc_code  (Kumar Kartikeya Dwivedi)
Refactor bpf_object__append_subprog_code out of bpf_object__reloc_code to be able to reuse it to append subprog related code for the exception callback to the main program. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20230912233214.1518551-15-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-16  bpf: Add support for custom exception callbacks  (Kumar Kartikeya Dwivedi)
By default, the subprog generated by the verifier to handle a thrown exception hardcodes a return value of 0. To allow user-defined logic and modification of the return value when an exception is thrown, introduce the 'exception_callback:' declaration tag, which marks a callback as the default exception handler for the program. The format of the declaration tag is 'exception_callback:<value>', where <value> is the name of the exception callback. Each main program can be tagged using this BTF declaration tag to associate it with an exception callback. In case the tag is absent, the default callback is used. As such, the exception callback cannot be modified at runtime, only set during verification. Allowing modification of the callback for the current program execution at runtime leads to issues when the programs begin to nest, as any per-CPU state maintaining this information will have to be saved and restored. We don't want it to stay in bpf_prog_aux as this takes a global effect for all programs. An alternative solution is spilling the callback pointer at a known location on the program stack on entry, and then passing this location to bpf_throw as a parameter. However, since exceptions are geared more towards a use case where they are ideally never invoked, optimizing for this use case and adding to the complexity has diminishing returns. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20230912233214.1518551-7-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
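For illustration, a hedged sketch of how a program could be associated with a custom exception callback via this declaration tag; the function and program names are invented for the example, and the exact constraints on the callback follow the kernel series, not this sketch:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  extern void bpf_throw(u64 cookie) __ksym;

  /* Custom handler: receives the bpf_throw() cookie, and its return value
   * becomes the return value of the program that threw the exception. */
  int my_exception_cb(u64 cookie)
  {
          return cookie > 0 ? 1 : 0;
  }

  /* Tag the main program with 'exception_callback:<name>' to pick the handler. */
  SEC("tc")
  __attribute__((btf_decl_tag("exception_callback:my_exception_cb")))
  int throwing_prog(struct __sk_buff *ctx)
  {
          if (ctx->len == 0)
                  bpf_throw(1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";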
2023-09-16  bpf: Implement BPF exceptions  (Kumar Kartikeya Dwivedi)
This patch implements BPF exceptions, and introduces a bpf_throw kfunc to allow programs to throw exceptions during their execution at runtime. A bpf_throw invocation is treated as an immediate termination of the program, returning back to its caller within the kernel, unwinding all stack frames. This allows the program to simplify its implementation, by testing for runtime conditions which the verifier has no visibility into, and assert that they are true. In case they are not, the program can simply throw an exception from the other branch.

BPF exceptions are explicitly *NOT* an unlikely slowpath error handling primitive, and this objective has guided design choices of the implementation of them within the kernel (with the bulk of the cost for unwinding the stack offloaded to the bpf_throw kfunc).

The implementation of this mechanism requires use of the add_hidden_subprog mechanism introduced in the previous patch, which generates a couple of instructions to move R1 to R0 and exit. The JIT then rewrites the prologue of this subprog to take the stack pointer and frame pointer as inputs and reset the stack frame, popping all callee-saved registers saved by the main subprog. The bpf_throw function then walks the stack at runtime, and invokes this exception subprog with the stack and frame pointers as parameters.

Reviewers must take note that currently the main program is made to save all callee-saved registers on x86_64 during entry into the program. This is because we must do an equivalent of a lightweight context switch when unwinding the stack, therefore we need the callee-saved registers of the caller of the BPF program to be able to return with a sane state. Note that we have to additionally handle r12, even though it is not used by the program, because when throwing the exception the program makes an entry into the kernel which could clobber r12 after saving it on the stack. To be able to preserve the value we received on program entry, we push r12 and restore it from the generated subprogram when unwinding the stack.

For now, bpf_throw invocation fails when lingering resources or locks exist in that path of the program. In a future followup, bpf_throw will be extended to perform frame-by-frame unwinding to release lingering resources for each stack frame, removing this limitation. Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20230912233214.1518551-5-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-15  selftest, bpf: enable cpu v4 tests for arm32  (Puranjay Mohan)
Now that all the cpuv4 instructions are supported by the arm32 JIT, enable the selftests for arm32. Signed-off-by: Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20230907230550.1417590-8-puranjay12@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-15  tools: ynl: extend netdev sample to dump xdp-rx-metadata-features  (Stanislav Fomichev)
The tool can be used to verify that everything works end to end.

Unrelated updates:
- include tools/include/uapi to pick the latest kernel uapi headers
- print "xdp-features" and "xdp-rx-metadata-features" so it's clear which bitmask is being dumped

Cc: netdev@vger.kernel.org Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/20230913171350.369987-4-sdf@google.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-09-15  bpf: expose information about supported xdp metadata kfunc  (Stanislav Fomichev)
Add new xdp-rx-metadata-features member to netdev netlink which exports a bitmask of supported kfuncs. Most of the patch is autogenerated (headers), the only relevant part is netdev.yaml and the changes in netdev-genl.c to marshal into netlink.

Example output on veth:

  $ ip link add veth0 type veth peer name veth1 # ifindex == 12
  $ ./tools/net/ynl/samples/netdev 12

  Select ifc ($ifindex; or 0 = dump; or -2 ntf check): 12
  veth1[12]   xdp-features (23): basic redirect rx-sg
              xdp-rx-metadata-features (3): timestamp hash
              xdp-zc-max-segs=0

Cc: netdev@vger.kernel.org Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/20230913171350.369987-3-sdf@google.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-09-14  selftests/bpf: Skip module_fentry_shadow test when bpf_testmod is not available  (Artem Savkov)
This test relies on bpf_testmod, so skip it if the module is not available. Fixes: aa3d65de4b900 ("bpf/selftests: Test fentry attachment to shadowed functions") Signed-off-by: Artem Savkov <asavkov@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230914124928.340701-1-asavkov@redhat.com
2023-09-14  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Paolo Abeni)
Cross-merge networking fixes after downstream PR. No conflicts. Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-09-14  Merge tag 'net-6.6-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from Paolo Abeni:
 "Quite unusually, this does not contain any fix coming from subtrees (nf, ebpf, wifi, etc).

  Current release - regressions:
   - bcmasp: fix possible OOB write in bcmasp_netfilt_get_all_active()

  Previous releases - regressions:
   - ipv4: fix one memleak in __inet_del_ifa()
   - tcp: fix bind() regressions for v4-mapped-v6 addresses
   - tls: do not free tls_rec on async operation in bpf_exec_tx_verdict()
   - dsa: fixes for SJA1105 FDB regressions
   - veth: update XDP feature set when bringing up device
   - igb: fix hangup when enabling SR-IOV

  Previous releases - always broken:
   - kcm: fix memory leak in error path of kcm_sendmsg()
   - smc: fix data corruption in smcr_port_add
   - microchip: fix possible memory leak for vcap_dup_rule()"

* tag 'net-6.6-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (37 commits)
  kcm: Fix error handling for SOCK_DGRAM in kcm_sendmsg().
  net: renesas: rswitch: Add spin lock protection for irq {un}mask
  net: renesas: rswitch: Fix unmasking irq condition
  igb: clean up in all error paths when enabling SR-IOV
  ixgbe: fix timestamp configuration code
  selftest: tcp: Add v4-mapped-v6 cases in bind_wildcard.c.
  selftest: tcp: Move expected_errno into each test case in bind_wildcard.c.
  selftest: tcp: Fix address length in bind_wildcard.c.
  tcp: Fix bind() regression for v4-mapped-v6 non-wildcard address.
  tcp: Fix bind() regression for v4-mapped-v6 wildcard address.
  tcp: Factorise sk_family-independent comparison in inet_bind2_bucket_match(_addr_any).
  ipv6: fix ip6_sock_set_addr_preferences() typo
  veth: Update XDP feature set when bringing up device
  net: macb: fix sleep inside spinlock
  net/tls: do not free tls_rec on async operation in bpf_exec_tx_verdict()
  net: ethernet: mtk_eth_soc: fix pse_port configuration for MT7988
  net: ethernet: mtk_eth_soc: fix uninitialized variable
  kcm: Fix memory leak in error path of kcm_sendmsg()
  r8152: check budget for r8152_poll()
  net: dsa: sja1105: block FDB accesses that are concurrent with a switch reset
  ...
2023-09-14  selftests/xsk: display command line options with -h  (Magnus Karlsson)
Add the -h option to display all command line options available for test_xsk.sh and xskxceiver. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-11-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: fail single test instead of all tests  (Magnus Karlsson)
In a number of places, when an error occurs, exit_with_error() is called, which terminates the whole test suite. This is not always desirable, as it would be more logical to only fail that test and then go on with the other ones. So change this in a number of places in which I thought it would be more logical to just fail the test in question. Examples of this are in code that is only used by a single test. Also delete a pointless if-statement in receive_pkts() that has an exit_with_error() in it. It can never occur since the return value is unsigned and the test is for less than zero. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-10-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
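As a sketch of the pattern described here (names invented for the example, not the actual xskxceiver.c code): report a failure for the current test and return to the caller, instead of exiting the whole suite:

  #include <stdio.h>

  #define TEST_PASS    0
  #define TEST_FAILURE 1

  /* Stand-in for the body of one test. */
  static int do_test_body(void)
  {
          return 0;
  }

  /* Fail only this test; the caller keeps iterating over the remaining tests. */
  static int run_one_test(const char *name)
  {
          if (do_test_body()) {
                  printf("# %s failed\n", name);
                  return TEST_FAILURE;
          }
          return TEST_PASS;
  }

  int main(void)
  {
          const char *tests[] = { "SEND_RECEIVE", "POLL_RX" };
          int failed = 0;

          for (unsigned int i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
                  failed += run_one_test(tests[i]);
          return failed ? 1 : 0;
  }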
2023-09-14  selftests/xsk: use ksft_print_msg uniformly  (Magnus Karlsson)
Use ksft_print_msg() instead of printf() and fprintf() in all places as the kselftests framework is being used. There is only one exception and that is for the list-of-tests print out option, since no tests are run in that case. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-9-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: add option to run single test  (Magnus Karlsson)
Add a command line option to be able to run a single test. This option (-t) takes a number from the list of tests available with the "-l" option. Here are two examples:

Run test number 2, the "receive single packet" test, in all available modes:

  ./test_xsk.sh -t 2

Run test number 21, the metadata copy test, in skb mode only:

  ./test_xsk.sh -t 21 -m skb

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-8-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: add option that lists all tests  (Magnus Karlsson)
Add a command line option (-l) that lists all the tests. The number before the test will be used in the next commit for specifying a single test to run. Here is an example of the output:

  Tests:
  0: SEND_RECEIVE
  1: SEND_RECEIVE_2K_FRAME
  2: SEND_RECEIVE_SINGLE_PKT
  3: POLL_RX
  4: POLL_TX
  5: POLL_RXQ_FULL
  6: POLL_TXQ_FULL
  7: SEND_RECEIVE_UNALIGNED
  :
  :

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-7-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: declare test names in struct  (Magnus Karlsson)
Declare the test names statically in a struct so that we can refer to them when adding support for executing a single test in the next commit. Before this patch, the test names were not declared in a single place, which made it impossible to refer to them. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-6-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: move all tests to separate functions  (Magnus Karlsson)
Prepare for the capability to be able to run a single test by moving all the tests to their own functions. This function can then be called to execute that test in the next commit. Also, the tests named RUN_TO_COMPLETION_* were not named well, so change them to SEND_RECEIVE_* as it is just a basic send and receive test of 4K packets. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-5-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: add option to only run tests in a single mode  (Magnus Karlsson)
Add an option -m on the command line that allows the user to run the tests in a single mode instead of all of them. Valid modes are skb, drv, and zc (zero-copy). An example, to run the test suite in drv mode only:

  ./test_xsk.sh -m drv

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-4-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  selftests/xsk: add timeout for Tx thread  (Magnus Karlsson)
Add a timeout for the transmission thread. If packets are not completed properly, for some reason, the test harness would previously get stuck forever in a while loop. But with this patch, this timeout will trigger, flag the test as a failure, and continue with the next test. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-3-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
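A minimal sketch of the kind of timeout described here (illustrative only, not the actual xskxceiver.c implementation): bound the completion-wait loop with a deadline instead of spinning forever:

  #include <stdbool.h>
  #include <stdio.h>
  #include <time.h>

  /* Hypothetical completion check used by the sketch. */
  static bool all_packets_completed(void)
  {
          return false;
  }

  /* Wait for completions, but give up after timeout_sec and report failure. */
  static int wait_for_tx_completion(int timeout_sec)
  {
          struct timespec start, now;

          clock_gettime(CLOCK_MONOTONIC, &start);
          while (!all_packets_completed()) {
                  clock_gettime(CLOCK_MONOTONIC, &now);
                  if (now.tv_sec - start.tv_sec >= timeout_sec) {
                          printf("# Tx thread timed out, failing this test\n");
                          return 1;   /* flag failure, continue with the next test */
                  }
          }
          return 0;
  }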
2023-09-14  selftests/xsk: print per packet info in verbose mode  (Magnus Karlsson)
Print info about every packet in verbose mode, both for Tx and Rx. This is useful to have when a test fails, or to validate that a test is really doing what it was designed to do. Info on what is supposed to be received and sent is also printed for the custom packet streams, since they differ from the base line. Here is an example:

  Tx addr: 37e0 len: 64 options: 0 pkt_nb: 8
  Tx addr: 4000 len: 64 options: 0 pkt_nb: 9
  Rx: addr: 100 len: 64 options: 0 pkt_nb: 0 valid: 1
  Rx: addr: 1100 len: 64 options: 0 pkt_nb: 1 valid: 1
  Rx: addr: 2100 len: 64 options: 0 pkt_nb: 4 valid: 1
  Rx: addr: 3100 len: 64 options: 0 pkt_nb: 8 valid: 1
  Rx: addr: 4100 len: 64 options: 0 pkt_nb: 9 valid: 1

One pointless verbose print statement is also deleted and another one is made clearer. Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-2-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-14  test/vsock: shutdowned socket test  (Arseniy Krasnov)
This adds two tests for 'shutdown()' call. It checks that SIGPIPE is sent when MSG_NOSIGNAL is not set and vice versa. Both flags SHUT_WR and SHUT_RD are tested. Signed-off-by: Arseniy Krasnov <avkrasnov@salutedevices.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2023-09-13  Merge tag 'trace-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace  (Linus Torvalds)
Pull tracing fixes from Steven Rostedt:

 - Add missing LOCKDOWN checks for eventfs callers

   When LOCKDOWN is active for tracing, it causes inconsistent state when some functions succeed and others fail.

 - Use dput() to free the top level eventfs descriptor

   There was a race between accesses and freeing it.

 - Fix a long standing bug that eventfs exposed due to changing timings by dynamically creating files. That is, if an event file is opened for an instance, there's nothing preventing the instance from being removed, which will make accessing the files cause use-after-free bugs.

 - Fix a ring buffer race that happens when iterating over the ring buffer while writers are active. Check to make sure not to read the event meta data if it's beyond the end of the ring buffer sub buffer.

 - Fix the print trigger that disappeared because the test to create it was looking for the event dir field being filled, but now it has the "ef" field filled for the eventfs structure.

 - Remove the unused "dir" field from the event structure.

 - Fix the order of the trace_dynamic_info as it had it backwards for the offset and len fields for which one was for which endianness.

 - Fix NULL pointer dereference with eventfs_remove_rec()

   If an allocation fails in one of the eventfs_add_*() functions, the caller of it in event_subsystem_dir() or event_create_dir() assigns the result to the structure. But it's assigning the ERR_PTR and not NULL. This was passed to eventfs_remove_rec() which expects either a good pointer or a NULL, not ERR_PTR. The fix is to not assign the ERR_PTR to the structure, but to keep it NULL on error.

 - Fix list_for_each_rcu() to use list_for_each_srcu() in dcache_dir_open_wrapper(). One iteration of the code used RCU but because it had to call sleepable code, it had to be changed to use SRCU, but one of the iterations was missed.

 - Fix synthetic event print function to use "as_u64" instead of passing in a pointer to the union. To fix big/little endian issues, the u64 that represented several types was turned into a union to define the types properly.

* tag 'trace-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  eventfs: Fix the NULL pointer dereference bug in eventfs_remove_rec()
  tracefs/eventfs: Use list_for_each_srcu() in dcache_dir_open_wrapper()
  tracing/synthetic: Print out u64 values properly
  tracing/synthetic: Fix order of struct trace_dynamic_info
  selftests/ftrace: Fix dependencies for some of the synthetic event tests
  tracing: Remove unused trace_event_file dir field
  tracing: Use the new eventfs descriptor for print trigger
  ring-buffer: Do not attempt to read past "commit"
  tracefs/eventfs: Free top level files on removal
  ring-buffer: Avoid softlockup in ring_buffer_resize()
  tracing: Have event inject files inc the trace array ref count
  tracing: Have option files inc the trace array ref count
  tracing: Have current_trace inc the trace array ref count
  tracing: Have tracing_max_latency inc the trace array ref count
  tracing: Increase trace array ref count on enable and filter files
  tracefs/eventfs: Use dput to free the toplevel events directory
  tracefs/eventfs: Add missing lockdown checks
  tracefs: Add missing lockdown check to tracefs_create_dir()
2023-09-13  selftests/tc-testing: cls_u32: add tests for classid  (Pedro Tammela)
As discussed in '3044b16e7c6f', cls_u32 was handling the use of classid incorrectly. Add a test to check if it's conforming to the correct behaviour. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-13  selftests/tc-testing: cls_route: add tests for classid  (Pedro Tammela)
As discussed in 'b80b829e9e2c', cls_route was handling the use of classid incorrectly. Add a test to check if it's conforming to the correct behaviour. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-13  selftests/tc-testing: cls_fw: add tests for classid  (Pedro Tammela)
As discussed in '76e42ae83199', cls_fw was handling the use of classid incorrectly. Add a few tests to check if it's conforming to the correct behaviour. Reviewed-by: Victor Nogueira <victor@mojatatu.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-13  selftest: tcp: Add v4-mapped-v6 cases in bind_wildcard.c.  (Kuniyuki Iwashima)
We add these 8 test cases in bind_wildcard.c to check bind() conflicts.

  1st bind()          2nd bind()
  ----------          ----------
  0.0.0.0             ::FFFF:0.0.0.0
  ::FFFF:0.0.0.0      0.0.0.0
  0.0.0.0             ::FFFF:127.0.0.1
  ::FFFF:127.0.0.1    0.0.0.0
  127.0.0.1           ::FFFF:0.0.0.0
  ::FFFF:0.0.0.0      127.0.0.1
  127.0.0.1           ::FFFF:127.0.0.1
  ::FFFF:127.0.0.1    127.0.0.1

All tests passed without bhash2, and with bhash2 plus this series.

Before bhash2:

  $ uname -r
  6.0.0-rc1-00393-g0bf73255d3a3
  $ ./bind_wildcard
  ...
  # PASSED: 16 / 16 tests passed.

Just after bhash2:

  $ uname -r
  6.0.0-rc1-00394-g28044fc1d495
  $ ./bind_wildcard
  ...
  ok 15 bind_wildcard.v4_local_v6_v4mapped_local.v4_v6
  not ok 16 bind_wildcard.v4_local_v6_v4mapped_local.v6_v4
  # FAILED: 15 / 16 tests passed.

On net.git:

  $ ./bind_wildcard
  ...
  not ok 14 bind_wildcard.v4_local_v6_v4mapped_any.v6_v4
  not ok 16 bind_wildcard.v4_local_v6_v4mapped_local.v6_v4
  # FAILED: 13 / 16 tests passed.

With this series:

  $ ./bind_wildcard
  ...
  # PASSED: 16 / 16 tests passed.

Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
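To make the conflict being tested concrete, a stand-alone sketch (not the selftest itself, and the port number is arbitrary): bind 0.0.0.0 first, then try the v4-mapped-v6 wildcard ::FFFF:0.0.0.0 on the same port and expect the second bind() to fail with EADDRINUSE:

  #include <arpa/inet.h>
  #include <errno.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
          struct sockaddr_in v4 = { .sin_family = AF_INET, .sin_port = htons(50000) };
          struct sockaddr_in6 v6 = { .sin6_family = AF_INET6, .sin6_port = htons(50000) };
          int fd4, fd6;

          inet_pton(AF_INET, "0.0.0.0", &v4.sin_addr);
          inet_pton(AF_INET6, "::ffff:0.0.0.0", &v6.sin6_addr);

          fd4 = socket(AF_INET, SOCK_STREAM, 0);
          fd6 = socket(AF_INET6, SOCK_STREAM, 0);

          if (bind(fd4, (struct sockaddr *)&v4, sizeof(v4)))
                  perror("first bind");

          /* Expected to conflict with the first bind and fail with EADDRINUSE. */
          if (bind(fd6, (struct sockaddr *)&v6, sizeof(v6)))
                  printf("second bind failed as expected: %s\n", strerror(errno));

          close(fd4);
          close(fd6);
          return 0;
  }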
2023-09-13  selftest: tcp: Move expected_errno into each test case in bind_wildcard.c.  (Kuniyuki Iwashima)
This is a preparation patch for the following patch. Let's define expected_errno in each test case so that we can add other test cases easily. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-13  selftest: tcp: Fix address length in bind_wildcard.c.  (Kuniyuki Iwashima)
The selftest passes the IPv6 address length for an IPv4 address. We should pass the correct length. Note inet_bind_sk() does not check if the size is larger than sizeof(struct sockaddr_in), so there is no real bug in this selftest. Fixes: 13715acf8ab5 ("selftest: Add test for bind() conflicts.") Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: David S. Miller <davem@davemloft.net>
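The fix boils down to passing an address length that matches the address family; as a hedged illustration (not the selftest code itself):

  #include <netinet/in.h>
  #include <sys/socket.h>

  /* Pick the length that matches the family of the sockaddr actually passed
   * to bind(), instead of always using sizeof(struct sockaddr_in6). */
  static socklen_t addrlen_for(const struct sockaddr *sa)
  {
          return sa->sa_family == AF_INET ? sizeof(struct sockaddr_in)
                                          : sizeof(struct sockaddr_in6);
  }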
2023-09-12  selftests/bpf: Add testcases for tailcall infinite loop fixing  (Leon Hwang)
Add 4 test cases to confirm the tailcall infinite loop bug has been fixed. Like tailcall_bpf2bpf cases, do fentry/fexit on the bpf2bpf, and then check the final count result.

  tools/testing/selftests/bpf/test_progs -t tailcalls
  226/13  tailcalls/tailcall_bpf2bpf_fentry:OK
  226/14  tailcalls/tailcall_bpf2bpf_fexit:OK
  226/15  tailcalls/tailcall_bpf2bpf_fentry_fexit:OK
  226/16  tailcalls/tailcall_bpf2bpf_fentry_entry:OK
  226     tailcalls:OK
  Summary: 1/16 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com> Link: https://lore.kernel.org/r/20230912150442.2009-4-hffilwlqm@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-09-12  Merge tag 'linux-kselftest-next-6.6-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest  (Linus Torvalds)
Pull kselftest fixes from Shuah Khan:

 - kselftest runner script to propagate SIGTERM to runner child to avoid kselftest hang

 - install symlinks required for test execution to avoid test failures

 - kselftest dependency checker script argument parsing

* tag 'linux-kselftest-next-6.6-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests: Keep symlinks, when possible
  selftests: fix dependency checker script
  kselftest/runner.sh: Propagate SIGTERM to runner child
  selftests/ftrace: Correctly enable event in instance-event.tc
2023-09-11  selftests/bpf: Correct map_fd to data_fd in tailcalls  (Leon Hwang)
Get and check data_fd. It should not check map_fd again. Meanwhile, correct some 'return' to 'goto out'. Thanks to Maciej for the suggestion in the "bpf, x64: Fix tailcall infinite loop"[0] discussions. [0] https://lore.kernel.org/bpf/e496aef8-1f80-0f8e-dcdd-25a8c300319a@gmail.com/T/#m7d3b601066ba66400d436b7e7579b2df4a101033 Fixes: 79d49ba048ec ("bpf, testing: Add various tail call test cases") Fixes: 3b0379111197 ("selftests/bpf: Add tailcall_bpf2bpf tests") Fixes: 5e0b0a4c52d3 ("selftests/bpf: Test tail call counting with bpf2bpf and data on stack") Signed-off-by: Leon Hwang <hffilwlqm@gmail.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Link: https://lore.kernel.org/r/20230906154256.95461-1-hffilwlqm@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2023-09-10  selftests/net: Improve bind_bhash.sh to accommodate predictable network interface names  (Juntong Deng)
Starting with v197, systemd uses predictable network interface names; the traditional interface naming scheme (eth0) is deprecated, therefore it cannot be assumed that the eth0 interface exists on the host. This modification makes the bind_bhash test program run in a separate network namespace, so it no longer needs to consider the name of the network interface on the host. Signed-off-by: Juntong Deng <juntong.deng@outlook.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-09-09  Merge tag 'perf-tools-for-v6.6-1-2023-09-05' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools Pull perf tools updates from Arnaldo Carvalho de Melo: "perf tools maintainership: - Add git information for perf-tools and perf-tools-next trees and branches to the MAINTAINERS file. That is where development now takes place and myself and Namhyung Kim have write access, more people to come as we emulate other maintainer groups. perf record: - Record kernel data maps when 'perf record --data' is used, so that global variables can be resolved and used in tools that do data profiling. perf trace: - Remove the old, experimental support for BPF events in which a .c file was passed as an event: "perf trace -e hello.c" to then get compiled and loaded. The only known usage for that, that shipped with the kernel as an example for such events, augmented the raw_syscalls tracepoints and was converted to a libbpf skeleton, reusing all the user space components and the BPF code connected to the syscalls. In the end just the way to glue the BPF part and the user space type beautifiers changed, now being performed by libbpf skeletons. The next step is to use BTF to do pretty printing of all syscall types, as discussed with Alan Maguire and others. Now, on a perf built with BUILD_BPF_SKEL=1 we get most if not all path/filenames/strings, some of the networking data structures, perf_event_attr, etc, i.e. systemwide tracing of nanosleep calls and perf_event_open syscalls while 'perf stat' runs 'sleep' for 5 seconds: # perf trace -a -e *nanosleep,perf* perf stat -e cycles,instructions sleep 5 0.000 ( 9.034 ms): perf/327641 perf_event_open(attr_uptr: { type: 0 (PERF_TYPE_HARDWARE), size: 136, config: 0 (PERF_COUNT_HW_CPU_CYCLES), sample_type: IDENTIFIER, read_format: TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING, disabled: 1, inherit: 1, enable_on_exec: 1, exclude_guest: 1 }, pid: 327642 (perf), cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = 3 9.039 ( 0.006 ms): perf/327641 perf_event_open(attr_uptr: { type: 0 (PERF_TYPE_HARDWARE), size: 136, config: 0x1 (PERF_COUNT_HW_INSTRUCTIONS), sample_type: IDENTIFIER, read_format: TOTAL_TIME_ENABLED|TOTAL_TIME_RUNNING, disabled: 1, inherit: 1, enable_on_exec: 1, exclude_guest: 1 }, pid: 327642 (perf-exec), cpu: -1, group_fd: -1, flags: FD_CLOEXEC) = 4 ? ( ): gpm/991 ... [continued]: clock_nanosleep()) = 0 10.133 ( ): sleep/327642 clock_nanosleep(rqtp: { .tv_sec: 5, .tv_nsec: 0 }, rmtp: 0x7ffd36f83ed0) ... ? ( ): pool-gsd-smart/3051 ... [continued]: clock_nanosleep()) = 0 30.276 ( ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ... 223.215 (1000.430 ms): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) = 0 30.276 (2000.394 ms): gpm/991 ... [continued]: clock_nanosleep()) = 0 1230.814 ( ): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) ... 1230.814 (1000.404 ms): pool-gsd-smart/3051 ... [continued]: clock_nanosleep()) = 0 2030.886 ( ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ... 2237.709 (1000.153 ms): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) = 0 ? ( ): crond/1172 ... [continued]: clock_nanosleep()) = 0 3242.699 ( ): pool-gsd-smart/3051 clock_nanosleep(rqtp: { .tv_sec: 1, .tv_nsec: 0 }, rmtp: 0x7f6e7fffec90) ... 2030.886 (2000.385 ms): gpm/991 ... [continued]: clock_nanosleep()) = 0 3728.078 ( ): crond/1172 clock_nanosleep(rqtp: { .tv_sec: 60, .tv_nsec: 0 }, rmtp: 0x7ffe0971dcf0) ... 
3242.699 (1000.158 ms): pool-gsd-smart/3051 ... [continued]: clock_nanosleep()) = 0 4031.409 ( ): gpm/991 clock_nanosleep(rqtp: { .tv_sec: 2, .tv_nsec: 0 }, rmtp: 0x7ffcc6f73710) ... 10.133 (5000.375 ms): sleep/327642 ... [continued]: clock_nanosleep()) = 0 Performance counter stats for 'sleep 5': 2,617,347 cycles 1,855,997 instructions # 0.71 insn per cycle 5.002282128 seconds time elapsed 0.000855000 seconds user 0.000852000 seconds sys perf annotate: - Building with binutils' libopcode now is opt-in (BUILD_NONDISTRO=1) for licensing reasons, and we missed a build test on tools/perf/tests makefile. Since we now default to NDEBUG=1, we ended up segfaulting when building with BUILD_NONDISTRO=1 because a needed initialization routine was being "error checked" via an assert. Fix it by explicitly checking the result and aborting instead if it fails. We better back propagate the error, but at least 'perf annotate' on samples collected for a BPF program is back working when perf is built with BUILD_NONDISTRO=1. perf report/top: - Add back TUI hierarchy mode header, that is seen when using 'perf report/top --hierarchy'. - Fix the number of entries for 'e' key in the TUI that was preventing navigation of lines when expanding an entry. perf report/script: - Support cross platform register handling, allowing a perf.data file collected on one architecture to have registers sampled correctly displayed when analysis tools such as 'perf report' and 'perf script' are used on a different architecture. - Fix handling of event attributes in pipe mode, i.e. when one uses: perf record -o - | perf report -i - When no perf.data files are used. - Handle files generated via pipe mode with a version of perf and then read also via pipe mode with a different version of perf, where the event attr record may have changed, use the record size field to properly support this version mismatch. perf probe: - Accessing global variables from uprobes isn't supported, make the error message state that instead of stating that some minimal kernel version is needed to have that feature. This seems just a tool limitation, the kernel probably has all that is needed. perf tests: - Fix a reference count related leak in the dlfilter v0 API where the result of a thread__find_symbol_fb() is not matched with an addr_location__exit() to drop the reference counts of the resolved components (machine, thread, map, symbol, etc). Add a dlfilter test to make sure that doesn't regresses. - Lots of fixes for the 'perf test' written in shell script related to problems found with the shellcheck utility. - Fixes for 'perf test' shell scripts testing features enabled when perf is built with BUILD_BPF_SKEL=1, such as 'perf stat' bpf counters. - Add perf record sample filtering test, things like the following example, that gets implemented as a BPF filter attached to the event: # perf record -e task-clock -c 10000 --filter 'ip < 0xffffffff00000000' - Improve the way the task_analyzer test checks if libtraceevent is linked, using 'perf version --build-options' instead of the more expensinve 'perf record -e "sched:sched_switch"'. - Add support for riscv in the mmap-basic test. (This went as well via the RiscV tree, same contents). libperf: - Implement riscv mmap support (This went as well via the RiscV tree, same contents). perf script: - New tool that converts perf.data files to the firefox profiler format so that one can use the visualizer at https://profiler.firefox.com/. Done by Anup Sharma as part of this year's Google Summer of Code. 
One can generate the output and upload it to the web interface but Anup also automated everything: perf script gecko -F 99 -a sleep 60 - Support syscall name parsing on arm64. - Print "cgroup" field on the same line as "comm". perf bench: - Add new 'uprobe' benchmark to measure the overhead of uprobes with/without BPF programs attached to it. - breakpoints are not available on power9, skip that test. perf stat: - Add #num_cpus_online literal to be used in 'perf stat' metrics, and add this extra 'perf test' check that exemplifies its purpose: TEST_ASSERT_VAL("#num_cpus_online", expr__parse(&num_cpus_online, ctx, "#num_cpus_online") == 0); TEST_ASSERT_VAL("#num_cpus", expr__parse(&num_cpus, ctx, "#num_cpus") == 0); TEST_ASSERT_VAL("#num_cpus >= #num_cpus_online", num_cpus >= num_cpus_online); Miscellaneous: - Improve tool startup time by lazily reading PMU, JSON, sysfs data. - Improve error reporting in the parsing of events, passing YYLTYPE to error routines, so that the output can show were the parsing error was found. - Add 'perf test' entries to check the parsing of events improvements. - Fix various leak for things detected by -fsanitize=address, mostly things that would be freed at tool exit, including: - Free evsel->filter on the destructor. - Allow tools to register a thread->priv destructor and use it in 'perf trace'. - Free evsel->priv in 'perf trace'. - Free string returned by synthesize_perf_probe_point() when the caller fails to do all it needs. - Adjust various compiler options to not consider errors some warnings when building with broken headers found in things like python, flex, bison, as we otherwise build with -Werror. Some for gcc, some for clang, some for some specific version of those, some for some specific version of flex or bison, or some specific combination of these components, bah. - Allow customization of clang options for BPF target, this helps building on gentoo where there are other oddities where BPF targets gets passed some compiler options intended for the native build, so building with WERROR=0 helps while these oddities are fixed. - Dont pass ERR_PTR() values to perf_session__delete() in 'perf top' and 'perf lock', fixing some segfaults when handling some odd failures. - Add LTO build option. - Fix format of unordered lists in the perf docs (tools/perf/Documentation) - Overhaul the bison files, using constructs such as YYNOMEM. - Remove unused tokens from the bison .y files. - Add more comments to various structs. - A few LoongArch enablement patches. Vendor events (JSON): - Add JSON metrics for Yitian 710 DDR (aarch64). Things like: EventName, BriefDescription visible_window_limit_reached_rd, "At least one entry in read queue reaches the visible window limit.", visible_window_limit_reached_wr, "At least one entry in write queue reaches the visible window limit.", op_is_dqsosc_mpc , "A DQS Oscillator MPC command to DRAM.", op_is_dqsosc_mrr , "A DQS Oscillator MRR command to DRAM.", op_is_tcr_mrr , "A Temperature Compensated Refresh(TCR) MRR command to DRAM.", - Add AmpereOne metrics (aarch64). - Update N2 and V2 metrics (aarch64) and events using Arm telemetry repo. - Update scale units and descriptions of common topdown metrics on aarch64. 
Things like: - "MetricExpr": "stall_slot_frontend / (#slots * cpu_cycles)", - "BriefDescription": "Frontend bound L1 topdown metric", + "MetricExpr": "100 * (stall_slot_frontend / (#slots * cpu_cycles))", + "BriefDescription": "This metric is the percentage of total slots that were stalled due to resource constraints in the frontend of the processor.", - Update events for intel: meteorlake to 1.04, sapphirerapids to 1.15, Icelake+ metric constraints. - Update files for the power10 platform" * tag 'perf-tools-for-v6.6-1-2023-09-05' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools: (217 commits) perf parse-events: Fix driver config term perf parse-events: Fixes relating to no_value terms perf parse-events: Fix propagation of term's no_value when cloning perf parse-events: Name the two term enums perf list: Don't print Unit for "default_core" perf vendor events intel: Fix modifier in tma_info_system_mem_parallel_reads for skylake perf dlfilter: Avoid leak in v0 API test use of resolve_address() perf metric: Add #num_cpus_online literal perf pmu: Remove str from perf_pmu_alias perf parse-events: Make common term list to strbuf helper perf parse-events: Minor help message improvements perf pmu: Avoid uninitialized use of alias->str perf jevents: Use "default_core" for events with no Unit perf test stat_bpf_counters_cgrp: Enhance perf stat cgroup BPF counter test perf test shell stat_bpf_counters: Fix test on Intel perf test shell record_bpf_filter: Skip 6.2 kernel libperf: Get rid of attr.id field perf tools: Convert to perf_record_header_attr_id() libperf: Add perf_record_header_attr_id() perf tools: Handle old data in PERF_RECORD_ATTR ...
2023-09-08  Merge tag 'xarray-6.6' of git://git.infradead.org/users/willy/xarray  (Linus Torvalds)
Pull xarray fixes from Matthew Wilcox:

 - Fix a bug encountered by people using bittorrent where they'd get NULL pointer dereferences on page cache lookups when using XFS

 - Two documentation fixes

* tag 'xarray-6.6' of git://git.infradead.org/users/willy/xarray:
  idr: fix param name in idr_alloc_cyclic() doc
  xarray: Document necessary flag in alloc functions
  XArray: Do not return sibling entries from xa_load()
2023-09-08  selftests/ftrace: Fix dependencies for some of the synthetic event tests  (Naveen N Rao)
Commit b81a3a100cca1b ("tracing/histogram: Add simple tests for stacktrace usage of synthetic events") changed the output text in tracefs README, but missed updating some of the dependencies specified in selftests. This causes some of the tests to exit as unsupported. Fix this by changing the grep pattern. Since we want these tests to work on older kernels, match only against the common last part of the pattern. Link: https://lore.kernel.org/linux-trace-kernel/20230614091046.2178539-1-naveen@kernel.org Cc: <linux-kselftest@vger.kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Fixes: b81a3a100cca ("tracing/histogram: Add simple tests for stacktrace usage of synthetic events") Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-09-08  bpftool: Fix -Wcast-qual warning  (Denys Zagorui)
This cast was made on purpose for older libbpf, where the bpf_object_skeleton field is void * instead of const void *, to eliminate a warning (as I understand it, -Wincompatible-pointer-types-discards-qualifiers), but this cast introduces another warning (-Wcast-qual) for libbpf where the data field is const void *. It makes sense for bpftool to be in sync with libbpf from the kernel sources. Signed-off-by: Denys Zagorui <dzagorui@cisco.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20230907090210.968612-1-dzagorui@cisco.com
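For context, a tiny generic illustration of the warning in question (not the bpftool source): assigning a const-qualified pointer through an explicit (void *) cast trips -Wcast-qual, while keeping the qualifier does not:

  struct skel_data {
          const void *data;   /* const-qualified, as in newer layouts */
  };

  static void set_data(struct skel_data *s, const void *src)
  {
          s->data = (void *)src;   /* -Wcast-qual: the cast drops 'const' */
          s->data = src;           /* fine: qualifier preserved */
  }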
2023-09-08  selftests/bpf: trace_helpers.c: Add a global ksyms initialization mutex  (Rong Tao)
As Jirka said [0], we just need to make sure that global ksyms initialization won't race. [0] https://lore.kernel.org/lkml/ZPCbAs3ItjRd8XVh@krava/ Signed-off-by: Rong Tao <rongtao@cestc.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/tencent_5D0A837E219E2CFDCB0495DAD7D5D1204407@qq.com
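A minimal sketch of the kind of guard being described, assuming a pthread mutex around a one-time load (names are illustrative, not the exact trace_helpers.c symbols):

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t ksyms_mutex = PTHREAD_MUTEX_INITIALIZER;
  static bool ksyms_loaded;

  /* Hypothetical loader used by the sketch. */
  static int do_load_kallsyms(void)
  {
          return 0;
  }

  /* Serialize global ksyms initialization so concurrent callers cannot race. */
  int load_kallsyms_once(void)
  {
          int err = 0;

          pthread_mutex_lock(&ksyms_mutex);
          if (!ksyms_loaded) {
                  err = do_load_kallsyms();
                  if (!err)
                          ksyms_loaded = true;
          }
          pthread_mutex_unlock(&ksyms_mutex);
          return err;
  }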
2023-09-08  selftests/bpf: trace_helpers.c: Optimize kallsyms cache  (Rong Tao)
Static ksyms often have problems because the number of symbols exceeds the MAX_SYMS limit. Changing MAX_SYMS from 300000 to 400000, as in commit e76a014334a6 ("selftests/bpf: Bump and validate MAX_SYMS"), solves the problem somewhat, but it's not the perfect way. This commit uses dynamic memory allocation, which completely solves the problem caused by the limitation of the number of kallsyms. At the same time, add the APIs:

  load_kallsyms_local()
  ksym_search_local()
  ksym_get_addr_local()
  free_kallsyms_local()

These are used to solve the problem of selftests/bpf updating kallsyms after attaching new symbols during testmod testing. Signed-off-by: Rong Tao <rongtao@cestc.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/tencent_C9BDA68F9221F21BE4081566A55D66A9700A@qq.com
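The core idea, growing the symbol table dynamically instead of using a fixed MAX_SYMS array, could look roughly like this (a sketch under those assumptions, not the actual trace_helpers.c code):

  #include <stdlib.h>
  #include <string.h>

  struct ksym {
          long addr;
          char *name;
  };

  struct ksyms {
          struct ksym *syms;
          size_t cnt;
          size_t cap;
  };

  /* Append one symbol, doubling the backing array as needed. */
  static int ksyms_add(struct ksyms *ks, long addr, const char *name)
  {
          if (ks->cnt == ks->cap) {
                  size_t new_cap = ks->cap ? ks->cap * 2 : 1024;
                  struct ksym *tmp = realloc(ks->syms, new_cap * sizeof(*tmp));

                  if (!tmp)
                          return -1;
                  ks->syms = tmp;
                  ks->cap = new_cap;
          }
          ks->syms[ks->cnt].addr = addr;
          ks->syms[ks->cnt].name = strdup(name);
          if (!ks->syms[ks->cnt].name)
                  return -1;
          ks->cnt++;
          return 0;
  }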
2023-09-08  Merge tag 'landlock-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mic/linux  (Linus Torvalds)
Pull landlock updates from Mickaël Salaün:
 "One test fix and a __counted_by annotation"

* tag 'landlock-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mic/linux:
  selftests/landlock: Fix a resource leak
  landlock: Annotate struct landlock_rule with __counted_by
2023-09-08  selftests: Keep symlinks, when possible  (Björn Töpel)
When kselftest is built/installed with the 'gen_tar' target, rsync is used for the installation step to copy files. Extra care is needed for tests that have symlinks. Commit ae108c48b5d2 ("selftests: net: Fix cross-tree inclusion of scripts") added '-L' (transform symlink into referent file/dir) to rsync, to fix dangling links. However, that broke some tests where the symlink (being a symlink) is part of the test (e.g. exec:execveat). Use rsync's '--copy-unsafe-links' that does the right thing. Fixes: ae108c48b5d2 ("selftests: net: Fix cross-tree inclusion of scripts") Signed-off-by: Björn Töpel <bjorn@rivosinc.com> Reviewed-by: Benjamin Poirier <bpoirier@nvidia.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>