2023-02-01  riscv: disable generation of unwind tables  (Andreas Schwab)
GCC 13 will enable -fasynchronous-unwind-tables by default on riscv. In the kernel, we don't have any use for unwind tables yet, so disable them. More importantly, the .eh_frame section brings relocations (R_RISCV_32_PCREL, R_RISCV_SET{6,8,16}, R_RISCV_SUB{6,8,16}) into modules that we are not prepared to handle. Signed-off-by: Andreas Schwab <schwab@suse.de> Link: https://lore.kernel.org/r/mvmzg9xybqu.fsf@suse.de Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-01  riscv: kprobe: Fixup kernel panic when probing an illegal position  (Guo Ren)
The kernel would panic when a probe was placed at an illegal position, e.g. (with CONFIG_RISCV_ISA_C=n):

  echo 'p:hello kernel_clone+0x16 a0=%a0' >> kprobe_events
  echo 1 > events/kprobes/hello/enable
  cat trace

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: __do_sys_newfstatat+0xb8/0xb8
  CPU: 0 PID: 111 Comm: sh Not tainted 6.2.0-rc1-00027-g2d398fe49a4d #490
  Hardware name: riscv-virtio,qemu (DT)
  Call Trace:
  [<ffffffff80007268>] dump_backtrace+0x38/0x48
  [<ffffffff80c5e83c>] show_stack+0x50/0x68
  [<ffffffff80c6da28>] dump_stack_lvl+0x60/0x84
  [<ffffffff80c6da6c>] dump_stack+0x20/0x30
  [<ffffffff80c5ecf4>] panic+0x160/0x374
  [<ffffffff80c6db94>] generic_handle_arch_irq+0x0/0xa8
  [<ffffffff802deeb0>] sys_newstat+0x0/0x30
  [<ffffffff800158c0>] sys_clone+0x20/0x30
  [<ffffffff800039e8>] ret_from_syscall+0x0/0x4
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: __do_sys_newfstatat+0xb8/0xb8 ]---

The panic happens because the kprobe's ebreak instruction overwrites the middle of an existing kernel instruction. The user should guarantee the correctness of the probe position, but a wrong position must not be able to panic the kernel. This patch adds arch_check_kprobe() in arch_prepare_kprobe() to reject illegal positions (such as the middle of an instruction). Fixes: c22b0bcb1dd0 ("riscv: Add kprobes supported") Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org> Reviewed-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230201040604.3390509-1-guoren@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
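For reference, the check boils down to walking instruction boundaries from the symbol start up to the requested address; a sketch along those lines (helper and macro names assumed from the existing riscv code, not the verbatim patch):

  static bool __kprobes arch_check_kprobe(struct kprobe *p)
  {
          unsigned long tmp  = (unsigned long)p->addr - p->offset;
          unsigned long addr = (unsigned long)p->addr;

          /* Walk forward one instruction at a time; only addresses that
           * land exactly on an instruction boundary are legal. */
          while (tmp <= addr) {
                  if (tmp == addr)
                          return true;
                  tmp += GET_INSN_LENGTH(*(u16 *)tmp);
          }
          return false;
  }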
2023-02-01  nfp: correct cleanup related to DCB resources  (Huayu Chen)
This patch corrects two oversights relating to releasing resources and DCB initialisation.
1. If mapping of the dcbcfg_tbl area fails, an error should be propagated, allowing partial initialisation (probe) to be unwound.
2. Conversely, where dcbcfg_tbl is successfully mapped, it should be unmapped in nfp_nic_dcb_clean(), which is called via various error cleanup paths and on shutdown or removal of the PCIe device.
Fixes: 9b7fe8046d74 ("nfp: add DCB IEEE support") Signed-off-by: Huayu Chen <huayu.chen@corigine.com> Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/20230131163033.981937-1-simon.horman@corigine.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-01  nfp: flower: avoid taking mutex in atomic context  (Yanguo Li)
A mutex may sleep, which is not permitted in atomic context. Avoid a case where this may arise by moving the call to nfp_flower_lag_get_info_from_netdev() in nfp_tun_write_neigh() outside of the spinlock. Fixes: abc210952af7 ("nfp: flower: tunnel neigh support bond offload") Reported-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Yanguo Li <yanguo.li@corigine.com> Signed-off-by: Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/20230131080313.2076060-1-simon.horman@corigine.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-01  ipv6: ICMPV6: Use swap() instead of open coding it  (Jiapeng Chong)
swap() is a helper that exchanges two values. To avoid code duplication, use swap() instead of open coding the exchange.

  ./net/ipv6/icmp.c:344:25-26: WARNING opportunity for swap().

Reported-by: Abaci Robot <abaci@linux.alibaba.com> Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3896 Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20230131063456.76302-1-jiapeng.chong@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
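For illustration, the transformation looks roughly like this (variable names are hypothetical; the real change is at the location flagged above):

  /* before: open-coded exchange via a temporary */
  struct in6_addr tmp = fl6.saddr;
  fl6.saddr = fl6.daddr;
  fl6.daddr = tmp;

  /* after: the kernel's swap() helper */
  swap(fl6.saddr, fl6.daddr);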
2023-02-01  Merge branch 'ip-ip6_gre-fix-gre-tunnels-not-generating-ipv6-link-local-addresses'  (Jakub Kicinski)
Thomas Winter says:
====================
ip/ip6_gre: Fix GRE tunnels not generating IPv6 link local addresses

For our point-to-point GRE tunnels, they have IN6_ADDR_GEN_MODE_NONE when they are created, then we set IN6_ADDR_GEN_MODE_EUI64 when they come up to generate the IPv6 link-local address for the interface. Recently we found that they were no longer generating IPv6 addresses. Also, non-point-to-point tunnels were not generating any IPv6 link-local address and instead generating an IPv6 compat address, breaking IPv6 communication on the tunnel. These failures were caused by commit e5dd729460ca and this patch set aims to resolve these issues.
====================
Link: https://lore.kernel.org/r/20230131034646.237671-1-Thomas.Winter@alliedtelesis.co.nz Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-01  ip/ip6_gre: Fix non-point-to-point tunnel not generating IPv6 link local address  (Thomas Winter)
We recently found that our non-point-to-point tunnels were not generating any IPv6 link-local address and instead generating an IPv6 compat address, breaking IPv6 communication on the tunnel. Previously, addrconf_gre_config would always call addrconf_addr_gen and generate an EUI64 link-local address for the tunnel. Then commit e5dd729460ca changed the code path so that add_v4_addrs is called, but this only generates a compat IPv6 address for non-point-to-point tunnels. I assume the compat address is specifically for SIT tunnels, so that behaviour is kept only for SIT; GRE tunnels now always generate link-local addresses. Fixes: e5dd729460ca ("ip/ip6_gre: use the same logic as SIT interfaces when computing v6LL address") Signed-off-by: Thomas Winter <Thomas.Winter@alliedtelesis.co.nz> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-01  ip/ip6_gre: Fix changing addr gen mode not generating IPv6 link local address  (Thomas Winter)
For our point-to-point GRE tunnels, they have IN6_ADDR_GEN_MODE_NONE when they are created, then we set IN6_ADDR_GEN_MODE_EUI64 when they come up to generate the IPv6 link-local address for the interface. Recently we found that they were no longer generating IPv6 addresses. This issue would also have affected SIT tunnels. Commit e5dd729460ca changed the code path so that GRE tunnels generate an IPv6 address based on the tunnel source address. It also changed the code path so that GRE tunnels don't call addrconf_addr_gen in addrconf_dev_config, which is called by addrconf_sysctl_addr_gen_mode when the IN6_ADDR_GEN_MODE is changed. This patch aims to fix the issue by moving the code in addrconf_notify that calls the addr gen for GRE and SIT into a separate function and calling it in the places that expect the IPv6 address to be generated. The previous addrconf_dev_config is renamed to addrconf_eth_config since it only expected eth-type interfaces and follows the addrconf_gre/sit_config format. Part of this change means that the loopback address will be attempted to be configured when changing addr_gen_mode for lo. This should not be a problem because the address should exist anyway, and if it does already exist then no error is produced. Fixes: e5dd729460ca ("ip/ip6_gre: use the same logic as SIT interfaces when computing v6LL address") Signed-off-by: Thomas Winter <Thomas.Winter@alliedtelesis.co.nz> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-01  drm/amd/display: Properly handle additional cases where DCN is not supported  (Alex Deucher)
There could be boards with DCN listed in IP discovery, but no display hardware actually wired up. In this case the vbios display table will not be populated. Detect this case and skip loading DM when we detect it. v2: Mark DCN as harvested as well so other display checks elsewhere in the driver are handled properly. Cc: Aurabindo Pillai <aurabindo.pillai@amd.com> Reviewed-by: Aurabindo Pillai <aurabindo.pillai@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amdgpu: Enable vclk dclk node for gc11.0.3  (Yiqing Yao)
These sysfs nodes have been tested and are supported, so enable them. Signed-off-by: Yiqing Yao <yiqing.yao@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amd: Fix initialization for nbio 4.3.0  (Mario Limonciello)
A mistake has been made on some boards with NBIO 4.3.0 where some NBIO registers aren't properly set by the hardware. Ensure that they're set during initialization. Cc: Natikar Basavaraj <Basavaraj.Natikar@amd.com> Tested-by: Satyanarayana ReddyTVN <Satyanarayana.ReddyTVN@amd.com> Tested-by: Rutvij Gajjar <Rutvij.Gajjar@amd.com> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org # 6.1.x
2023-02-01  drm/amdgpu: enable HDP SD for gfx 11.0.3  (Evan Quan)
Enable HDP clock gating control for gfx 11.0.3. Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amd/pm: drop unneeded dpm features disablement for SMU 13.0.4/11  (Tim Huang)
PMFW will handle the features disablement properly for the gpu reset case; driver involvement may cause some unexpected issues. Cc: stable@vger.kernel.org # 6.1 Signed-off-by: Tim Huang <tim.huang@amd.com> Reviewed-by: Yifan Zhang <yifan1.zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amd/display: Reset DMUB mailbox SW state after HW reset  (Nicholas Kazlauskas)
[Why] Otherwise we can be out of sync with what's in the hardware, leading to us rerunning every command that's presently in the ringbuffer. [How] Reset software state for the mailboxes in hw_reset callback. This is already done as part of the mailbox init in hw_init, but we do need to remember to reset the last cached wptr value as well here. Reviewed-by: Hansen Dsouza <hansen.dsouza@amd.com> Acked-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amd/display: Unassign does_plane_fit_in_mall function from dcn3.2  (George Shen)
[Why] The hwss function does_plane_fit_in_mall is not applicable to dcn3.2 ASICs. Using it with dcn3.2 can result in undefined behaviour. [How] Assign the function pointer to NULL. Reviewed-by: Alvin Lee <Alvin.Lee2@amd.com> Acked-by: Alex Hung <alex.hung@amd.com> Signed-off-by: George Shen <george.shen@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
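In practice the change amounts to a single hook assignment in the dcn3.2 init table, roughly like this (table and field names assumed from the neighbouring dcn init code, shown only as an illustration):

  static const struct hwseq_private_funcs dcn32_private_funcs = {
          /* ... other hooks unchanged ... */
          .does_plane_fit_in_mall = NULL, /* not applicable on dcn3.2 */
  };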
2023-02-01  drm/amd/display: Adjust downscaling limits for dcn314  (Daniel Miess)
[Why] Lower max_downscale_ratio and the ARGB888 downscale factor to prevent cases where underflow may occur on dcn314. [How] Set max_downscale_ratio to 400 and the ARGB888 downscale factor to 250 for dcn314. Reviewed-by: Nicholas Kazlauskas <Nicholas.Kazlauskas@amd.com> Acked-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Daniel Miess <Daniel.Miess@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amd/display: Add missing brackets in calculation  (Daniel Miess)
[Why] Brackets are missing in the calculation for MIN_DST_Y_NEXT_START. [How] Add the missing brackets to this calculation. Reviewed-by: Nicholas Kazlauskas <Nicholas.Kazlauskas@amd.com> Acked-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Daniel Miess <Daniel.Miess@amd.com> Tested-by: Daniel Wheeler <daniel.wheeler@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2023-02-01  drm/amdgpu: update wave data type to 3 for gfx11  (Graham Sider)
SQ_WAVE_INST_DW0 isn't present on gfx11 compared to gfx10, so update wave data type to signify a difference. Signed-off-by: Graham Sider <Graham.Sider@amd.com> Reviewed-by: Mukul Joshi <Mukul.Joshi@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org # 6.1.x
2023-02-01  bnxt_en: Remove runtime interrupt vector allocation  (Ajit Khaparde)
Modified the bnxt_en code to create and pre-configure RDMA devices with the right MSI-X vector count for the ROCE driver to use. This is to align the ROCE driver to the auxiliary device model which will simply bind the driver without getting into PCI-related handling. All PCI-related logic will now be in the bnxt_en driver. Suggested-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  RDMA/bnxt_re: Remove the sriov config callback  (Ajit Khaparde)
Remove the SRIOV config callback which the bnxt_en driver was calling to reconfigure the chip resources for a PF device when VFs are created. The code is now modified to provision the VF resources based on the total VF count instead of the actual VF count. This allows the SRIOV config callback to be removed from the list of ulp_ops. Suggested-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  bnxt_en: Remove struct bnxt access from RoCE driver  (Hongguang Gao)
Decouple RoCE driver from directly accessing L2's private bnxt structure. Move the fields needed by RoCE driver into bnxt_en_dev. They'll be passed to RoCE driver by bnxt_rdma_aux_device_add() function. Signed-off-by: Hongguang Gao <hongguang.gao@broadcom.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  bnxt_en: Use auxiliary bus calls over proprietary calls  (Ajit Khaparde)
Wherever possible use the function ops provided by auxiliary bus instead of using proprietary ops. Defined bnxt_re_suspend and bnxt_re_resume calls which can be invoked by the bnxt_en driver instead of the ULP stop/start calls. Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  bnxt_en: Use direct API instead of indirection  (Ajit Khaparde)
For a single ULP user there is no need for complicated function indirection calls. Remove all this complexity in favour of direct function calls exported by the bnxt_en driver. This allows the code to be simplified greatly. Also remove the unused ulp_async_notifier. Suggested-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  bnxt_en: Remove usage of ulp_id  (Ajit Khaparde)
Since the driver continues to use the single ULP model, the extra complexity and indirection is unnecessary. Remove the usage of ulp_id from the code. Suggested-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  RDMA/bnxt_re: Use auxiliary driver interface  (Ajit Khaparde)
Use the auxiliary driver interface to load and unload the RoCE driver. The driver does not need to register the interface using the netdev notifier anymore. Removed the bnxt_re_dev_list, which is no longer needed. Currently probe, remove and shutdown ops have been implemented for the auxiliary device. Also remove excessive validation checks for rdev. Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  bnxt_en: Add auxiliary driver support  (Ajit Khaparde)
Add auxiliary driver support. An auxiliary device will be created if the hardware indicates support for RDMA. The bnxt_ulp_probe() function has been removed and a new bnxt_rdma_aux_device_add() function has been added. The bnxt_free_msix_vecs() and bnxt_req_msix_vecs() will now hold the RTNL lock when they call bnxt_close_nic() and bnxt_open_nic(), since the device close and open need to be protected under the RTNL lock. The operations between bnxt_en and bnxt_re will be protected using the en_ops_lock. This will be used by the bnxt_re driver in a follow-on patch to create ROCE interfaces. Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com> Reviewed-by: Selvin Xavier <selvin.xavier@broadcom.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
2023-02-01  blk-cgroup: don't update io stat for root cgroup  (Ming Lei)
We source root cgroup stats from the system-wide stats, see blkcg_print_stat and blkcg_rstat_flush, so don't update io stat for the root cgroup. This fixes the blkg leak issue introduced in commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()"), which started to grab blkg's reference when adding iostat_cpu into the percpu blkcg list, but this state won't be consumed by blkcg_rstat_flush(), where the blkg reference is dropped. Tested-by: Bart van Assche <bvanassche@acm.org> Reported-by: Bart van Assche <bvanassche@acm.org> Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()") Cc: Tejun Heo <tj@kernel.org> Cc: Waiman Long <longman@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20230202021804.278582-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
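The essence of the fix is to bail out of the per-cpu iostat bookkeeping early for the root cgroup, along these lines (a sketch based on the description above, not the exact diff):

  void blk_cgroup_bio_start(struct bio *bio)
  {
          struct blkcg *blkcg = bio->bi_blkg->blkcg;

          /* Root cgroup stats are sourced from the system-wide counters
           * (see blkcg_print_stat()), so skip the percpu iostat update and
           * never add the blkg to the percpu lockless list. */
          if (!cgroup_parent(blkcg->css.cgroup))
                  return;

          /* ... existing percpu iostat update and list insertion ... */
  }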
2023-02-02  powerpc/64s: Reconnect tlb_flush() to hash__tlb_flush()  (Michael Ellerman)
Commit baf1ed24b27d ("powerpc/mm: Remove empty hash__ functions") removed some empty hash MMU flushing routines, but got a bit overeager and also removed the call to hash__tlb_flush() from tlb_flush(). In regular use this doesn't lead to any noticeable breakage, which is a little concerning. Presumably there are flushes happening via other paths such as arch_leave_lazy_mmu_mode(), and/or a bit of luck. Fix it by reinstating the call to hash__tlb_flush(). Fixes: baf1ed24b27d ("powerpc/mm: Remove empty hash__ functions") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20230131111407.806770-1-mpe@ellerman.id.au
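The reinstated path looks roughly like this (a sketch based on the description; radix keeps its own flush):

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
          /* Radix has its own flushing; everything else must still go
           * through the hash flush that the earlier cleanup dropped. */
          if (radix_enabled())
                  radix__tlb_flush(tlb);
          else
                  hash__tlb_flush(tlb);
  }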
2023-02-02  selftests/bpf: xdp_hw_metadata use strncpy for ifname  (Jesper Dangaard Brouer)
The ifname char pointer is taken directly from the command line as input and the string is copied directly into struct ifreq via strcpy. This makes it easy to corrupt other members of ifreq and generally overflow the stack. Most often the ioctl will fail with:

  ./xdp_hw_metadata: ioctl(SIOCETHTOOL): Bad address

As people will likely copy-paste the code for getting NIC queue channels (rxq_num) and enabling HW timestamping (hwtstamp_ioctl), let's make this code a bit more secure by using strncpy. Fixes: 297a3f124155 ("selftests/bpf: Simple program to dump XDP RX metadata") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/167527272543.937063.16993147790832546209.stgit@firesoul
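The hardened copy is simply the bounded pattern below (a sketch of the approach the patch switches to):

  struct ifreq ifr = {};

  /* Bound the copy to the ifr_name field and keep it NUL-terminated,
   * instead of strcpy()ing an arbitrarily long argv[] string. */
  strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name) - 1);
  ifr.ifr_name[sizeof(ifr.ifr_name) - 1] = '\0';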
2023-02-02  selftests/bpf: xdp_hw_metadata correct status value in error(3)  (Jesper Dangaard Brouer)
The glibc error reporting function error():

  void error(int status, int errnum, const char *format, ...);

The status argument should be a positive value between 0-255, as it is passed over to the exit(3) function and becomes the shell exit status. The least significant byte of status (i.e., status & 0xFF) is returned to the shell parent. Fix this by using 1 instead of -1, as 1 corresponds to the C standard constant EXIT_FAILURE. Fixes: 297a3f124155 ("selftests/bpf: Simple program to dump XDP RX metadata") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/167527272038.937063.9137108142012298120.stgit@firesoul
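For illustration, the corrected pattern looks like this (the wrapper function name is made up for the example):

  #include <errno.h>
  #include <error.h>
  #include <stdlib.h>
  #include <net/if.h>
  #include <sys/ioctl.h>
  #include <linux/sockios.h>

  static void do_ethtool_ioctl(int fd, struct ifreq *ifr)
  {
          /* EXIT_FAILURE (1) stays within the 0-255 range that exit(3)
           * and the shell exit status can represent; -1 would wrap to 255. */
          if (ioctl(fd, SIOCETHTOOL, ifr) < 0)
                  error(EXIT_FAILURE, errno, "ioctl(SIOCETHTOOL)");
  }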
2023-02-02  selftests/bpf: xdp_hw_metadata cleanup causes segfault  (Jesper Dangaard Brouer)
Using xdp_hw_metadata I experience a segmentation fault after seeing "detaching bpf program....". On my system the segfault happened when accessing bpf_obj->skeleton in the xdp_hw_metadata__destroy(bpf_obj) call. That doesn't make any sense, as this memory has not been freed by the program at this point in time. Prior to calling xdp_hw_metadata__destroy(bpf_obj), the function close_xsk() is called for each RX-queue xsk. The real bug lies in close_xsk(), which unmaps the wrong memory pointer via munmap(). The call xsk_umem__delete(xsk->umem) will free xsk->umem, thus the call to munmap(xsk->umem, UMEM_SIZE) will have unpredictable behavior, and the man page explains that subsequent references to these pages will generate SIGSEGV. Unmapping xsk->umem_area instead removes the segfault. Fixes: 297a3f124155 ("selftests/bpf: Simple program to dump XDP RX metadata") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/167527271533.937063.5717065138099679142.stgit@firesoul
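A sketch of the corrected teardown, with struct field names as in the selftest (not the verbatim patch):

  static void close_xsk(struct xsk *xsk)
  {
          if (xsk->umem)
                  xsk_umem__delete(xsk->umem);
          if (xsk->socket)
                  xsk_socket__delete(xsk->socket);
          /* Unmap the area we mmap()ed ourselves, not the xsk->umem
           * pointer that xsk_umem__delete() has already freed. */
          munmap(xsk->umem_area, UMEM_SIZE);
  }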
2023-02-02  selftests/bpf: xdp_hw_metadata clear metadata when -EOPNOTSUPP  (Jesper Dangaard Brouer)
The AF_XDP userspace part of xdp_hw_metadata sees a non-zero value as a signal of the availability of rx_timestamp and rx_hash in the data_meta area. The kernel-side BPF-prog code doesn't initialize these members when the kernel returns an error, e.g. -EOPNOTSUPP. This memory area is not guaranteed to be zeroed and can contain garbage/previous values, which will be read and interpreted by the AF_XDP userspace side. This was tested on different drivers. The experience is that for most packets the data_meta area happens to be zeroed, but occasionally it contains garbage data.

Example of failure tested on ixgbe:

  poll: 1 (0)
  xsk_ring_cons__peek: 1
  0x18ec788: rx_desc[0]->addr=100000000008000 addr=8100 comp_addr=8000
  rx_hash: 3697961069
  rx_timestamp: 9024981991734834796 (sec:9024981991.7348)
  0x18ec788: complete idx=8 addr=8000

Converting to date:

  date -d @9024981991
  2255-12-28T20:26:31 CET

I chose a simple fix in this patch: when the kfunc fails or isn't supported, assign zero to the corresponding struct meta value. It's up to the individual BPF programmer to do something smarter that fits their use-case, like getting a software timestamp and marking a flag that gives the type of timestamp. Fixes: 297a3f124155 ("selftests/bpf: Simple program to dump XDP RX metadata") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/167527271027.937063.5177725618616476592.stgit@firesoul
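On the BPF side the fix amounts to zeroing the meta field whenever the kfunc reports an error, roughly (kfunc signatures as used by the selftest at that time; meta/ctx names are illustrative):

  ret = bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp);
  if (ret < 0)
          meta->rx_timestamp = 0; /* don't leak stale data_meta bytes */

  ret = bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash);
  if (ret < 0)
          meta->rx_hash = 0;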
2023-02-02  selftests/bpf: Fix unmap bug in prog_tests/xdp_metadata.c  (Jesper Dangaard Brouer)
The function close_xsk() unmaps the wrong memory pointer via munmap(). The call xsk_umem__delete(xsk->umem) has already freed xsk->umem. Thus the call to munmap(xsk->umem, UMEM_SIZE) will have unpredictable behavior that can lead to a segmentation fault elsewhere; as the man page explains, subsequent references to these pages will generate SIGSEGV. Fixes: e2a46d54d7a1 ("selftests/bpf: Verify xdp_metadata xdp->af_xdp path") Reported-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/167527517464.938135.13750760520577765269.stgit@firesoul
2023-02-02  Merge branch 'kfunc-annotation'  (Daniel Borkmann)
David Vernet says:
====================
This is v3 of the patchset [0]. v2 can be found at [1].

[0]: https://lore.kernel.org/bpf/Y7kCsjBZ%2FFrsWW%2Fe@maniforge.lan/T/
[1]: https://lore.kernel.org/lkml/20230123171506.71995-1-void@manifault.com/

Changelog:
----------
v2 -> v3:
- Go back to the __bpf_kfunc approach from v1. The BPF_KFUNC macro received pushback as it didn't match the more typical EXPORT_SYMBOL* APIs used elsewhere in the kernel. It's the longer term plan, but for now we're proposing something less controversial to fix kfuncs and BTF encoding.
- Add the __bpf_kfunc macro to newly added cpumask kfuncs.
- Add the __bpf_kfunc macro to newly added XDP metadata kfuncs, which were failing to be BTF encoded in the thread in [2].
- Update patch description(s) to reference the discussions in [2].
- Add a selftest that validates that a static kfunc with unused args is properly BTF encoded and can be invoked.

[2]: https://lore.kernel.org/all/fe5d42d1-faad-d05e-99ad-1c2c04776950@oracle.com/

v1 -> v2:
- Wrap the entire function signature in a BPF_KFUNC macro instead of using a __bpf_kfunc tag (Kumar)
- Update all kfunc definitions to use this macro.
- Update kfuncs.rst documentation to describe and illustrate the macro.
- Also clean up a few small parts of kfuncs.rst, e.g. some grammar, and in general making it a bit tighter.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2023-02-02  selftests/bpf: Add testcase for static kfunc with unused arg  (David Vernet)
kfuncs are allowed to be static, or not use one or more of their arguments. For example, bpf_xdp_metadata_rx_hash() in net/core/xdp.c is meant to be implemented by drivers, with the default implementation just returning -EOPNOTSUPP. As described in [0], such kfuncs can have their arguments elided, which can cause BTF encoding to be skipped. The new __bpf_kfunc macro should address this, and this patch adds a selftest which verifies that a static kfunc with at least one unused argument can still be encoded and invoked by a BPF program. Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230201173016.342758-5-void@manifault.com
2023-02-02  bpf: Add __bpf_kfunc tag to all kfuncs  (David Vernet)
Now that we have the __bpf_kfunc tag, we should add it to all existing kfuncs to ensure that they'll never be elided in LTO builds. Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230201173016.342758-4-void@manifault.com
2023-02-02  bpf: Document usage of the new __bpf_kfunc macro  (David Vernet)
Now that the __bpf_kfunc macro has been added to linux/btf.h, include a blurb about it in the kfuncs.rst file. In order for the macro to successfully render with .. kernel-doc, we'll also need to add it to the c_id_attributes array. Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230201173016.342758-3-void@manifault.com
2023-02-02  bpf: Add __bpf_kfunc tag for marking kernel functions as kfuncs  (David Vernet)
kfuncs are functions defined in the kernel, which may be invoked by BPF programs. They may or may not also be used as regular kernel functions, implying that they may be static (in which case the compiler could e.g. inline them away or elide one or more arguments), or they could have external linkage but potentially be elided in an LTO build if the function is observed to never be used and is stripped from the final kernel binary. This has already resulted in some issues, such as those discussed in [0], wherein changes in DWARF that identify when a parameter has been optimized out can break BTF encodings (and in general break the kfunc). [0]: https://lore.kernel.org/all/1675088985-20300-2-git-send-email-alan.maguire@oracle.com/ We therefore require a convenience macro that kfunc developers can simply add to their kfuncs, and which will prevent all of the above issues from happening. This is in contrast with what we have today, where some kfunc definitions have "noinline", some have "__used", and others are static and have neither. Note that longer term, this mechanism may be replaced by a macro that more closely resembles EXPORT_SYMBOL_GPL(), as described in [1]. For now, we're going with this shorter-term approach to fix existing issues in kfuncs. [1]: https://lore.kernel.org/lkml/Y9AFT4pTydKh+PD3@maniforge.lan/ Note as well that checkpatch complains about this patch with the following:

  ERROR: Macros with complex values should be enclosed in parentheses
  +#define __bpf_kfunc __used noinline

There seems to be a precedent for using this pattern in other places such as compiler_types.h (see e.g. __randomize_layout and noinstr), so it seems appropriate. Signed-off-by: David Vernet <void@manifault.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20230201173016.342758-2-void@manifault.com
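Usage then looks like the following, with a made-up kfunc name purely for illustration (the macro itself is the one quoted by checkpatch above):

  #define __bpf_kfunc __used noinline

  /* A static kfunc stays BTF-encodable and callable from BPF programs
   * even if the compiler would otherwise inline or elide it. */
  __bpf_kfunc int bpf_example_kfunc(struct task_struct *task)
  {
          return 0;
  }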
2023-02-01  Merge branch 'xdp-ice-mbuf'  (Daniel Borkmann)
Maciej Fijalkowski says:
====================
Although this work started as an effort to add multi-buffer XDP support to the ice driver, as usual it turned out that some other side stuff needed to be addressed, so let me give you an overview.

The first patch adjusts legacy-rx in a way that it will be possible to refer to the skb_shared_info at the end of the buffer when gathering up frame fragments within xdp_buff. Then, patches 2-9 prepare the ice driver in a way that the actual multi-buffer patches will be easier to swallow. 10 and 11 are the meat. What is worth mentioning is that this set actually *fixes* things, as patch 11 removes the logic based on next_dd/rs, and we previously stepped away from this for ice_xmit_zc(). Currently, AF_XDP ZC XDP_TX workload is off as there are two cleaning sides that can be triggered and the two of them work on different internal logic. This set unifies that and allows us to improve the performance by 2x with a trick in the last (13) patch. The 12th is a simple cleanup of no-longer-used fields from the Tx ring.

I might be wrong but I have not seen anyone reporting performance impact among patches that add XDP multi-buffer support to a particular driver. The numbers below were gathered via xdp_rxq_info and xdp_redirect_map on 1500 MTU:

  XDP_DROP      +1%
  XDP_PASS      -1.2%
  XDP_TX        -0.5%
  XDP_REDIRECT  -3.3%

Cherry on top, which is not directly related to mbuf support (last patch):

  XDP_TX ZC     +126%

The target we agreed on was to not degrade performance for any action by anything that would be over 5%, so our goal was met. Basically this set keeps the performance where it was. Redirect is slower due to more frequent tail bumps.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
2023-02-01  ice: xsk: Do not convert buff to frame for XDP_TX  (Maciej Fijalkowski)
Let us store the pointer to the xdp_buff that came from the xsk_buff_pool in tx_buf, so that it will be possible to recycle it via xsk_buff_free() on the Tx cleaning side. This way it is not necessary to do an expensive copy to another xdp_buff backed by a newly allocated page. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-14-maciej.fijalkowski@intel.com
2023-02-01  ice: Remove next_{dd,rs} fields from ice_tx_ring  (Maciej Fijalkowski)
Now that both ZC and standard XDP data paths stopped using Tx logic based on next_dd and next_rs fields, we can safely remove these fields and shrink Tx ring structure. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-13-maciej.fijalkowski@intel.com
2023-02-01  ice: Add support for XDP multi-buffer on Tx side  (Maciej Fijalkowski)
Similarly to the Rx side in the previous patch, the XDP Tx logic in the ice driver needs to be adjusted for multi-buffer support, specifically the way HW Tx descriptors are produced and cleaned. Currently, XDP_TX works on strict ring boundaries, meaning it sets the RS bit (on the producer side) / looks up the DD bit (on the consumer/cleaning side) every quarter of the ring. It means that if, for example, a multi-buffer frame spans the ring-quarter boundary (say the frame consists of 4 fragments and we start at descriptor 62 on a ring sized to 256 entries), the RS bit would be produced in the middle of the multi-buffer frame, which would be broken behavior, as it needs to be set on the last descriptor of the frame. To make it work, set the RS bit on the last descriptor from the batch of frames that the XDP_TX action was used on, and make the first entry remember the index of the last descriptor with the RS bit set. This way, the cleaning side can take the index of the descriptor with the RS bit, look up the DD bit's presence and clean from the first entry to the last. In order to clean up the code base, introduce a common ice_set_rs_bit() which returns the index of the descriptor that the RS bit was produced on, so that the standard driver can store it within the proper ice_tx_buf and the ZC driver can simply ignore the return value. Co-developed-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> Signed-off-by: Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-12-maciej.fijalkowski@intel.com
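A sketch of the helper described above (descriptor macros and field names assumed from the existing ice driver; not the verbatim patch):

  static u16 ice_set_rs_bit(const struct ice_tx_ring *xdp_ring)
  {
          u16 rs_idx = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 :
                                               xdp_ring->count - 1;
          struct ice_tx_desc *tx_desc = ICE_TX_DESC(xdp_ring, rs_idx);

          /* RS goes on the last descriptor produced in this batch; the
           * caller remembers rs_idx so the cleaning side knows where to
           * look for the DD bit. */
          tx_desc->cmd_type_offset_bsz |=
                  cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);

          return rs_idx;
  }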
2023-02-01  ice: Add support for XDP multi-buffer on Rx side  (Maciej Fijalkowski)
The ice driver needs to be a bit reworked on the Rx data path in order to support multi-buffer XDP. For the skb path, it currently works in a way that the Rx ring carries a pointer to the skb, so if the driver didn't manage to combine a fragmented frame in the current NAPI instance, it can restore the state on the next instance and keep looking for the last fragment (so a descriptor with the EOP bit set). What needs to be achieved is that the xdp_buff is combined in such a way (linear + frags part) in the first place. Then the skb will be ready to go in case of XDP_PASS or a BPF program not being present on the interface. If a BPF program is there, it will work on the multi-buffer XDP.

At this point the xdp_buff resides directly on the Rx ring, so given the fact that the skb will be built straight from the xdp_buff, there is no further need to carry the skb on the Rx ring. Besides removing the skb pointer from the Rx ring, lots of members have been moved around within ice_rx_ring. The first and foremost reason was to place rx_buf and xdp_buff on the same cacheline. This means that once we touch rx_buf (which is a preceding step before touching xdp_buff), xdp_buff will already be hot in cache. The second thing was that xdp_rxq is used rather rarely and it occupies a separate cacheline, so maybe it is better to have it at the end of ice_rx_ring.

Another change that affects ice_rx_ring is the introduction of ice_rx_ring::first_desc. Its purpose is twofold. The first is to propagate rx_buf->act to all the parts of the current xdp_buff after running the XDP program, so that ice_put_rx_buf(), which got moved out of the main Rx processing loop, will be able to take an appropriate action on each buffer. The second is for ice_construct_skb().

ice_construct_skb() has a copybreak mechanism which had an explicit impact on the xdp_buff->skb conversion in the new approach when the legacy Rx flag is toggled. It works in a way that the linear part is 256 bytes long; if the frame is bigger than that, the remaining bytes go as a frag to skb_shared_info. This means that while memcpying frags from the xdp_buff to the newly allocated skb, care needs to be taken when picking the destination frag array entry. At the time ice_construct_skb() is called, when dealing with a fragmented frame, the current rx_buf points to the *last* fragment, but copybreak needs to be done against the first one. That's where ice_rx_ring::first_desc helps.

When frame building spans across NAPI polls (the DD bit is not set on the current descriptor and xdp->data is not NULL), the current Rx buffer handling state might cause problems. Since calls to ice_put_rx_buf() were pulled out of the main Rx processing loop and were scoped from cached_ntc to the current ntc, remember that the mentioned function now relies on rx_buf->act, which is set within ice_run_xdp(). ice_run_xdp() is called when the EOP bit is found, so currently we could put an Rx buffer with rx_buf->act being *uninitialized*. To address this, change the scoping to rely on first_desc on both boundaries instead.

This also implies that cleaned_count, which is used as an input to ice_alloc_rx_buffers() and tells how many new buffers should be refilled, has to be adjusted. If it stayed as is, what could happen is a case where ntc would go over ntu. Therefore, remove cleaned_count altogether and, for the allocation routine, use the newly introduced ICE_RX_DESC_UNUSED() macro, which is an equivalent of ICE_DESC_UNUSED() dedicated to the Rx side and based on struct ice_rx_ring::first_desc instead of next_to_clean.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-11-maciej.fijalkowski@intel.com
2023-02-01  ice: Use xdp->frame_sz instead of recalculating truesize  (Maciej Fijalkowski)
The SKB path calculates truesize in three different functions, which could be avoided as xdp_buff carries the already calculated truesize under xdp_buff::frame_sz. If ice_add_rx_frag() is adjusted to take the xdp_buff as an input, just like the functions responsible for creating the sk_buff initially, the codebase can be simplified by removing these redundant recalculations and relying on xdp_buff::frame_sz instead. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-10-maciej.fijalkowski@intel.com
2023-02-01  ice: Do not call ice_finalize_xdp_rx() unnecessarily  (Maciej Fijalkowski)
Currently ice_finalize_xdp_rx() is called only when xdp_prog is present on the VSI, which is a good thing. However, this optimization can be enhanced to check whether any XDP_TX/XDP_REDIRECT actions took place in the current Rx processing. A non-zero value of @xdp_xmit implies that xdp_prog is present on the VSI. This way XDP_DROP-based workloads will not suffer from unnecessary calls to the external function. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-9-maciej.fijalkowski@intel.com
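In the Rx clean routine this becomes a cheaper guard, along these lines (argument list abbreviated for the sketch):

  /* Only pay for the XDP Tx tail bump / redirect flush when an
   * XDP_TX or XDP_REDIRECT verdict actually happened in this poll. */
  if (xdp_xmit)
          ice_finalize_xdp_rx(xdp_ring, xdp_xmit);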
2023-02-01  ice: Use ice_max_xdp_frame_size() in ice_xdp_setup_prog()  (Maciej Fijalkowski)
This should have been used there from day 1; let us address that before introducing XDP multi-buffer support on the Rx side. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-8-maciej.fijalkowski@intel.com
2023-02-01  ice: Centralize Rx buffer recycling  (Maciej Fijalkowski)
Currently, calls to ice_put_rx_buf() are sprinkled throughout ice_clean_rx_irq() - the first place is for explicit flow director descriptor handling, the second is after running the XDP prog and the last one is after taking care of the skb. The first call site was actually only there to bump ntc, as the Rx buffer to be recycled is not even passed to the function. It is possible to walk through the Rx buffers processed in a particular NAPI cycle by caching ntc from the beginning of ice_clean_rx_irq(). To do so, let us store the XDP verdict inside ice_rx_buf, so the action we need to take will be known. When no XDP prog is present, just store ICE_XDP_PASS as the verdict. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-7-maciej.fijalkowski@intel.com
2023-02-01  ice: Inline eop check  (Maciej Fijalkowski)
This might in the future be used by the ZC driver and might potentially yield a minor performance boost. While at it, constify the arguments that ice_is_non_eop() takes; since they are pointers, this will help the compiler while generating asm. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-6-maciej.fijalkowski@intel.com
2023-02-01  ice: Pull out next_to_clean bump out of ice_put_rx_buf()  (Maciej Fijalkowski)
The plan is to move ice_put_rx_buf() to the end of ice_clean_rx_irq(), so in order to keep the ability to walk through HW Rx descriptors, pull the next_to_clean handling out of ice_put_rx_buf(). Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-5-maciej.fijalkowski@intel.com
2023-02-01  ice: Store page count inside ice_rx_buf  (Maciej Fijalkowski)
This will allow us to avoid carrying an additional auxiliary array of page counts when dealing with XDP multi-buffer support. Previously, combining a fragmented frame into an skb was not affected in the same way as XDP would be, as the whole frame needs to be in place before executing the XDP prog. Therefore, when going through HW Rx descriptors one-by-one, calls to ice_put_rx_buf() need to be taken *after* running the XDP prog on a potentially multi-buffered frame, so some additional storage of the page count is needed. By adding the page count to the rx buf, it will be easier to walk through the processed entries at the end of the Rx cleaning routine and decide whether or not buffers should be recycled. While at it, bump ice_rx_buf::pagecnt_bias from u16 up to u32. It has been proven many times that calculations on variables smaller than standard register size are harmful. This was also the case during experiments with embedding the page count into ice_rx_buf - when this was added as a u16 it had a performance impact. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/bpf/20230131204506.219292-4-maciej.fijalkowski@intel.com