2023-08-14 drm/panel: JDI LT070ME05000 simplify with dev_err_probe() (David Heidelberg)
Use the dev_err_probe() helper to simplify error handling during probe. This also handles the scenario when EPROBE_DEFER is returned, so that no useless error is printed. Fixes error: panel-jdi-lt070me05000 4700000.dsi.0: cannot get enable-gpio -517 Signed-off-by: David Heidelberg <david@ixit.cz> Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org> Signed-off-by: Neil Armstrong <neil.armstrong@linaro.org> Link: https://patchwork.freedesktop.org/patch/msgid/20230812185239.378582-1-david@ixit.cz
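For reference, a minimal sketch of the probe pattern such a conversion typically produces; the function and GPIO names below are illustrative and not taken from the actual panel driver:

  #include <linux/device.h>
  #include <linux/gpio/consumer.h>

  /* Illustrative only: shows the dev_err_probe() idiom, not the real driver. */
  static int panel_get_enable_gpio(struct device *dev, struct gpio_desc **out)
  {
          struct gpio_desc *gpio;

          gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
          if (IS_ERR(gpio))
                  /*
                   * dev_err_probe() prints nothing for -EPROBE_DEFER (it only
                   * records the deferral reason) and returns the error code,
                   * replacing the usual dev_err() + return PTR_ERR() pair.
                   */
                  return dev_err_probe(dev, PTR_ERR(gpio),
                                       "cannot get enable-gpio\n");

          *out = gpio;
          return 0;
  }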
2023-08-14 spi: amd: fix Wvoid-pointer-to-enum-cast warning (Krzysztof Kozlowski)
'version' is an enum, thus casting the pointer in a 64-bit compile test with W=1 causes: spi-amd.c:401:21: error: cast to smaller integer type 'enum amd_spi_versions' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast] Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20230810091247.70149-3-krzysztof.kozlowski@linaro.org Signed-off-by: Mark Brown <broonie@kernel.org>
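The same class of warning is fixed in the pxa2xx and sc18is602 patches below. The usual remedy (a sketch, not necessarily the exact diff applied here) is to cast the match data through a pointer-sized integer type before narrowing it to the enum:

  #include <linux/property.h>
  #include <linux/types.h>

  enum amd_spi_versions { AMD_SPI_V1, AMD_SPI_V2 };      /* values illustrative */

  static enum amd_spi_versions amd_spi_get_version(struct device *dev)
  {
          /*
           * A direct (enum amd_spi_versions)device_get_match_data(dev) narrows
           * a 64-bit pointer straight into a smaller enum and triggers
           * -Wvoid-pointer-to-enum-cast.  Casting through uintptr_t (or
           * kernel_ulong_t) keeps the conversion explicit and silences W=1.
           */
          return (enum amd_spi_versions)(uintptr_t)device_get_match_data(dev);
  }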
2023-08-14 spi: pxa2xx: fix Wvoid-pointer-to-enum-cast warning (Krzysztof Kozlowski)
'type' is an enum, thus casting the pointer in a 64-bit compile test with W=1 causes: spi-pxa2xx.c:1347:10: error: cast to smaller integer type 'enum pxa_ssp_type' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast] Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20230810091247.70149-2-krzysztof.kozlowski@linaro.org Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: sc18is602: fix Wvoid-pointer-to-enum-cast warning (Krzysztof Kozlowski)
'id' is an enum, thus casting the pointer in a 64-bit compile test with W=1 causes: spi-sc18is602.c:269:12: error: cast to smaller integer type 'enum chips' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast] Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20230810091247.70149-1-krzysztof.kozlowski@linaro.org Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: lantiq: switch to use modern name (Yang Yingliang)
Change the legacy name master/slave to the modern name host/target or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-21-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
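The same mechanical rename appears throughout the following patches in this series. A sketch of what such a probe conversion typically looks like (the driver name below is made up; the spi core helpers are the real ones):

  #include <linux/module.h>
  #include <linux/platform_device.h>
  #include <linux/spi/spi.h>

  struct demo_spi { void __iomem *base; };        /* illustrative driver state */

  static int demo_spi_probe(struct platform_device *pdev)
  {
          struct spi_controller *host;
          struct demo_spi *priv;

          /* was: devm_spi_alloc_master(&pdev->dev, sizeof(*priv)) */
          host = devm_spi_alloc_host(&pdev->dev, sizeof(*priv));
          if (!host)
                  return -ENOMEM;

          priv = spi_controller_get_devdata(host); /* was: spi_master_get_devdata() */
          priv->base = NULL;                       /* ... hardware setup using priv ... */

          /* was: devm_spi_register_master(&pdev->dev, master) */
          return devm_spi_register_controller(&pdev->dev, host);
  }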
2023-08-14 spi: jcore: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-20-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: intel: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-19-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: ingenic: switch to use devm_spi_alloc_host() (Yang Yingliang)
Switch to the modern-named function devm_spi_alloc_host(). No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-18-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: imx: switch to use modern name (Yang Yingliang)
Change the legacy name master/slave to the modern name host/target. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-17-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: img-spfi: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-16-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: hisi-sfc-v3xx: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-15-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: bcmbca-hsspi: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-14-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: gxp: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-13-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: gpio: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-12-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: fsl-spi: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-11-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: fsl-qspi: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-10-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: fsl-lpspi: switch to use modern name (Yang Yingliang)
Change the legacy name master/slave to the modern name host/target. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-9-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: fsl-espi: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-8-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: fsl-dspi: switch to use modern name (Yang Yingliang)
Change the legacy name master/slave to the modern name host/target or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-7-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: fsi: switch to use spi_alloc_host() (Yang Yingliang)
Switch to the modern-named function spi_alloc_host(). No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-6-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: falcon: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-5-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: ep93xx: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-4-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: au1550: switch to use modern name (Yang Yingliang)
Change the legacy name master to the modern name host or controller. No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-3-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 spi: amlogic-spifc-a1: switch to use devm_spi_alloc_host() (Yang Yingliang)
Switch to the modern-named function devm_spi_alloc_host(). No functional change. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20230807124105.3429709-2-yangyingliang@huawei.com Signed-off-by: Mark Brown <broonie@kernel.org>
2023-08-14 ARM: dts: am335x-bone-common: Add vcc-supply for on-board eeprom (Shengyu Qu)
The on-board eeprom on the BeagleBone series is supplied from VDD_3V3A; add that supply to the dts to avoid the dummy regulator warning. Signed-off-by: Shengyu Qu <wiagn233@outlook.com> Message-ID: <TY3P286MB2611CDC84604B11570B4A8D2980FA@TY3P286MB2611.JPNP286.PROD.OUTLOOK.COM> Signed-off-by: Tony Lindgren <tony@atomide.com>
2023-08-14 ARM: dts: am335x-bone-common: Add GPIO PHY reset on revision C3 board (Shengyu Qu)
This patch adds ethernet PHY reset GPIO config for Beaglebone Black series boards with revision C3. This fixes a random phy startup failure bug discussed at [1]. The GPIO pin used for reset is not used on older revisions, so it is ok to apply to all board revisions. The reset timing was discussed and tested at [2]. [1] https://forum.digikey.com/t/ethernet-device-is-not-detecting-on-ubuntu-20-04-lts-on-bbg/19948 [2] https://forum.beagleboard.org/t/recognizing-a-beaglebone-black-rev-c3-board/31249/ Signed-off-by: Robert Nelson <robertcnelson@gmail.com> Signed-off-by: Shengyu Qu <wiagn233@outlook.com> Message-ID: <TY3P286MB26113797A3B2EC7E0348BBB2980FA@TY3P286MB2611.JPNP286.PROD.OUTLOOK.COM> Signed-off-by: Tony Lindgren <tony@atomide.com>
2023-08-14 selftests: mirror_gre_changes: Tighten up the TTL test match (Petr Machata)
This test verifies whether the encapsulated packets have the correct configured TTL. It does so by sending ICMP packets through the test topology and mirroring them to a gretap netdevice. On a busy host however, more than just the test ICMP packets may end up flowing through the topology, getting mirrored and counted. This leads to potential spurious failures, as the test observes many more mirrored packets than the sent test packets and assumes a bug. Fix this by tightening up the mirror action match. Change it from matchall to a flower classifier matching on ICMP packets specifically. Fixes: 45315673e0c5 ("selftests: forwarding: Test changes in mirror-to-gretap") Signed-off-by: Petr Machata <petrm@nvidia.com> Tested-by: Mirsad Todorovac <mirsad.todorovac@alu.unizg.hr> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 x86/retpoline,kprobes: Skip optprobe check for indirect jumps with retpolines and IBT (Petr Pavlu)
The kprobes optimization check can_optimize() calls insn_is_indirect_jump() to detect indirect jump instructions in a target function. If any is found, creating an optprobe is disallowed in the function because the jump could be from a jump table and could potentially land in the middle of the target optprobe. With retpolines, insn_is_indirect_jump() additionally looks for calls to indirect thunks which the compiler potentially used to replace original jumps. This extra check is however unnecessary because jump tables are disabled when the kernel is built with retpolines. The same is currently the case with IBT. Based on this observation, remove the logic to look for calls to indirect thunks and skip the check for indirect jumps altogether if the kernel is built with retpolines or IBT. Subsequently, remove the symbols __indirect_thunk_start and __indirect_thunk_end which are no longer needed. Dropping this logic indirectly fixes a problem where the range [__indirect_thunk_start, __indirect_thunk_end] wrongly also included the return thunk. As a result, machines which used the return thunk as a mitigation and didn't have it patched by any alternative ended up not being able to use optprobes in any regular function. Fixes: 0b53c374b9ef ("x86/retpoline: Use -mfunction-return") Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20230711091952.27944-3-petr.pavlu@suse.com
2023-08-14 x86/retpoline,kprobes: Fix position of thunk sections with CONFIG_LTO_CLANG (Petr Pavlu)
The linker script arch/x86/kernel/vmlinux.lds.S matches the thunk sections ".text.__x86.*" from arch/x86/lib/retpoline.S as follows:

  .text {
    [...]
    TEXT_TEXT
    [...]
    __indirect_thunk_start = .;
    *(.text.__x86.*)
    __indirect_thunk_end = .;
    [...]
  }

Macro TEXT_TEXT references TEXT_MAIN which normally expands to only ".text". However, with CONFIG_LTO_CLANG, TEXT_MAIN becomes ".text .text.[0-9a-zA-Z_]*" which wrongly matches also the thunk sections. The output layout is then different than expected. For instance, the currently defined range [__indirect_thunk_start, __indirect_thunk_end] becomes empty. Prevent the problem by using ".." as the first separator, for example, ".text..__x86.indirect_thunk". This pattern is utilized by other explicit section names which start with one of the standard prefixes, such as ".text" or ".data", and that need to be individually selected in the linker script. [ nathan: Fix conflicts with SRSO and fold in fix issue brought up by Andrew Cooper in post-review: https://lore.kernel.org/20230803230323.1478869-1-andrew.cooper3@citrix.com ] Fixes: dc5723b02e52 ("kbuild: add support for Clang LTO") Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230711091952.27944-2-petr.pavlu@suse.com
2023-08-14 x86/srso: Disable the mitigation on unaffected configurations (Borislav Petkov (AMD))
Skip the srso cmd line parsing, which is not needed on Zen1/2 with SMT disabled and with the proper microcode applied (the latter should be the case anyway), as those configurations are not affected. Fixes: 5a15d8348881 ("x86/srso: Tie SBPB bit setting to microcode patch detection") Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230813104517.3346-1-bp@alien8.de
2023-08-14 x86/CPU/AMD: Fix the DIV(0) initial fix attempt (Borislav Petkov (AMD))
Initially, it was thought that doing an innocuous division in the #DE handler would be enough to prevent any leaking of old data from the divider, but by the time the fault is raised, the speculation has already advanced too far and such data could already have been used by younger operations. Therefore, do the innocuous division on every exit to userspace so that userspace doesn't see any potentially old data from integer divisions in kernel space. Do the same before VMRUN, to protect host data from leaking into the guest. Fixes: 77245f1c3c64 ("x86/CPU/AMD: Do not leak quotient data after a division by 0") Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/20230811213824.10025-1-bp@alien8.de
2023-08-14 x86/retpoline: Don't clobber RFLAGS during srso_safe_ret() (Sean Christopherson)
Use LEA instead of ADD when adjusting %rsp in srso_safe_ret{,_alias}() so as to avoid clobbering flags. Drop one of the INT3 instructions to account for the LEA consuming one more byte than the ADD. KVM's emulator makes indirect calls into a jump table of sorts, where the destination of each call is a small blob of code that performs fast emulation by executing the target instruction with fixed operands. E.g. to emulate ADC, fastop() invokes adcb_al_dl():

  adcb_al_dl:
     <+0>: adc    %dl,%al
     <+2>: jmp    <__x86_return_thunk>

A major motivation for doing fast emulation is to leverage the CPU to handle consumption and manipulation of arithmetic flags, i.e. RFLAGS is both an input and output to the target of the call. fastop() collects the RFLAGS result by pushing RFLAGS onto the stack and popping them back into a variable (held in %rdi in this case):

  asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"

     <+71>: mov    0xc0(%r8),%rdx
     <+78>: mov    0x100(%r8),%rcx
     <+85>: push   %rdi
     <+86>: popf
     <+87>: call   *%rsi
     <+89>: nop
     <+90>: nop
     <+91>: nop
     <+92>: pushf
     <+93>: pop    %rdi

and then propagating the arithmetic flags into the vCPU's emulator state:

  ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);

     <+64>: and    $0xfffffffffffff72a,%r9
     <+94>: and    $0x8d5,%edi
    <+109>: or     %rdi,%r9
    <+122>: mov    %r9,0x10(%r8)

The failures can be most easily reproduced by running the "emulator" test in KVM-Unit-Tests. If you're feeling a bit of deja vu, see commit b63f20a778c8 ("x86/retpoline: Don't clobber RFLAGS during CALL_NOSPEC on i386"). In addition, this breaks booting of a clang-compiled guest on a gcc-compiled host where the host contains the %rsp-modifying SRSO mitigations. [ bp: Massage commit message, extend, remove addresses. ] Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") Closes: https://lore.kernel.org/all/de474347-122d-54cd-eabf-9dcc95ab9eae@amd.com Reported-by: Srikanth Aithal <sraithal@amd.com> Reported-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Nathan Chancellor <nathan@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/20230810013334.GA5354@dev-arch.thelio-3990X/ Link: https://lore.kernel.org/r/20230811155255.250835-1-seanjc@google.com
2023-08-14 Merge back system-wide sleep material for v6.6. (Rafael J. Wysocki)
2023-08-14 parisc: Fix CONFIG_TLB_PTLOCK to work with lightweight spinlock checks (Helge Deller)
For the TLB_PTLOCK checks we used an optimization to store the spc register into the spinlock to unlock it. This optimization works as long as the lightweight spinlock checks (CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK) aren't enabled, because they really check if the lock word is zero or __ARCH_SPIN_LOCK_UNLOCKED_VAL and abort with a kernel crash ("Spinlock was trashed") otherwise. Drop that optimization to make it possible to activate both checks at the same time. Noticed-by: Sam James <sam@gentoo.org> Signed-off-by: Helge Deller <deller@gmx.de> Tested-by: Sam James <sam@gentoo.org> Cc: stable@vger.kernel.org # v6.4+ Fixes: 15e64ef6520e ("parisc: Add lightweight spinlock checks")
2023-08-14 Revert "vlan: Fix VLAN 0 memory leak" (Vlad Buslov)
This reverts commit 718cb09aaa6fa78cc8124e9517efbc6c92665384. The commit triggers multiple syzbot issues, probably because VLAN 0 can be created manually on a netdevice, which causes the code to delete it, since it cannot distinguish such a VLAN from the implicit VLAN 0 automatically created for devices with the NETIF_F_HW_VLAN_CTAG_FILTER feature. Reported-by: syzbot+662f783a5cdf3add2719@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/00000000000090196d0602a6167d@google.com/ Reported-by: syzbot+4b4f06495414e92701d5@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/00000000000096ae870602a61602@google.com/ Reported-by: syzbot+d810d3cd45ed1848c3f7@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/0000000000009f0f9c0602a616ce@google.com/ Fixes: 718cb09aaa6f ("vlan: Fix VLAN 0 memory leak") Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 net: phy: Introduce PSGMII PHY interface mode (Gabor Juhos)
The PSGMII interface is similar to QSGMII. The main difference is that the PSGMII interface combines five SGMII lines into a single link, while in QSGMII only four lines are combined. Similarly to QSGMII, this interface mode might also need special handling within the MAC driver. It is commonly used by Qualcomm with their QCA807x PHY series and modern WiSoCs. Add definitions for the PHY layer to allow expressing this type of connection between the MAC and PHY. Signed-off-by: Gabor Juhos <j4g8y7@gmail.com> Signed-off-by: Robert Marko <robert.marko@sartura.hr> Signed-off-by: David S. Miller <davem@davemloft.net>
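As a rough, standalone illustration of what such a definition adds at the PHY layer (a trimmed excerpt-style sketch assuming the existing phy_interface_t / phy_modes() structure of include/linux/phy.h; the exact placement in the patch may differ):

  /* Standalone illustration; the real definitions live in include/linux/phy.h. */
  typedef enum {
          PHY_INTERFACE_MODE_SGMII,
          PHY_INTERFACE_MODE_QSGMII,
          PHY_INTERFACE_MODE_PSGMII,      /* new: five SGMII lines on one link */
  } phy_interface_t;

  static const char *phy_modes(phy_interface_t interface)
  {
          switch (interface) {
          case PHY_INTERFACE_MODE_SGMII:
                  return "sgmii";
          case PHY_INTERFACE_MODE_QSGMII:
                  return "qsgmii";
          case PHY_INTERFACE_MODE_PSGMII:
                  return "psgmii";        /* string matched against the DT "phy-mode" property */
          default:
                  return "unknown";
          }
  }

Device trees can then describe such a MAC-PHY link with phy-mode = "psgmii", which the dt-bindings patch below documents.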
2023-08-14 dt-bindings: net: ethernet-controller: add PSGMII mode (Robert Marko)
Add a new PSGMII mode which is similar to QSGMII with the difference being that it combines 5 SGMII lines into a single link compared to 4 on QSGMII. It is commonly used by Qualcomm on their QCA807x PHY series. Signed-off-by: Robert Marko <robert.marko@sartura.hr> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 Merge branch 'mlxsw-redirection' (David S. Miller)
Petr Machata says: ==================== mlxsw: Support traffic redirection from a locked bridge port
Ido Schimmel writes:
It is possible to add a filter that redirects traffic from the ingress of a bridge port that is locked (i.e., performs security / SMAC lookup) and has learning enabled. For example:

  # ip link add name br0 type bridge
  # ip link set dev swp1 master br0
  # bridge link set dev swp1 learning on locked on mab on
  # tc qdisc add dev swp1 clsact
  # tc filter add dev swp1 ingress pref 1 proto ip flower skip_sw src_ip 192.0.2.1 \
        action mirred egress redirect dev swp2

In the kernel's Rx path, this filter is evaluated before the Rx handler of the bridge, which means that redirected traffic should not be affected by bridge port configuration such as learning. However, the hardware data path is a bit different and the redirect action (FORWARDING_ACTION in hardware) merely attaches a pointer to the packet, which is later used by the L2 lookup stage to understand how to forward the packet. Between both stages - ingress ACL and L2 lookup - learning and security lookup are performed, which means that redirected traffic is affected by bridge port configuration, unlike in the kernel's data path.
The learning discrepancy was handled in commit 577fa14d2100 ("mlxsw: spectrum: Do not process learned records with a dummy FID") by simply ignoring learning notifications generated by the redirected traffic. A similar solution is not possible for the security / SMAC lookup since - unlike learning - the CPU is not involved and packets that failed the lookup are dropped by the device. Instead, solve this by prepending the ignore action to the redirect action and use it to instruct the device to disable both learning and the security / SMAC lookup for redirected traffic.
Patch #1 adds the ignore action.
Patch #2 prepends the action to the redirect action in flower offload code.
Patch #3 removes the workaround in commit 577fa14d2100 ("mlxsw: spectrum: Do not process learned records with a dummy FID") since it is no longer needed.
Patch #4 adds a test case.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 selftests: forwarding: Add test case for traffic redirection from a locked port (Ido Schimmel)
Check that traffic can be redirected from a locked bridge port and that it does not create locked FDB entries. Cc: Hans J. Schultz <netdev@kapio-technology.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 mlxsw: spectrum: Stop ignoring learning notifications from redirected traffic (Ido Schimmel)
As explained in the previous patch, with the ignore action prepended to the redirect action, it is no longer possible for redirected traffic to generate learning notifications. Therefore, remove the workaround that was added in commit 577fa14d2100 ("mlxsw: spectrum: Do not process learned records with a dummy FID") as it is no longer needed. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 mlxsw: spectrum_flower: Disable learning and security lookup when redirecting (Ido Schimmel)
It is possible to add a filter that redirects traffic from the ingress of a bridge port that is locked (i.e., performs security / SMAC lookup) and has learning enabled. For example:

  # ip link add name br0 type bridge
  # ip link set dev swp1 master br0
  # bridge link set dev swp1 learning on locked on mab on
  # tc qdisc add dev swp1 clsact
  # tc filter add dev swp1 ingress pref 1 proto ip flower skip_sw src_ip 192.0.2.1 \
        action mirred egress redirect dev swp2

In the kernel's Rx path, this filter is evaluated before the Rx handler of the bridge, which means that redirected traffic should not be affected by bridge port configuration such as learning. However, the hardware data path is a bit different and the redirect action (FORWARDING_ACTION in hardware) merely attaches a pointer to the packet, which is later used by the L2 lookup stage to understand how to forward the packet. Between both stages - ingress ACL and L2 lookup - learning and security lookup are performed, which means that redirected traffic is affected by bridge port configuration, unlike in the kernel's data path.
The learning discrepancy was handled in commit 577fa14d2100 ("mlxsw: spectrum: Do not process learned records with a dummy FID") by simply ignoring learning notifications generated by the redirected traffic. A similar solution is not possible for the security / SMAC lookup since - unlike learning - the CPU is not involved and packets that failed the lookup are dropped by the device. Instead, solve this by prepending the ignore action to the redirect action and use it to instruct the device to disable both learning and the security / SMAC lookup for redirected traffic. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 mlxsw: core_acl_flex_actions: Add IGNORE_ACTION (Ido Schimmel)
Add the IGNORE_ACTION which is used to ignore basic switching functions such as learning on a per-packet basis. The action will be prepended to the FORWARDING_ACTION in subsequent patches. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 net: stmmac: xgmac: show more MAC HW features in debugfs (Furong Xu)
1. Show TSSTSSEL (Timestamp System Time Source), ADDMACADRSEL (additional MAC addresses), SMASEL (SMA/MDIO Interface), HDSEL (Half-duplex Support) in debugfs.
2. Show exact number of additional MAC address registers for XGMAC2 core.
3. XGMAC2 core does not have different IP checksum offload types, so just show rx_coe instead of rx_coe_type1 or rx_coe_type2.
4. XGMAC2 core does not have rxfifo_over_2048 definition, skip it.
Signed-off-by: Furong Xu <0x1207@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 Merge branch 'net-stats-helpers' (David S. Miller)
Li Zetao says: ==================== Use helper functions to update stats The patch set uses the helper functions dev_sw_netstats_rx_add() and dev_sw_netstats_tx_add() to update stats, which behave the same as the open-coded implementations they replace. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
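A sketch of the conversion pattern the series applies, contrasting the open-coded per-CPU update with the helpers (assuming a driver that already uses the common dev->tstats per-CPU counters):

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>
  #include <linux/u64_stats_sync.h>

  /* Before: roughly what each driver open-coded on its own. */
  static void rx_stats_open_coded(struct net_device *dev, unsigned int len)
  {
          struct pcpu_sw_netstats *tstats = this_cpu_ptr(dev->tstats);

          u64_stats_update_begin(&tstats->syncp);
          u64_stats_inc(&tstats->rx_packets);
          u64_stats_add(&tstats->rx_bytes, len);
          u64_stats_update_end(&tstats->syncp);
  }

  /* After: the common helpers do the same thing in one call. */
  static void rx_stats_with_helper(struct net_device *dev, struct sk_buff *skb)
  {
          dev_sw_netstats_rx_add(dev, skb->len);
  }

  static void tx_stats_with_helper(struct net_device *dev, struct sk_buff *skb)
  {
          dev_sw_netstats_tx_add(dev, 1, skb->len);       /* 1 packet, skb->len bytes */
  }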
2023-08-14 vxlan: Use helper functions to update stats (Li Zetao)
Use the helper functions dev_sw_netstats_rx_add() and dev_sw_netstats_tx_add() to update stats, which improves code readability. Signed-off-by: Li Zetao <lizetao1@huawei.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 net: macsec: Use helper functions to update stats (Li Zetao)
Use the helper functions dev_sw_netstats_rx_add() and dev_sw_netstats_tx_add() to update stats, which improves code readability. Signed-off-by: Li Zetao <lizetao1@huawei.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 vmxnet3: Add XDP support. (William Tu)
The patch adds native-mode XDP support: XDP_DROP, XDP_PASS, XDP_TX, and XDP_REDIRECT.

Background: the vmxnet3 rx path consists of three rings: ring0, ring1, and the dataring. For r0 and r1, buffers in r0 are allocated using the alloc_skb APIs and DMA mapped to the ring's descriptors. If LRO is enabled and the packet size is larger than 3K (VMXNET3_MAX_SKB_BUF_SIZE), then r1 is used to map the rest of the buffer beyond VMXNET3_MAX_SKB_BUF_SIZE. Each buffer in r1 is allocated using alloc_page. So for LRO packets, the payload will be in one buffer from r0 and multiple buffers from r1; for non-LRO packets smaller than 3K, only one descriptor in r0 is used. When receiving a packet, the first descriptor has the sop (start of packet) bit set and the last descriptor has the eop (end of packet) bit set; non-LRO packets have only one descriptor with both sop and eop set. Besides r0 and r1, the vmxnet3 dataring is specifically designed for handling packets of small size, usually 128 bytes (VMXNET3_DEF_RXDATA_DESC_SIZE), by simply copying the packet from the backend driver in ESXi to the ring's memory region in the front-end vmxnet3 driver, in order to avoid memory mapping/unmapping overhead. In summary, by packet size:
  A. < 128B: use dataring
  B. 128B - 3K: use ring0 (VMXNET3_RX_BUF_SKB)
  C. > 3K: use ring0 and ring1 (VMXNET3_RX_BUF_SKB + VMXNET3_RX_BUF_PAGE)
As a result, the patch adds XDP support for packets using the dataring and r0 (cases A and B), but not for large packets when LRO is enabled.

XDP implementation: when the user loads an XDP prog, the vmxnet3 driver checks the configuration, such as MTU and LRO, and re-allocates the rx buffers to reserve the extra headroom, XDP_PACKET_HEADROOM, for the XDP frame. The XDP prog is then associated with every rx queue of the device. Note that when the dataring is used for small packets, vmxnet3 (the front-end driver) doesn't control the buffer allocation; as a result we allocate a new page and copy the packet from the dataring into the XDP frame. The receive side of XDP is implemented for cases A and B by invoking the bpf program in vmxnet3_rq_rx_complete and handling its returned action. The vmxnet3_process_xdp() and vmxnet3_process_xdp_small() functions handle the ring0 and dataring cases separately and decide the further journey of the packet. For TX, vmxnet3 has a split-header design: outgoing packets are parsed first and the protocol headers (L2/L3/L4) are copied to the backend, while the rest of the payload is DMA mapped. Since XDP_TX does not parse the packet protocol, the entire XDP frame is DMA mapped for transmission and transmitted in a batch. Later on, the frame is freed and recycled back to the memory pool.

Performance: tested using two VMs inside one ESXi vSphere 7.0 machine, using a single core on each vmxnet3 device, with the sender using DPDK testpmd in tx-only mode attached to a vmxnet3 device, sending 64B or 512B UDP packets.
VM1 txgen:
  $ dpdk-testpmd -l 0-3 -n 1 -- -i --nb-cores=3 \
        --forward-mode=txonly --eth-peer=0,<mac addr of vm2>
  option: add "--txonly-multi-flow"
  option: use --txpkts=512 or 64 byte
VM2 running XDP:
  $ ./samples/bpf/xdp_rxq_info -d ens160 -a <options> --skb-mode
  $ ./samples/bpf/xdp_rxq_info -d ens160 -a <options>
  options: XDP_DROP, XDP_PASS, XDP_TX
  To test REDIRECT to cpu 0, use:
  $ ./samples/bpf/xdp_redirect_cpu -d ens160 -c 0 -e drop
Single core performance comparison with skb-mode:
64B: skb-mode -> native-mode
  XDP_DROP:      1.6Mpps -> 2.4Mpps
  XDP_PASS:      338Kpps -> 367Kpps
  XDP_TX:        1.1Mpps -> 2.3Mpps
  REDIRECT-drop: 1.3Mpps -> 2.3Mpps
512B: skb-mode -> native-mode
  XDP_DROP:      863Kpps -> 1.3Mpps
  XDP_PASS:      275Kpps -> 376Kpps
  XDP_TX:        554Kpps -> 1.2Mpps
  REDIRECT-drop: 659Kpps -> 1.2Mpps
Demo: https://youtu.be/4lm1CSCi78Q
Future work:
  - XDP frag support
  - use napi_consume_skb() instead of dev_kfree_skb_any at unmap
  - stats using u64_stats_t
  - using bitfield macro BIT()
  - optimization for DMA synchronization using actual frame length, instead of always max_len
Signed-off-by: William Tu <u9012063@gmail.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
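For context, a generic sketch of how a driver's receive path typically dispatches on the XDP verdict; this is not vmxnet3's exact code, the helpers are the common kernel ones, and buffer recycling is left driver-specific:

  #include <linux/filter.h>
  #include <linux/netdevice.h>
  #include <net/xdp.h>
  #include <trace/events/xdp.h>

  /* Returns true if the frame should continue to the normal skb path. */
  static bool rx_run_xdp_sketch(struct net_device *dev, struct bpf_prog *prog,
                                struct xdp_buff *xdp)
  {
          u32 act = bpf_prog_run_xdp(prog, xdp);

          switch (act) {
          case XDP_PASS:
                  return true;                    /* build an skb as usual */
          case XDP_TX:
                  /* driver-specific: DMA map the frame and queue it for transmit */
                  return false;
          case XDP_REDIRECT:
                  if (xdp_do_redirect(dev, xdp, prog))
                          break;                  /* redirect failed: drop */
                  return false;
          default:
                  bpf_warn_invalid_xdp_action(dev, prog, act);
                  fallthrough;
          case XDP_ABORTED:
                  trace_xdp_exception(dev, prog, act);
                  fallthrough;
          case XDP_DROP:
                  break;
          }
          /* driver-specific: recycle or free the receive buffer */
          return false;
  }

A driver following this pattern also calls xdp_do_flush() once at the end of its NAPI poll so that redirected frames are actually flushed out.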
2023-08-14 Merge branch 'ovs-drop-reasons' (David S. Miller)
Adrian Moreno says: ==================== openvswitch: add drop reasons
There is currently a gap in drop visibility in the openvswitch module. This series tries to improve this by adding a new drop reason subsystem for OVS. Apart from adding a new drop reason subsystem and some common drop reasons, this series takes Eric's preliminary work [1] on adding an explicit drop action and integrates it into the same subsystem.
A limitation of this series is that it does not report upcall errors. The reason is that there could be many sources of upcall drops and the most common one, which is the netlink buffer overflow, cannot be reported via kfree_skb() because the skb is freed in the netlink layer (see [2]). Therefore, using a reason for the rare events and not the common one would be even more misleading. I'd propose we add (in a follow-up patch) a tracepoint to better report upcall errors.
[1] https://lore.kernel.org/netdev/202306300609.tdRdZscy-lkp@intel.com/T/
[2] commit 1100248a5c5c ("openvswitch: Fix double reporting of drops in dropwatch")
---
v4 -> v5:
  - Rebased
  - Added a helper function to explicitly convert drop reason enum types
v3 -> v4:
  - Changed names of errors following Ilya's suggestions
  - Moved the ovs-dpctl.py changes from patch 7/7 to 3/7
  - Added a test to ensure actions following a drop are rejected
rfc2 -> v3:
  - Rebased on top of latest net-next
rfc1 -> rfc2:
  - Fail when an explicit drop is not the last
  - Added a drop reason for action errors
  - Added braces around macros
  - Dropped patch that added support for masks in ovs-dpctl.py as it's now included in Aaron's series [2].
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 selftests: openvswitch: add explicit drop testcase (Adrian Moreno)
Test that explicit drops generate the right drop reason. Also, verify that the kernel rejects flows with actions following an explicit drop. Acked-by: Aaron Conole <aconole@redhat.com> Signed-off-by: Adrian Moreno <amorenoz@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-08-14 selftests: openvswitch: add drop reason testcase (Adrian Moreno)
Test if the correct drop reason is reported when OVS drops a packet due to an explicit flow. Acked-by: Aaron Conole <aconole@redhat.com> Signed-off-by: Adrian Moreno <amorenoz@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>