|
There is no point in having seven architectures implementing the same empty
stub.
Provide a weak function in the init code and remove the stubs.
This also allows the function to be used on UP, which is required to
sanitize the per CPU handling on x86 UP.
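A hedged sketch of the weak-default pattern described above (the hook name is hypothetical, not the function from this series): a single __weak empty default in common init code replaces the per-architecture stubs, and an architecture that needs real work overrides it with a non-weak definition.
  /* Common init code: weak default shared by all architectures. */
  void __init __weak arch_example_hook(void)
  {
          /* default: nothing to do */
  }

  /* arch/<foo>/kernel/setup.c: an architecture that needs the hook
   * overrides the weak symbol with a strong definition of the same
   * signature. */
  void __init arch_example_hook(void)
  {
          /* architecture-specific work */
  }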
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.567671691@linutronix.de
|
|
Sparse rightfully complains about using a plain pointer for per CPU
accessors:
msr-smp.c:15:23: sparse: warning: incorrect type in initializer (different address spaces)
msr-smp.c:15:23: sparse: expected void const [noderef] __percpu *__vpp_verify
msr-smp.c:15:23: sparse: got struct msr *
Add __percpu annotations to the related data structure and function
arguments to cure this. This also cures the related sparse warnings at the
callsites in drivers/edac/amd64_edac.c.
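A minimal sketch of the kind of annotation being added (the prototype below is illustrative and may not match the exact signatures touched here): the __percpu qualifier on the pointer tells sparse that the argument lives in the per CPU address space, so the per CPU accessors applied to it type-check.
  struct msr;

  /* Before (sparse warns when per CPU accessors touch 'msrs'):
   *   void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
   *                      struct msr *msrs);
   * After (the pointer is declared to live in the per CPU address space):
   */
  void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
                     struct msr __percpu *msrs);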
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.513181735@linutronix.de
|
|
To clean up the per CPU insanity of UP, which causes sparse to be rightfully
unhappy and prevents the usage of the generic per CPU accessors on cpu_info,
it is necessary to include <linux/percpu.h> in <asm/msr.h>.
Including <linux/percpu.h> in <asm/msr.h> is impossible because it ends
up in header dependency hell. The problem is that <asm/processor.h>
includes <asm/msr.h>. The inclusion of <linux/percpu.h> results in a
compile failure where the compiler can no longer handle an include in
<asm/cpufeature.h> which references boot_cpu_data, which is
defined in <asm/processor.h>.
The only reason why <asm/msr.h> is included in <asm/processor.h> is the
set/get_debugctlmsr() inlines. They are defined there because <asm/processor.h>
is such a nice dumping ground for everything. In fact, they obviously belong
in <asm/debugreg.h>.
Move them to <asm/debugreg.h> and fix up the resulting damage which is just
exposing the reliance on random include chains.
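As an illustration, a sketch of the kind of inlines being relocated (the names follow the changelog; the bodies are an approximation, not the verbatim kernel code): they are thin DEBUGCTL MSR wrappers with no business in <asm/processor.h>.
  /* <asm/debugreg.h> -- natural home for DEBUGCTL MSR accessors. */
  static inline unsigned long get_debugctlmsr(void)
  {
          unsigned long debugctlmsr = 0;

          rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
          return debugctlmsr;
  }

  static inline void set_debugctlmsr(unsigned long debugctlmsr)
  {
          wrmsrl(MSR_IA32_DEBUGCTLMSR, debugctlmsr);
  }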
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.454678686@linutronix.de
|
|
The __percpu annotation in struct amd_uncore is confusing Sparse:
uncore.c:649:10: sparse: warning: incorrect type in initializer (different address spaces)
uncore.c:649:10: sparse: expected void const [noderef] __percpu *__vpp_verify
uncore.c:649:10: sparse: got union amd_uncore_info *
The reason is that the __percpu annotation sits between the '*'
dereferencing operator and the member name.
Move it before the dereferencing operator to cure this.
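In declaration terms (member name as in the sparse output above):
  /* Confusing to sparse: the attribute ends up qualifying the member,
   * not the pointer. */
  union amd_uncore_info * __percpu info;

  /* Cured: the attribute precedes the dereferencing operator and
   * qualifies the pointer itself. */
  union amd_uncore_info __percpu *info;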
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.394845326@linutronix.de
|
|
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Matthieu Baerts says:
====================
mptcp: add TCP_NOTSENT_LOWAT sockopt support
Patch 3 does the magic of adding TCP_NOTSENT_LOWAT support; all the
other ones are minor cleanups spotted while working on the new
feature.
Note that this feature relies on the existing accounting for snd_nxt.
Such accounting is not 110% accurate as it tracks the most recent
sequence number queued to any subflow, and not the actual sequence
number sent on the wire. Paolo experimented a lot, trying to implement
the latter, and in the end it proved to be both "too complex" and "not
necessary".
The complexity arises from the need for an additional lock and a lot of
refactoring to introduce such protection without adding significant
overhead. Additionally, snd_nxt is currently used and exposed with the
current semantics by the internal packet scheduling. Introducing a
different tracking would still require us to keep the old one.
More interestingly, a more accurate tracking may not be strictly
necessary: as the MPTCP socket enqueues data to the subflows only up
to the available send window, any enqueued data is sent on the wire
instantly, without any blocking operation, short of a drop in the tx
path at the nft or TC layer.
====================
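As a usage reference, a minimal user-space sketch of setting the new option on an MPTCP socket (assuming a kernel with this series applied; error handling trimmed):
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <stdio.h>
  #include <sys/socket.h>

  #ifndef IPPROTO_MPTCP
  #define IPPROTO_MPTCP 262
  #endif

  int main(void)
  {
          int lowat = 128 * 1024; /* wake the writer when unsent data drops below 128 KiB */
          int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

          if (fd < 0 || setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                                   &lowat, sizeof(lowat)) < 0) {
                  perror("TCP_NOTSENT_LOWAT on MPTCP socket");
                  return 1;
          }
          return 0;
  }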
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
|
|
Most TCP-level socket options get an integer from user space, and
set the corresponding field under the msk-level socket lock.
Reduce the code duplication by moving such operations into the common code.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <martineau@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for such a socket option, storing the user-space-provided
value in a new msk field, and using that data to implement the
_mptcp_stream_memory_free() helper, similar to the TCP one.
To avoid adding more indirect calls in the fast path, open-code
a variant of sk_stream_memory_free() in mptcp_sendmsg() and add
direct calls to the mptcp stream memory free helper where possible.
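The underlying test is simple unsigned sequence arithmetic; a plain-C sketch of it (names follow the TCP helper, not necessarily the MPTCP code):
  #include <stdint.h>
  #include <stdio.h>

  /* Writable while the not-yet-sent bytes (sequence-space distance,
   * with u32 wraparound) stay below the configured limit. */
  static int stream_memory_free(uint32_t write_seq, uint32_t snd_nxt,
                                uint32_t notsent_lowat)
  {
          return (uint32_t)(write_seq - snd_nxt) < notsent_lowat;
  }

  int main(void)
  {
          printf("%d\n", stream_memory_free(1000, 900, 64 * 1024)); /* 1: 100 bytes unsent */
          printf("%d\n", stream_memory_free(5, 0xffffffe0u, 16));   /* 0: 37 bytes unsent after wrap */
          return 0;
  }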
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/464
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <martineau@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The mptcp_get_int_option() helper is needlessly open-coded in a
couple of places; replace the duplicated code with the helper
call.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <martineau@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
After commit 5cf92bbadc58 ("mptcp: re-enable sndbuf autotune"), the
MPTCP_NOSPACE bit is redundant: it is always set and cleared together with
SOCK_NOSPACE.
Let's drop the former and always rely on the latter, removing a bunch
of useless code.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Mat Martineau <martineau@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit abf6569d6482 ("pwm: imx-tpm: Make use of devm_pwmchip_alloc()
function") introduced an issue where registers are accessed without the
clock enabled, which results in the following boot crash on the MX7ULP
platform. Fix it by enabling the clock properly.
Unhandled fault: external abort on non-linefetch (0x1008) at 0xf0978004
[f0978004] *pgd=64009811, *pte=40250653, *ppte=40250453
Internal error: : 1008 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.8.0-rc6-next-20240301 #18
Hardware name: Freescale i.MX7ULP (Device Tree)
PC is at pwm_imx_tpm_probe+0x1c/0xd8
LR is at __devm_ioremap_resource+0xf8/0x1dc
pc : [<c0629e58>] lr : [<c0562d4c>] psr: 80000053
sp : f0825e10 ip : 00000000 fp : 00000000
r10: c148f8c0 r9 : c41fc338 r8 : c164b000
r7 : 00000000 r6 : c406b400 r5 : c406b410 r4 : f0978000
r3 : 00000005 r2 : 00000000 r1 : a0000053 r0 : f0978000
Flags: Nzcv IRQs on FIQs off Mode SVC_32 ISA ARM Segment none
Control: 10c5387d Table: 6000406a DAC: 00000051
...
Call trace:
pwm_imx_tpm_probe from platform_probe+0x58/0xb0
platform_probe from really_probe+0xc4/0x2e0
really_probe from __driver_probe_device+0x84/0x19c
__driver_probe_device from driver_probe_device+0x2c/0x104
driver_probe_device from __driver_attach+0x90/0x170
__driver_attach from bus_for_each_dev+0x7c/0xd0
bus_for_each_dev from bus_add_driver+0xc4/0x1cc
bus_add_driver from driver_register+0x7c/0x114
driver_register from do_one_initcall+0x58/0x270
do_one_initcall from kernel_init_freeable+0x170/0x218
kernel_init_freeable from kernel_init+0x14/0x140
kernel_init from ret_from_fork+0x14/0x20
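A rough sketch of the shape of such a fix (function, field and register names are illustrative, not the driver's actual code): the clock has to be acquired and enabled before probe touches any register.
  #include <linux/clk.h>
  #include <linux/io.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  struct tpm_sketch {
          void __iomem *base;
          struct clk *clk;
          u32 version;
  };

  static int tpm_probe_sketch(struct platform_device *pdev)
  {
          struct tpm_sketch *tpm;

          tpm = devm_kzalloc(&pdev->dev, sizeof(*tpm), GFP_KERNEL);
          if (!tpm)
                  return -ENOMEM;

          tpm->base = devm_platform_ioremap_resource(pdev, 0);
          if (IS_ERR(tpm->base))
                  return PTR_ERR(tpm->base);

          /* Get *and enable* the clock before touching any register. */
          tpm->clk = devm_clk_get_enabled(&pdev->dev, NULL);
          if (IS_ERR(tpm->clk))
                  return dev_err_probe(&pdev->dev, PTR_ERR(tpm->clk),
                                       "failed to get and enable clock\n");

          /* Only now is it safe to read, e.g., a version/parameter register. */
          tpm->version = readl(tpm->base + 0x4 /* illustrative offset */);

          return 0;
  }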
Fixes: abf6569d6482 ("pwm: imx-tpm: Make use of devm_pwmchip_alloc() function")
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>
Link: https://lore.kernel.org/r/20240304102929.893542-1-aisheng.dong@nxp.com
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
|
|
This empty wrapper exists only for !CONFIG_MEMCG_KMEM and it seems it was
never used. It is probably a leftover from the development of a series.
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
|
|
Do not set rtnl_link_stats64 fields to zero, since they are zeroed
before ops->ndo_get_stats64 is called in the core dev_get_stats() function.
Also, simplify the data collection by removing the temporary variable.
Signed-off-by: Breno Leitao <leitao@debian.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With commit 34d21de99cea9 ("net: Move {l,t,d}stats allocation to core and
convert veth & vrf"), stats allocation can be done in the net core
instead of in this driver.
With this new approach, the driver doesn't have to bother with error
handling (allocation failure checking, making sure free happens in the
right spot, etc). This is core responsibility now.
Remove the allocation in the nlmon driver and leverage the network
core allocation.
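A minimal sketch of what leveraging the core allocation looks like after commit 34d21de99cea9 (illustrative, not the exact nlmon diff): the driver only declares the per-CPU stats type it wants and drops its own alloc/free.
  #include <linux/netdevice.h>

  /* In the driver's ->setup() / priv init path: */
  static void nlmon_setup_sketch(struct net_device *dev)
  {
          /* The core allocates and frees dev->lstats for us. */
          dev->pcpu_stat_type = NETDEV_PCPU_STAT_LSTATS;
  }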
Signed-off-by: Breno Leitao <leitao@debian.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The last patch, which added support for extending the firmware shared
data with channel data information, introduced a bug because the
reserved space was not adjusted accordingly.
This patch fixes the issue and also adds a BUILD_BUG to prevent this
kind of regression.
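As an illustration of such a BUILD_BUG guard (structure name and sizes are hypothetical): growing the structure without shrinking the reserved area now breaks the build instead of silently corrupting the shared layout.
  #include <linux/build_bug.h>

  struct fw_shared_sketch {
          u64 mac_chan_info[4];   /* newly fetched channel data */
          u8  reserved[224];      /* must shrink whenever fields are added above */
  };

  /* Called from an init path; fails the build if the layout shared with
   * firmware ever changes size. */
  static void fw_shared_layout_check(void)
  {
          BUILD_BUG_ON(sizeof(struct fw_shared_sketch) != 256);
  }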
Fixes: 997814491cee ("Octeontx2-af: Fetch MAC channel info from firmware")
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If the message fills up, we need to stop writing. 'break' will
only get us out of the iteration over the pools of a single
netdev; we also need to stop walking netdevs.
This results in either an infinite dump or missing pools,
depending on whether the message fills up on the last
netdev (infinite dump) or on a non-last one (missing pools).
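The control-flow pitfall is easy to reproduce in plain C (a toy sketch, not the netlink code itself): 'break' only leaves the inner loop, so the "message full" condition has to be propagated out of the outer loop as well.
  #include <stdio.h>

  int main(void)
  {
          int space_left = 3;     /* stands in for remaining netlink message space */

          for (int netdev = 0; netdev < 4; netdev++) {
                  for (int pool = 0; pool < 2; pool++) {
                          if (space_left-- == 0)
                                  goto full; /* a bare 'break' would keep walking netdevs */
                          printf("dumped netdev %d pool %d\n", netdev, pool);
                  }
          }
  full:
          return 0;
  }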
Fixes: 950ab53b77ab ("net: page_pool: implement GET in the netlink API")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Switching from a separate list to flags introduced a bug here.
We were accidentally ORing the flag before initializing the placement
instead of after, so this code did nothing except produce a
warning.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: a78a8da51b36 ("drm/ttm: replace busy placement with flags v6")
Link: https://patchwork.freedesktop.org/patch/msgid/20240226142759.93130-1-christian.koenig@amd.com
Tested-by: Stephen Rothwell <sfr@canb.auug.org.au> # compile only
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
struct net_device poll_dev in struct igc_q_vector was added
in one of the initial commits, but was never used.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Ziwei Xiao says:
====================
gve: Add header split support
Ethtool's ringparam recently gained a new field,
tcp-data-split, for enabling and disabling header split. These three
patches utilize that ethtool flag to support header split in the GVE
driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To record the stats of header split packets, three stats are added in
the driver's ethtool stats.
- rx_hsplit_pkt is the split packets count with header split
- rx_hsplit_bytes is the received header bytes count with header split
- rx_hsplit_unsplit_pkt is the unsplit packet count due to header buffer
overflow or zero header length when header split is enabled
Currently, it's entering the stats_update critical section more than
once per packet. We have plans to avoid that in a future change, letting
all the stats_update happen in one place at the end of
`gve_rx_poll_dqo`.
Co-developed-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add header buffers and ethtool support to enable header split via the
tcp-data-split flag in ethtool's ringparam config. Coherent DMA memory
is allocated for the header buffers; there is one header buffer per ring
entry, located by calculating the offset from the header buffers' starting
address.
The header buffer is always copied directly into the skb and payload is
always added as frags. When there is a header buffer overflow or the
header length is 0, the driver places the whole unsplit packet in frags.
When toggling header split, the driver will call gve_adjust_config to
set its queues appropriately. If header split is enabled by the user and
the max packet buffer size is no less than 4KB, the driver will set the
packet buffer size to 4KB to support TCP_ZEROCOPY_RECEIVE. Otherwise, the
driver will use the default 2KB packet buffer size.
`ethtool -G <dev> tcp-data-split on/off` is the command to toggle header
split.
`ethtool -g <dev>` will show the status of header split with the field
of `tcp-data-split`.
Co-developed-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To enable header split via ethtool, we first need to query the device to
get the max rx buffer size and header buffer size. Add a device option
to get these values and store them in the driver. If the header buffer
size received from the device is non-zero, it means header split is
supported in the device.
Currently the max rx buffer size will only be used when header split is
enabled which will set the data_buffer_size_dqo to be the max rx buffer
size. Also change data_buffer_size_dqo from int to u16, since we are
modifying it anyway and to make it consistent with max_rx_buffer_size.
Co-developed-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Ziwei Xiao <ziweixiao@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
syzbot triggered a bug in geneve_rx() [1]
The issue is similar to the one I fixed in commit 8d975c15c0cd
("ip6_tunnel: make sure to pull inner header in __ip6_tnl_rcv()")
We have to save skb->network_header in a temporary variable
in order to be able to recompute the network_header pointer
after a pskb_inet_may_pull() call.
pskb_inet_may_pull() makes sure the needed headers are in skb->head.
[1]
BUG: KMSAN: uninit-value in IP_ECN_decapsulate include/net/inet_ecn.h:302 [inline]
BUG: KMSAN: uninit-value in geneve_rx drivers/net/geneve.c:279 [inline]
BUG: KMSAN: uninit-value in geneve_udp_encap_recv+0x36f9/0x3c10 drivers/net/geneve.c:391
IP_ECN_decapsulate include/net/inet_ecn.h:302 [inline]
geneve_rx drivers/net/geneve.c:279 [inline]
geneve_udp_encap_recv+0x36f9/0x3c10 drivers/net/geneve.c:391
udp_queue_rcv_one_skb+0x1d39/0x1f20 net/ipv4/udp.c:2108
udp_queue_rcv_skb+0x6ae/0x6e0 net/ipv4/udp.c:2186
udp_unicast_rcv_skb+0x184/0x4b0 net/ipv4/udp.c:2346
__udp4_lib_rcv+0x1c6b/0x3010 net/ipv4/udp.c:2422
udp_rcv+0x7d/0xa0 net/ipv4/udp.c:2604
ip_protocol_deliver_rcu+0x264/0x1300 net/ipv4/ip_input.c:205
ip_local_deliver_finish+0x2b8/0x440 net/ipv4/ip_input.c:233
NF_HOOK include/linux/netfilter.h:314 [inline]
ip_local_deliver+0x21f/0x490 net/ipv4/ip_input.c:254
dst_input include/net/dst.h:461 [inline]
ip_rcv_finish net/ipv4/ip_input.c:449 [inline]
NF_HOOK include/linux/netfilter.h:314 [inline]
ip_rcv+0x46f/0x760 net/ipv4/ip_input.c:569
__netif_receive_skb_one_core net/core/dev.c:5534 [inline]
__netif_receive_skb+0x1a6/0x5a0 net/core/dev.c:5648
process_backlog+0x480/0x8b0 net/core/dev.c:5976
__napi_poll+0xe3/0x980 net/core/dev.c:6576
napi_poll net/core/dev.c:6645 [inline]
net_rx_action+0x8b8/0x1870 net/core/dev.c:6778
__do_softirq+0x1b7/0x7c5 kernel/softirq.c:553
do_softirq+0x9a/0xf0 kernel/softirq.c:454
__local_bh_enable_ip+0x9b/0xa0 kernel/softirq.c:381
local_bh_enable include/linux/bottom_half.h:33 [inline]
rcu_read_unlock_bh include/linux/rcupdate.h:820 [inline]
__dev_queue_xmit+0x2768/0x51c0 net/core/dev.c:4378
dev_queue_xmit include/linux/netdevice.h:3171 [inline]
packet_xmit+0x9c/0x6b0 net/packet/af_packet.c:276
packet_snd net/packet/af_packet.c:3081 [inline]
packet_sendmsg+0x8aef/0x9f10 net/packet/af_packet.c:3113
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x735/0xa10 net/socket.c:2191
__do_sys_sendto net/socket.c:2203 [inline]
__se_sys_sendto net/socket.c:2199 [inline]
__x64_sys_sendto+0x125/0x1c0 net/socket.c:2199
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
Uninit was created at:
slab_post_alloc_hook mm/slub.c:3819 [inline]
slab_alloc_node mm/slub.c:3860 [inline]
kmem_cache_alloc_node+0x5cb/0xbc0 mm/slub.c:3903
kmalloc_reserve+0x13d/0x4a0 net/core/skbuff.c:560
__alloc_skb+0x352/0x790 net/core/skbuff.c:651
alloc_skb include/linux/skbuff.h:1296 [inline]
alloc_skb_with_frags+0xc8/0xbd0 net/core/skbuff.c:6394
sock_alloc_send_pskb+0xa80/0xbf0 net/core/sock.c:2783
packet_alloc_skb net/packet/af_packet.c:2930 [inline]
packet_snd net/packet/af_packet.c:3024 [inline]
packet_sendmsg+0x70c2/0x9f10 net/packet/af_packet.c:3113
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg net/socket.c:745 [inline]
__sys_sendto+0x735/0xa10 net/socket.c:2191
__do_sys_sendto net/socket.c:2203 [inline]
__se_sys_sendto net/socket.c:2199 [inline]
__x64_sys_sendto+0x125/0x1c0 net/socket.c:2199
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xcf/0x1e0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
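A sketch of the pattern the fix relies on (illustrative, not the exact geneve diff): record the header offset relative to skb->head before pskb_inet_may_pull(), which may reallocate the head, and recompute the pointer afterwards.
  #include <linux/ip.h>
  #include <linux/skbuff.h>
  #include <net/ip_tunnels.h>

  static const struct iphdr *outer_iph_sketch(struct sk_buff *skb)
  {
          /* Offsets survive a head reallocation, pointers do not. */
          unsigned int nh = skb_network_header(skb) - skb->head;

          if (!pskb_inet_may_pull(skb))   /* may change skb->head */
                  return NULL;

          return (const struct iphdr *)(skb->head + nh);
  }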
Fixes: 2d07dc79fe04 ("geneve: add initial netdev driver for GENEVE tunnels")
Reported-and-tested-by: syzbot+6a1423ff3f97159aae64@syzkaller.appspotmail.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With commit 34d21de99cea9 ("net: Move {l,t,d}stats allocation to core and
convert veth & vrf"), stats allocation can be done in the net core
instead of in this driver.
With this new approach, the driver doesn't have to bother with error
handling (allocation failure checking, making sure free happens in the
right spot, etc). This is core responsibility now.
Remove the allocation in the ip6_tunnel driver and leverage the network
core allocation instead.
Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In order to debug commit_done timeouts, capture the devcoredump state
when the first timeout occurs after the encoder has been enabled.
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579850/
Link: https://lore.kernel.org/r/20240226-fd-dpu-debug-timeout-v4-3-51eec83dde23@linaro.org
|
|
Stop multiplexing several events via the dpu_encoder_wait_for_event()
function. Split it into two distinct functions to allow separate
handling of those events.
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579848/
Link: https://lore.kernel.org/r/20240226-fd-dpu-debug-timeout-v4-2-51eec83dde23@linaro.org
|
|
We have several reports of vblank timeout messages. However, after some
debugging it was found that there might be different causes for that.
To allow us to identify the DPU block that gets stuck, include the
actual CTL_FLUSH value in the timeout message.
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579849/
Link: https://lore.kernel.org/r/20240226-fd-dpu-debug-timeout-v4-1-51eec83dde23@linaro.org
|
|
All the components of YUV420 over DP have been added. Therefore, let's mark
the connector property as true for the DP connector when the DP type is not
eDP and when there is a CDM block available.
Changes in v3:
- Move setting the connector's ycbcr_420_allowed parameter so
that it is not dependent on if the dp_display is not eDP
Changes in v2:
- Check for if dp_catalog has a CDM block available instead of
checking if VSC SDP is allowed when setting the dp connector's
ycbcr_420_allowed parameter
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579628/
Link: https://lore.kernel.org/r/20240222194025.25329-20-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Reserve CDM blocks for DP if the mode format is YUV420. Currently this
reservation only works for writeback and DP if the format is YUV420. But
this can be easily extended to other YUV formats for DP.
Changes in v2:
- Minor code simplification
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579630/
Link: https://lore.kernel.org/r/20240222194025.25329-19-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Adjust the encoder timing engine setup programming in the case of video
mode for YUV420 over DP to accommodate CDM.
Changes in v3:
- Move drm_display_mode's hskew division to another patch
- Minor cleanup
Changes in v2:
- Move timing engine programming to this patch
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579634/
Link: https://lore.kernel.org/r/20240222194025.25329-18-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Adjust the encoder format programming in the case of video mode for DP
to accommodate CDM related changes.
Changes in v4:
- Remove hw_cdm check in dpu_encoder_needs_periph_flush()
- Remove hw_cdm check when getting the fmt_fourcc in
dpu_encoder_phys_vid_enable()
Changes in v2:
- Move timing engine programming to a separate patch from this
one
- Move update_pending_flush_periph() invocation completely to
this patch
- Change the logic of dpu_encoder_get_drm_fmt() so that it only
calls drm_mode_is_420_only() instead of doing additional
unnecessary checks
- Create new functions msm_dp_needs_periph_flush() and it's
supporting function dpu_encoder_needs_periph_flush() to check
if the mode is YUV420 and VSC SDP is enabled before doing a
peripheral flush
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579641/
Link: https://lore.kernel.org/r/20240222194025.25329-17-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
The DP controller can be set up to operate in either SDP update flush mode or
peripheral flush mode based on the DP controller hardware version.
Starting with DP v1.2, the hardware documentation requires the use of
peripheral flush mode for SDP packets such as PPS or VSC SDP packets.
In line with this guidance, let's program the DP controller to use
peripheral flush mode starting with DP v1.2.
Changes in v4:
- Clear up that DP_MAINLINK_CTRL_FLUSH_MODE register requires
the use of bits [24:23]
- Modify macros DP_MAINLINK_FLUSH_MODE_UPDATE_SDP and
DP_MAINLINK_FLUSH_MODE_SDP_PERIPH_UPDATE to explicitly set
their values in the bits of DP_MAINLINK_CTRL_FLUSH_MODE_MASK
Changes in v3:
- Clear up that the DP_MAINLINK_FLUSH_MODE_SDE_PERIPH_UPDATE
macro is setting bits [24:23] to a value of 3
Changes in v2:
- Use the original dp_catalog_hw_revision() function to
correctly check the DP HW version
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579621/
Link: https://lore.kernel.org/r/20240222194025.25329-16-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Introduce a peripheral flushing mechanism to decouple peripheral
metadata flushing from timing engine related flush.
Changes in v2:
- Fixed some misalignment issues
Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579619/
Link: https://lore.kernel.org/r/20240222194025.25329-15-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Add support to pack and send the VSC SDP packet for DP. This allows
the transmission of format information to the sinks, which is
needed for YUV420 support over DP.
Changes in v5:
- Slightly modify use of drm_dp_vsc_sdp_pack()
- Remove dp_catalog NULL checks
- Modify dp_utils_pack_sdp_header() to more clearly pack the
header buffer
- Move dp_utils_pack_sdp_header() inside of
dp_catalog_panel_send_vsc_sdp to clearly show the relationship
between the header buffer and the vsc_sdp struct
- Due to the last point, remove the dp_utils_pack_vsc_sdp()
function and only call drm_dp_vsc_sdp_pack() in
dp_panel_setup_vsc_sdp_yuv_420()
Changes in v4:
- Remove struct msm_dp_sdp_with_parity
- Use dp_utils_pack_sdp_header() to pack the SDP header and
parity bytes into a buffer
- Use this buffer when writing the VSC SDP data in
dp_catalog_panel_send_vsc_sdp()
- Write to all of the MMSS_DP_GENERIC0 registers instead of just
the ones with non-zero values
Changes in v3:
- Create a new struct, msm_dp_sdp_with_parity, which holds the
packing information for VSC SDP
- Use drm_dp_vsc_sdp_pack() to pack the data into the new
msm_dp_sdp_with_parity struct instead of specifically packing
for YUV420 format
- Modify dp_catalog_panel_send_vsc_sdp() to send the VSC SDP
data using the new msm_dp_sdp_with_parity struct
Changes in v2:
- Rename GENERIC0_SDPSIZE macro to GENERIC0_SDPSIZE_VALID
- Remove dp_sdp from the dp_catalog struct since this data is
being allocated at the point used
- Create a new function in dp_utils to pack the VSC SDP data
into a buffer
- Create a new function that packs the SDP header bytes into a
buffer. This function is made generic so that it can be
utilized by dp_audio
- Create a new function in dp_utils that takes the packed buffer
and writes to the DP_GENERIC0_* registers
- Split the dp_catalog_panel_config_vsc_sdp() function into two
to disable/enable sending VSC SDP packets
- Check the DP HW version using the original usage of
dp_catalog_hw_revision() and correct the version checking
logic
- Rename dp_panel_setup_vsc_sdp() to
dp_panel_setup_vsc_sdp_yuv_420() to explicitly state that
currently VSC SDP is only being set up to support YUV420 modes
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579636/
Link: https://lore.kernel.org/r/20240222194025.25329-14-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Shannon Nelson says:
====================
ionic: code cleanup and performance tuning
Brett has been performance testing and code tweaking and has
come up with several improvements for our fast path operations.
In a simple single thread / single queue iperf case on a 1500 MTU
connection we see an improvement from 74.2 to 86.7 Gbits/sec.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The MODULE_AUTHOR macro is supposed to name a person,
not a company.
Reviewed-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Clean up complaints from an xmastree.py scan.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use the kernel's CQE dim table to align better with the
driver's use of completion queues, and use the tx moderation
when using Tx interrupts.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
An earlier change moved the hwstamp queue check into a helper
function with an unlikely(). However, it makes more sense for
the caller to decide if it's likely() or unlikely(), so make
the change to support that.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
To help make sure we're only accessing things we really need
to access we can cut down on the q->lif->netdev references by
using q->dev which is already in cache.
Reviewed-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Instead of using q->lif->netdev, just pass the netdev when it's
locally defined.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If there is a lot of transmit traffic, the driver can get into a
situation where the device is starved because the doorbell is never
rung. This can happen if xmit_more is set constantly
and __netdev_tx_sent_queue() keeps returning false. Fix this
by checking if the queue needs to be stopped right before
calling __netdev_tx_sent_queue(). Use MAX_SKB_FRAGS + 1 as the
stop condition because that's the maximum number of frags
supported for non-TSO transmit.
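A hedged sketch of the tail of such a transmit path (names and structure are illustrative, not the ionic code): the stop decision is made before __netdev_tx_sent_queue(), so the doorbell still gets rung when the queue is about to stop, even with xmit_more set.
  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static void tx_tail_sketch(struct netdev_queue *txq, struct sk_buff *skb,
                             unsigned int descs_left,
                             void (*ring_doorbell)(void))
  {
          /* Stop if a worst-case non-TSO frame can no longer fit. */
          if (descs_left < MAX_SKB_FRAGS + 1)
                  netif_tx_stop_queue(txq);

          /* Returns true when the doorbell must be rung now (BQL limit hit,
           * queue stopped, or xmit_more not set). */
          if (__netdev_tx_sent_queue(txq, skb->len, netdev_xmit_more()))
                  ring_doorbell();
  }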
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The driver currently calls netdev_tx_completed_queue() for every
Tx completion. However, this API is only meant to be called once
per NAPI if any Tx work is done. Make the necessary changes to
support calling netdev_tx_completed_queue() only once per NAPI.
Also, use the __netdev_tx_sent_queue() API, which supports the
xmit_more functionality.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Make use of napi_consume_skb so that skb recycling
can happen by way of the napi_skb_cache.
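For reference, the call shape in a Tx completion handler running under NAPI (budget comes from the NAPI poll; passing 0 tells the helper it is not in NAPI context and it falls back to a regular free):
  /* instead of dev_consume_skb_any(skb): */
  napi_consume_skb(skb, budget);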
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Perf was showing some hot spots in ionic_tx_descs_needed()
for TSO traffic. Rework the function to return sooner where
possible.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Cut down the number of default Tx and Rx descriptors to save
initial memory requirements.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently the driver attempts to wake the Tx queue
for every descriptor processed. However, this is
overkill and can cause thrashing since Tx xmit can be
running concurrently on a different CPU than Tx clean.
Fix this by refactoring Tx cq servicing into its own
function so the Tx wake code can run after processing
all Tx descriptors.
The driver isn't using the expected memory barriers
to make sure the stop/start bits are coherent. Fix
this by making sure to use the correct memory barriers.
Also, the driver is using the wake API during Tx
xmit even though it's already scheduled. Fix this by
using the start API during Tx xmit.
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
I'm updating __assign_str() and will be removing the second parameter. To
make sure that it does not break anything, I make sure that it matches the
__string() field, as that is where the string is actually going to be
saved. To make sure that nothing breaks, I added a WARN_ON() to
make sure that what was used in __string() is the same as what is used in
__assign_str().
When making this change, an error was triggered, as __assign_str() now
expects the string passed in to be a char * value. Instead, I got the
following warning:
include/trace/events/qdisc.h: In function ‘trace_event_raw_event_qdisc_reset’:
include/trace/events/qdisc.h:91:35: error: passing argument 1 of 'strcmp' from incompatible pointer type [-Werror=incompatible-pointer-types]
91 | __assign_str(dev, qdisc_dev(q));
That's because qdisc_enqueue() and qdisc_reset() pass qdisc_dev(q)
to __assign_str() and to __string(), but that function returns a pointer
to struct net_device, not a string.
It appears that these events are just saving the pointer as a string and
then reading it as a string as well.
Use qdisc_dev(q)->name to save the device instead.
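Abbreviated sketch of the resulting trace event definition (only the relevant lines, illustrative): both __string() and __assign_str() now take the device name, which really is a string.
  /* include/trace/events/qdisc.h (excerpt) */
  TP_STRUCT__entry(
          __string(dev, qdisc_dev(q)->name)       /* was: qdisc_dev(q) */
  ),

  TP_fast_assign(
          __assign_str(dev, qdisc_dev(q)->name);  /* was: qdisc_dev(q) */
  ),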
Fixes: a34dac0b90552 ("net_sched: add tracepoints for qdisc_reset() and qdisc_destroy()")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Parity calculation is necessary for VSC SDP implementation. Therefore
create new files dp_utils.c and dp_utils.h and move the parity
calculating functions here. This ensures that they are usable by SDP
programming in both dp_catalog.c and dp_audio.c
Changes in v3:
- Change ordering of the header byte macros
Changes in v2:
- Create new files dp_utils.c and dp_utils.h
- Move the parity calculation to these new files instead of
having them in dp_catalog.c and dp_catalog.h
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579617/
Link: https://lore.kernel.org/r/20240222194025.25329-13-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|
|
Change all relevant DP controller related programming for YUV420 cases.
Namely, change the pixel clock math to consider YUV420 and modify the
MVID programming to consider YUV420.
Changes in v2:
- Move configuration control programming to a different commit
- Slight code simplification
- Add VSC SDP check when doing mode_pclk_khz division in
dp_bridge_mode_valid
Signed-off-by: Paloma Arellano <quic_parellan@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/579640/
Link: https://lore.kernel.org/r/20240222194025.25329-12-quic_parellan@quicinc.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
|