|
atomic_t based reference counting, including refcount_t, uses
atomic_inc_not_zero() for acquiring a reference. atomic_inc_not_zero() is
implemented with an atomic_try_cmpxchg() loop. High contention of the
reference count leads to retry loops and scales badly. There is nothing to
improve on this implementation as the semantics have to be preserved.
Provide rcuref as a scalable alternative solution which is suitable for RCU
managed objects. Similar to refcount_t it comes with overflow and underflow
detection and mitigation.
rcuref treats the underlying atomic_t as an unsigned integer and partitions
this space into zones:
0x00000000 - 0x7FFFFFFF  valid zone (1 .. (INT_MAX + 1) references)
0x80000000 - 0xBFFFFFFF  saturation zone
0xC0000000 - 0xFFFFFFFE  dead zone
0xFFFFFFFF               no reference
rcuref_get() unconditionally increments the reference count with
atomic_add_negative_relaxed(). rcuref_put() unconditionally decrements the
reference count with atomic_add_negative_release().
This unconditional increment avoids the inc_not_zero() problem, but
requires a more complex implementation on the put() side when the count
drops from 0 to -1.
When this transition is detected, an attempt is made to mark the reference
count dead by setting it to the midpoint of the dead zone with a single
atomic_cmpxchg_release() operation. This operation can fail due to a
concurrent rcuref_get() elevating the reference count from -1 to 0 again.
If the unconditional increment in rcuref_get() hits a reference count which
is marked dead (or saturated) it will detect it after the fact and bring
back the reference count to the midpoint of the respective zone. The zones
provide enough tolerance which makes it practically impossible to escape
from a zone.
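A minimal C sketch of the scheme described above; the zone constant and
helper names are illustrative only, and the in-tree rcuref implementation
differs in detail (saturation handling, exact memory ordering):

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/types.h>

#define SKETCH_DEAD	((int)0xe0000000)	/* midpoint of the dead zone */

/* Unconditional increment; a negative result means the count left the valid zone. */
static inline bool sketch_get(atomic_t *ref)
{
	if (likely(!atomic_add_negative_relaxed(1, ref)))
		return true;
	/* Slow path: the count was saturated or dead; the real code pulls
	 * it back to the midpoint of the respective zone here. */
	return false;
}

/* Returns true if this was the final reference and the count is now dead. */
static inline bool sketch_put(atomic_t *ref)
{
	if (likely(!atomic_add_negative_release(-1, ref)))
		return false;
	/*
	 * 0 -> -1 transition: try to mark the count dead. A concurrent
	 * sketch_get() may already have lifted it back to 0, in which
	 * case the cmpxchg fails and the object must not be freed.
	 */
	return atomic_cmpxchg_release(ref, -1, SKETCH_DEAD) == -1;
}

The fast paths are a single unconditional atomic add, which is what makes
the scheme scale under contention.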
The racy implementation of rcuref_put() requires protecting rcuref_put()
against a grace period ending in order to prevent a subtle use after
free. As RCU is the only mechanism which allows protecting against that, it
is not possible to fully replace the atomic_inc_not_zero() based
implementation of refcount_t with this scheme.
The final drop is slightly more expensive than the atomic_dec_return()
counterpart, but that's not the case which this is optimized for. The
optimization is on the high-frequency get()/put() pairs and their
scalability.
The performance of an uncontended rcuref_get()/put() pair where the put()
is not dropping the last reference is still on par with the plain atomic
operations, while at the same time providing overflow and underflow
detection and mitigation.
The performance of rcuref compared to plain atomic_inc_not_zero() and
atomic_dec_return() based reference counting under contention:
- Micro benchmark: All CPUs running an increment/decrement loop on an
elevated reference count, which means the 0 to -1 transition never
happens.
The performance gain depends on microarchitecture and the number of
CPUs and has been observed in the range of 1.3X to 4.7X
- Conversion of dst_entry::__refcnt to rcuref and testing with the
localhost memtier/memcached benchmark. That benchmark shows the
reference count contention prominently.
The performance gain depends on microarchitecture and the number of
CPUs and has been observed in the range of 1.1X to 2.6X over the
previous fix for the false sharing issue vs. struct
dst_entry::__refcnt.
When memtier is run over a real 1Gb network connection, there is a
small gain on top of the false sharing fix. The two changes combined
result in a 2%-5% total gain for that networked test.
Reported-by: Wangyang Guo <wangyang.guo@intel.com>
Reported-by: Arjan Van De Ven <arjan.van.de.ven@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20230323102800.158429195@linutronix.de
|
|
atomic_add_negative() does not provide the relaxed/acquire/release
variants.
Provide them in preparation for a new scalable reference count algorithm.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230323102800.101763813@linutronix.de
|
|
With ISO 15765-2:2016 the PDU size is not limited to 2^12 - 1 (4095)
bytes but can be represented as a 32 bit unsigned integer value which
allows 2^32 - 1 bytes (~4GB). The use-cases like automotive unified
diagnostic services (UDS) and flashing of ECUs still use the small
static buffers which are provided at socket creation time.
When a use-case requires transferring PDUs of up to 1025 kByte, the
maximum PDU size can now be extended by setting the module parameter
max_pdu_size. The extended-size buffers are only allocated on a
per-socket/connection basis when needed at run-time.
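For illustration only, a minimal user-space sketch of a transfer that
needs the larger per-socket buffers; the interface name and CAN IDs are
assumptions, and the can-isotp module must have been loaded with a
sufficiently large max_pdu_size:

#include <linux/can.h>
#include <linux/can/isotp.h>
#include <net/if.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

int send_large_pdu(const void *pdu, size_t len)
{
	struct sockaddr_can addr = { .can_family = AF_CAN };
	int s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);

	if (s < 0)
		return -1;

	addr.can_ifindex = if_nametoindex("can0");	/* assumed interface */
	addr.can_addr.tp.tx_id = 0x123;			/* assumed addressing */
	addr.can_addr.tp.rx_id = 0x321;

	/* len may now exceed the classic 4095 byte limit */
	if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    write(s, pdu, len) < 0) {
		close(s);
		return -1;
	}
	close(s);
	return 0;
}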
changes since v2: https://lore.kernel.org/all/20230313172510.3851-1-socketcan@hartkopp.net
- use ARRAY_SIZE() to reference DEFAULT_MAX_PDU_SIZE only at one place
changes since v1: https://lore.kernel.org/all/20230311143446.3183-1-socketcan@hartkopp.net
- limit the minimum 'max_pdu_size' to 4095 to maintain the classic
behavior before ISO 15765-2:2016
Link: https://github.com/raspberrypi/linux/issues/5371
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/all/20230326115911.15094-1-socketcan@hartkopp.net
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
Marc Kleine-Budde says:
====================
pull-request: can-next 2023-03-27
The first 2 patches by Geert Uytterhoeven add transceiver support and
improve the error messages in the rcar_canfd driver.
Cai Huoqing contributes 3 patches which remove a redundant call to
pci_clear_master() in the c_can, ctucanfd and kvaser_pciefd driver.
Frank Jungclaus's patch replaces the struct esd_usb_msg with a union
in the esd_usb driver to improve readability.
Markus Schneider-Pargmann contributes 5 patches to improve the
performance in the m_can driver, especially for SPI attached
controllers like the tcan4x5x.
* tag 'linux-can-next-for-6.4-20230327' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
can: m_can: Keep interrupts enabled during peripheral read
can: m_can: Disable unused interrupts
can: m_can: Remove double interrupt enable
can: m_can: Always acknowledge all interrupts
can: m_can: Remove repeated check for is_peripheral
can: esd_usb: Improve code readability by means of replacing struct esd_usb_msg with a union
can: kvaser_pciefd: Remove redundant pci_clear_master
can: ctucanfd: Remove redundant pci_clear_master
can: c_can: Remove redundant pci_clear_master
can: rcar_canfd: Improve error messages
can: rcar_canfd: Add transceiver support
====================
Link: https://lore.kernel.org/r/20230327073354.1003134-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Shay Agroskin says:
====================
Add tx push buf len param to ethtool
This patchset adds a new sub-configuration to ethtool get/set queue
params (ethtool -g) called 'tx-push-buf-len'.
This configuration specifies the maximum number of bytes of a
transmitted packet a driver can push directly to the underlying
device ('push' mode). Pushing some of the bytes to the device has the
following advantages:
- Allowing a smart device to take fast actions based on the packet's
header
- Reducing latency for small packets that can be copied completely into
the device
This new param is practically similar to the tx-copybreak value that can
be set using ethtool's tunable, but conceptually serves a different
purpose. While tx-copybreak is used to reduce the overhead of DMA mapping
and makes no sense to use if less than the whole segment gets copied,
tx-push-buf-len allows improving performance by analyzing the packet's
data (usually headers) before performing the DMA operation.
The configuration can be queried and set using the commands:
$ ethtool -g [interface]
# ethtool -G [interface] tx-push-buf-len [number of bytes]
This patchset also adds support for the new configuration in ENA driver
for which this parameter ensures efficient resources management on the
device side.
====================
Link: https://lore.kernel.org/r/20230323163610.1281468-1-shayagr@amazon.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
LLQ is auto-enabled by the device, and disabling it isn't supported on
new ENA generations, while on old ones it causes sub-optimal performance.
This patch adds advertisement of push-mode when LLQ mode is used, but
rejects an attempt to modify it.
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The ENA driver allows for two distinct values for the number of bytes
of the packet's payload that can be written directly to the device.
For a value of 224 the driver turns on Large LLQ Header mode in which
the first 224 bytes of the packet's payload are written to the LLQ.
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
With the ability to modify the LLQ entry size, the size of the packet's
payload that can be written directly to the device changes.
This patch makes the driver recalculate this information every device
negotiation (also called device reset).
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Allow configuring the device with large LLQ headers. The Low Latency
Queue (LLQ) allows the driver to write the first N bytes of the packet,
along with the rest of the TX descriptors, directly into the device (N can be
either 96 or 224 for large LLQ headers configuration).
Having L4 TCP/UDP headers contained in the first 96 bytes of the packet
is required to get maximum performance from the device.
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: David Arinzon <darinzon@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Move the ena_calc_io_queue_size() implementation closer to the file's
beginning so that it can later be called from the ena_device_init()
function without adding a function declaration.
Also add an empty line at some spots to separate logical blocks in
functions.
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This attribute, which is part of ethtool's ring param configuration,
allows the user to specify the maximum number of bytes of the packet's
payload that can be written directly to the device.
Example usage:
# ethtool -G [interface] tx-push-buf-len [number of bytes]
Co-developed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Similar to NL_SET_ERR_MSG_FMT, add a macro which sets the netlink policy
error message with a format string.
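For context, a sketch of how the existing formatted extack helper
referenced above is used; the new policy-aware variant added here is used
analogously but also records the offending attribute and policy (the
validation helper below is hypothetical):

#include <linux/errno.h>
#include <linux/netlink.h>
#include <net/netlink.h>

/* Hypothetical attribute check, for illustration only. */
static int example_validate_limit(const struct nlattr *attr,
				  struct netlink_ext_ack *extack)
{
	u32 val = nla_get_u32(attr);

	if (val > 128) {
		NL_SET_ERR_MSG_FMT(extack,
				   "value %u exceeds the limit of %u",
				   val, 128);
		return -EINVAL;
	}
	return 0;
}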
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can
Marc Kleine-Budde says:
====================
pull-request: can 2023-03-27
Oleksij Rempel and Hillf Danton contribute a patch for the CAN J1939
protocol that prevents a potential deadlock in j1939_sk_errqueue().
Ivan Orlov fixes an uninit-value in the CAN BCM protocol in the
bcm_tx_setup() function.
* tag 'linux-can-fixes-for-6.3-20230327' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can:
can: bcm: bcm_tx_setup(): fix KMSAN uninit-value in vfs_write
can: j1939: prevent deadlock by moving j1939_sk_errqueue()
====================
Link: https://lore.kernel.org/r/20230327124807.1157134-1-mkl@pengutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
clang with W=1 reports
drivers/net/ethernet/qlogic/qed/qed_ll2.c:649:6: error: variable
'num_ooo_add_to_peninsula' set but not used [-Werror,-Wunused-but-set-variable]
u32 num_ooo_add_to_peninsula = 0, cid;
^
This variable is not used so remove it.
Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20230326001733.1343274-1-trix@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Some MAINTAINERS sections ask for patches to be mailed to the list
linux-nfc@lists.01.org. Probably due to changes on Intel's 01.org website
and servers, the list server lists.01.org/ml01.01.org is simply gone.
Considering emails recorded on lore.kernel.org, only a handful of emails
were sent to the linux-nfc@lists.01.org list, and they were usually also
sent to the netdev mailing list, where they are then picked up. So, there
is no big benefit in restoring the linux-nfc list elsewhere.
Remove all occurrences of the linux-nfc@lists.01.org list in MAINTAINERS.
Suggested-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/all/CAKXUXMzggxQ43DUZZRkPMGdo5WkzgA=i14ySJUFw4kZfE5ZaZA@mail.gmail.com/
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20230324081613.32000-1-lukas.bulwahn@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
I've read through or reworked a good portion of this driver. Add myself
as a reviewer.
Signed-off-by: Sean Anderson <sean.anderson@seco.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Acked-by: Madalin Bucur <madalin.bucur@oss.nxp.com>
Link: https://lore.kernel.org/r/20230323145957.2999211-1-sean.anderson@seco.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Currently, MAX_SKB_FRAGS value is 17.
For standard tcp sendmsg() traffic, no big deal because tcp_sendmsg()
attempts order-3 allocations, stuffing 32768 bytes per frag.
But with zero copy, we use order-0 pages.
For BIG TCP to show its full potential, we add a config option
to be able to fit up to 45 segments per skb.
This is also needed for BIG TCP rx zerocopy, as zerocopy currently
does not support skbs with frag list.
We have used the MAX_SKB_FRAGS=45 value for years at Google before
we deployed 4K MTU, with no adverse effect, other than
a recent issue in mlx4, fixed in commit 26782aad00cc
("net/mlx4: MLX4_TX_BOUNCE_BUFFER_SIZE depends on MAX_SKB_FRAGS")
Back then, goal was to be able to receive full size (64KB) GRO
packets without the frag_list overhead.
Note that /proc/sys/net/core/max_skb_frags can also be used to limit
the number of fragments TCP can use in tx packets.
By default we keep the old/legacy value of 17 until we get
more coverage for the updated values.
Sizes of struct skb_shared_info on 64bit arches:
MAX_SKB_FRAGS | sizeof(struct skb_shared_info)
==============================================
      17      | 320
      21      | 320 + 64  = 384
      25      | 320 + 128 = 448
      29      | 320 + 192 = 512
      33      | 320 + 256 = 576
      37      | 320 + 320 = 640
      41      | 320 + 384 = 704
      45      | 320 + 448 = 768
This inflation might cause problems for drivers assuming they could pack
both the incoming packet (for MTU=1500) and skb_shared_info in half a page,
using build_skb().
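Purely as an illustrative sketch (not part of this patch), a driver
relying on that half-page assumption with build_skb() could assert it at
build time; the buffer size and minimum frame length below are
assumptions:

#include <linux/build_bug.h>
#include <linux/skbuff.h>

/* Assumed receive buffer layout: frame data plus skb_shared_info in half a page. */
#define RX_BUF_SIZE	(PAGE_SIZE / 2)
#define RX_MIN_FRAME	1536	/* room for an MTU=1500 frame plus headers */

static inline void rx_buf_layout_check(void)
{
	/* Fails to build once a larger skb_shared_info no longer leaves
	 * enough room for the frame within half a page. */
	BUILD_BUG_ON(SKB_WITH_OVERHEAD(RX_BUF_SIZE) < RX_MIN_FRAME);
}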
v3: fix build error when CONFIG_NET=n
v2: fix two build errors assuming MAX_SKB_FRAGS was "unsigned long"
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Link: https://lore.kernel.org/r/20230323162842.1935061-1-eric.dumazet@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Pull kvm fixes from Paolo Bonzini:
"RISC-V:
- Fix VM hang in case of timer delta being zero
ARM:
- MMU fixes:
- Read the MMU notifier seq before dropping the mmap lock to guard
against reading a potentially stale VMA
- Disable interrupts when walking user page tables to protect
against the page table being freed
- Read the MTE permissions for the VMA within the mmap lock
critical section, avoiding the use of a potentially stale VMA
pointer
- vPMU fixes:
- Return the sum of the current perf event value and PMC snapshot
for reads from userspace
- Don't save the value of guest writes to PMCR_EL0.{C,P}, which
could otherwise lead to userspace erroneously resetting the vPMU
during VM save/restore"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
riscv/kvm: Fix VM hang in case of timer delta being zero.
KVM: arm64: Check for kvm_vma_mte_allowed in the critical section
KVM: arm64: Disable interrupts while walking userspace PTs
KVM: arm64: Retry fault if vma_lookup() results become invalid
KVM: arm64: PMU: Don't save PMCR_EL0.{C,P} for the vCPU
KVM: arm64: PMU: Fix GET_ONE_REG for vPMC regs to return the current value
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86
Pull x86 platform driver fixes from Hans de Goede:
- Intel tpmi/vsec fixes
- think-lmi fixes
- two other small fixes / hw-id additions
* tag 'platform-drivers-x86-v6.3-3' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
platform/surface: aggregator: Add missing fwnode_handle_put()
platform/x86: think-lmi: Add possible_values for ThinkStation
platform/x86: think-lmi: only display possible_values if available
platform/x86: think-lmi: use correct possible_values delimiters
platform/x86: think-lmi: add missing type attribute
platform/x86 (gigabyte-wmi): Add support for A320M-S2H V2
platform/x86/intel: tpmi: Revise the comment of intel_vsec_add_aux
platform/x86/intel: tpmi: Fix double free in tpmi_create_device()
platform/x86/intel: vsec: Fix a memory leak in intel_vsec_add_aux
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux
Pull MTD fixes from Miquel Raynal:
"Raw NAND controller driver fixes:
- meson:
- Invalidate cache on polling ECC bit
- Initialize struct with zeroes
- nandsim: Artificially prevent sequential page reads
ECC engine driver fixes:
- mxic-ecc: Fix mxic_ecc_data_xfer_wait_for_completion() when irq is
used
Binding fixes:
- jedec,spi-nor: Document CPOL/CPHA support"
* tag 'mtd/fixes-for-6.3-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux:
mtd: rawnand: meson: invalidate cache on polling ECC bit
mtd: rawnand: nandsim: Artificially prevent sequential page reads
dt-bindings: mtd: jedec,spi-nor: Document CPOL/CPHA support
mtd: nand: mxic-ecc: Fix mxic_ecc_data_xfer_wait_for_completion() when irq is used
mtd: rawnand: meson: initialize struct with zeroes
|
|
Return -EFAULT if put_user() for the PTRACE_GET_LAST_BREAK
request fails, instead of silently ignoring it.
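A minimal sketch of the pattern (not the actual s390 ptrace hunk; the
field name is an assumption):

#include <linux/sched.h>
#include <linux/uaccess.h>

/* Propagate the put_user() result to the tracer instead of discarding it. */
static int report_last_break(struct task_struct *child, unsigned long data)
{
	return put_user(child->thread.last_break,
			(unsigned long __user *)data);
}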
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Expolines depend on scripts/basic/fixdep, and the build of expolines can
now race with the fixdep build:
make[1]: *** Deleting file 'arch/s390/lib/expoline/expoline.o'
/bin/sh: line 1: scripts/basic/fixdep: Permission denied
make[1]: *** [../scripts/Makefile.build:385: arch/s390/lib/expoline/expoline.o] Error 126
make: *** [../arch/s390/Makefile:166: expoline_prepare] Error 2
The dependency was removed in the below Fixes: commit, so reintroduce
the dependency on scripts.
Fixes: a0b0987a7811 ("s390/nospec: remove unneeded header includes")
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: stable@vger.kernel.org
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org>
Link: https://lore.kernel.org/r/20230316112809.7903-1-jirislaby@kernel.org
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
The device release callback function invoked to release the matrix device
uses the dev_get_drvdata(struct device *dev) function to retrieve the
pointer to the vfio_matrix_dev object in order to free its storage. The
problem is, this object is not stored as drvdata with the device; since the
kfree function will accept a NULL pointer, the memory for the
vfio_matrix_dev object is never freed.
Since the device being released is contained within the vfio_matrix_dev
object, the container_of macro will be used to retrieve its pointer.
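A sketch of the resulting release callback, assuming struct and member
names along the lines below (illustrative only):

#include <linux/device.h>
#include <linux/slab.h>

struct example_matrix_dev {
	struct device device;
	/* driver specific members */
};

static void example_matrix_dev_release(struct device *dev)
{
	/* Recover the containing object from the embedded struct device
	 * instead of relying on drvdata that was never set. */
	struct example_matrix_dev *matrix_dev =
		container_of(dev, struct example_matrix_dev, device);

	kfree(matrix_dev);
}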
Fixes: 1fde573413b5 ("s390: vfio-ap: base implementation of VFIO AP device driver")
Signed-off-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Link: https://lore.kernel.org/r/20230320150447.34557-1-akrowiak@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
Add missing earlyclobber annotation to size, to, and tmp2 operands of the
__clear_user() inline assembly since they are modified or written to before
the last usage of all input operands. This can lead to incorrect register
allocation for the inline assembly.
Fixes: 6c2a9e6df604 ("[S390] Use alternative user-copy operations for new hardware.")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/all/20230321122514.1743889-3-mark.rutland@arm.com/
Cc: stable@vger.kernel.org
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
|
|
HEAD
KVM/riscv fixes for 6.3, take #1
- Fix VM hang in case of timer delta being zero
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm64 fixes for 6.3, part #2
Fixes for a rather interesting set of bugs relating to the MMU:
- Read the MMU notifier seq before dropping the mmap lock to guard
against reading a potentially stale VMA
- Disable interrupts when walking user page tables to protect against
the page table being freed
- Read the MTE permissions for the VMA within the mmap lock critical
section, avoiding the use of a potentially stale VMA pointer
Additionally, some fixes targeting the vPMU:
- Return the sum of the current perf event value and PMC snapshot for
reads from userspace
- Don't save the value of guest writes to PMCR_EL0.{C,P}, which could
otherwise lead to userspace erroneously resetting the vPMU during VM
save/restore
|
|
Syzkaller reported the following issue:
=====================================================
BUG: KMSAN: uninit-value in aio_rw_done fs/aio.c:1520 [inline]
BUG: KMSAN: uninit-value in aio_write+0x899/0x950 fs/aio.c:1600
aio_rw_done fs/aio.c:1520 [inline]
aio_write+0x899/0x950 fs/aio.c:1600
io_submit_one+0x1d1c/0x3bf0 fs/aio.c:2019
__do_sys_io_submit fs/aio.c:2078 [inline]
__se_sys_io_submit+0x293/0x770 fs/aio.c:2048
__x64_sys_io_submit+0x92/0xd0 fs/aio.c:2048
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Uninit was created at:
slab_post_alloc_hook mm/slab.h:766 [inline]
slab_alloc_node mm/slub.c:3452 [inline]
__kmem_cache_alloc_node+0x71f/0xce0 mm/slub.c:3491
__do_kmalloc_node mm/slab_common.c:967 [inline]
__kmalloc+0x11d/0x3b0 mm/slab_common.c:981
kmalloc_array include/linux/slab.h:636 [inline]
bcm_tx_setup+0x80e/0x29d0 net/can/bcm.c:930
bcm_sendmsg+0x3a2/0xce0 net/can/bcm.c:1351
sock_sendmsg_nosec net/socket.c:714 [inline]
sock_sendmsg net/socket.c:734 [inline]
sock_write_iter+0x495/0x5e0 net/socket.c:1108
call_write_iter include/linux/fs.h:2189 [inline]
aio_write+0x63a/0x950 fs/aio.c:1600
io_submit_one+0x1d1c/0x3bf0 fs/aio.c:2019
__do_sys_io_submit fs/aio.c:2078 [inline]
__se_sys_io_submit+0x293/0x770 fs/aio.c:2048
__x64_sys_io_submit+0x92/0xd0 fs/aio.c:2048
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
CPU: 1 PID: 5034 Comm: syz-executor350 Not tainted 6.2.0-rc6-syzkaller-80422-geda666ff2276 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/12/2023
=====================================================
We can follow the call chain and find that the 'bcm_tx_setup' function
calls 'memcpy_from_msg' to copy some content into the newly allocated
frame of 'op->frames'. After that, the 'len' field of the copied structure
is compared with a constant value (64 or 8). However, if
'memcpy_from_msg' returns an error, we will compare uninitialized
memory. This triggers the 'uninit-value' issue.
This patch adds handling of possible 'memcpy_from_msg' errors to
avoid the uninit-value issue.
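A sketch of the error-handling pattern the fix applies; the helper below
is hypothetical and not the exact net/can/bcm.c hunk:

#include <linux/can.h>
#include <linux/errno.h>
#include <linux/skbuff.h>

/* Only look at cf->len once the copy from the message is known good. */
static int copy_canfd_frame_from_msg(struct canfd_frame *cf,
				     struct msghdr *msg, size_t cfsiz)
{
	int err = memcpy_from_msg((u8 *)cf, msg, cfsiz);

	if (err < 0)
		return err;	/* cf may be uninitialized, do not inspect it */

	if (cf->len > CANFD_MAX_DLEN)
		return -EINVAL;

	return 0;
}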
Tested via syzkaller
Reported-by: syzbot+c9bfd85eca611ebf5db1@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?id=47f897f8ad958bbde5790ebf389b5e7e0a345089
Signed-off-by: Ivan Orlov <ivan.orlov0322@gmail.com>
Fixes: 6f3b911d5f29b ("can: bcm: add support for CAN FD frames")
Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
Link: https://lore.kernel.org/all/20230314120445.12407-1-ivan.orlov0322@gmail.com
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
If we fail to adjust the GuC run-control on opening the perf stream,
make sure we unwind the wakeref just taken.
v2: Retain old goto label names (Ashutosh)
v3: Drop bitfield boolean
Fixes: 01e742746785 ("drm/i915/guc: Support OA when Wa_16011777198 is enabled")
Signed-off-by: Chris Wilson <chris.p.wilson@linux.intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230323225901.3743681-2-umesh.nerlige.ramappa@intel.com
(cherry picked from commit 2810ac6c753d17ee2572ffb57fe2382a786a080a)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
Currently i915_gem_object_is_framebuffer() doesn't treat the
BO containing the framebuffer's DPT as a framebuffer itself.
This means e.g. that the shrinker can evict the DPT BO while
leaving the actual FB BO bound, when the DPT is allocated
from regular shmem.
That causes an immediate oops during hibernate as we
try to rewrite the PTEs inside the already evicted
DPT obj.
TODO: presumably this might also be the reason for the
DPT related display faults under heavy memory pressure,
but I'm still not sure how that would happen as the object
should be pinned by intel_dpt_pin() while in active use by
the display engine...
Cc: stable@vger.kernel.org
Cc: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Fixes: 0dc987b699ce ("drm/i915/display: Add smem fallback allocation for dpt")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230320090522.9909-2-ville.syrjala@linux.intel.com
Reviewed-by: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
(cherry picked from commit 779cb5ba64ec7df80675a956c9022929514f517a)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
i915_gem_object_create_lmem_from_data() lacks the flush of the data
written to lmem to ensure the object is marked as dirty and the writes
flushed to the backing store. Once created, we can immediately release
the obj->mm.mapping caching of the vmap.
Fixes: 7acbbc7cf485 ("drm/i915/guc: put all guc objects in lmem when available")
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Andi Shyti <andi.shyti@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Chris Wilson <chris.p.wilson@linux.intel.com>
Cc: <stable@vger.kernel.org> # v5.16+
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230316165918.13074-1-nirmoy.das@intel.com
(cherry picked from commit e2ee10474ce766686e7a7496585cdfaf79e3a1bf)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
The commit renaming icl_tc_phy_is_in_safe_mode() to
icl_tc_phy_take_ownership() didn't flip the function's return value
accordingly, fix this up.
This didn't cause an actual problem besides state check errors, since
the function is only used during HW readout.
Cc: José Roberto de Souza <jose.souza@intel.com>
Fixes: f53979d68a77 ("drm/i915/display/tc: Rename safe_mode functions ownership")
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230316131724.359612-4-imre.deak@intel.com
(cherry picked from commit f2c7959dda614d9b7c6a41510492de39d31705ec)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
Keeping DC states enabled is incompatible with the _noarm()/_arm()
split we use for writing pipe/plane registers. When DC5 and PSR
are enabled, all pipe/plane registers effectively become self-arming
on account of DC5 exit arming the update, and PSR exit latching it.
What probably saves us most of the time is that (with PIPE_MISC[21]=0)
all pipe register writes themselves trigger PSR exit, and then
we don't re-enter PSR until the idle frame count has elapsed.
So it may be that the PSR exit happens already before we've
updated the state too much.
Also the PSR1 panel (at least on this KBL) seems to discard the first
frame we transmit, presumably still scanning out from its internal
framebuffer at that point. So only the second frame we transmit is
actually visible. But I suppose that could also be panel specific
behaviour. I haven't checked out how other PSR panels behave, nor
did I bother to check what the eDP spec has to say about this.
And since this really is all about DC states, let's switch from
the MODESET domain to the DC_OFF domain. Functionally they are
100% identical. We should probably remove the MODESET domain...
And for good measure let's toss in an assert to the place where
we do the _noarm() register writes to make sure DC states are
in fact off.
v2: Just use intel_display_power_is_enabled() (Imre)
Cc: <stable@vger.kernel.org> #v5.17+
Cc: Manasi Navare <navaremanasi@google.com>
Cc: Drew Davenport <ddavenport@chromium.org>
Cc: Jouni Högander <jouni.hogander@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Fixes: d13dde449580 ("drm/i915: Split pipe+output CSC programming to noarm+arm pair")
Fixes: f8a005eb8972 ("drm/i915: Optimize icl+ universal plane programming")
Fixes: 890b6ec4a522 ("drm/i915: Split skl+ plane update into noarm+arm pair")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230320183532.17727-1-ville.syrjala@linux.intel.com
(cherry picked from commit 41b4c7fe72b6105a4b49395eea9aa40cef94288d)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
Unlike SKL/GLK the ICL CSC unit suffers from a new issue where
CSC_MODE arming is sticky. That is, once armed it remains armed
causing the CSC coeff/offset registers to become effectively
self-arming.
CSC coeff/offset registers writes no longer disarm the CSC,
but fortunately register reads still do. So we can use that
to disarm the CSC unit once the registers for the current
frame have been latched. This avoids the self-arming behaviour
from persisting into the next frame's .color_commit_noarm()
call.
Cc: <stable@vger.kernel.org> #v5.19+
Cc: Manasi Navare <navaremanasi@google.com>
Cc: Drew Davenport <ddavenport@chromium.org>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jouni Högander <jouni.hogander@intel.com>
Fixes: d13dde449580 ("drm/i915: Split pipe+output CSC programming to noarm+arm pair")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230320095438.17328-5-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit 92736f1b452bbb8a66bdb5b1d263ad00e04dd3b8)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
We're going to need stuff after the color management
register latching has happened. Add a corresponding hook.
Cc: <stable@vger.kernel.org> #v5.19+
Cc: Manasi Navare <navaremanasi@google.com>
Cc: Drew Davenport <ddavenport@chromium.org>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jouni Högander <jouni.hogander@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230320095438.17328-4-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit 3962ca4e080a525fc9eae87aa6b2286f1fae351d)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
SKL/GLK CSC unit suffers from a nasty issue where a CSC
coeff/offset register read or write between DC5 exit and
PSR exit will undo the CSC arming performed by DMC, and
then during PSR exit the hardware will latch zeroes into
the active CSC registers. This causes any plane going
through the CSC to output all black.
We can sidestep the issue by making sure the PSR exit has
already actually happened before we touch the CSC coeff/offset
registers. Easiest way to guarantee that is to just move the
CSC programming back into the .color_commit_arm() as we force
a PSR exit (and crucially wait for it to actually happen)
prior to touching the arming registers.
When PSR (and thus also DC states) are disabled we don't
have anything to worry about, so we can keep using the
more optimal _noarm() hook for writing the CSC registers.
Cc: <stable@vger.kernel.org> #v5.19+
Cc: Manasi Navare <navaremanasi@google.com>
Cc: Drew Davenport <ddavenport@chromium.org>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jouni Högander <jouni.hogander@intel.com>
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/8283
Fixes: d13dde449580 ("drm/i915: Split pipe+output CSC programming to noarm+arm pair")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230320095438.17328-3-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit 80a892a4c2428b65366721599fc5fe50eaed35fd)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
We're going to want different behavior for skl/glk vs. icl
in .color_commit_noarm(), so split the hook into two. Arguably
we already had slightly different behaviour since
csc_enable/gamma_enable are never set on icl+, so the old
code was perhaps a bit confusing as well.
Cc: <stable@vger.kernel.org> #v5.19+
Cc: Manasi Navare <navaremanasi@google.com>
Cc: Drew Davenport <ddavenport@chromium.org>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Jouni Högander <jouni.hogander@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230320095438.17328-2-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit f161eb01f50ab31f2084975b43bce54b7b671e17)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
Expose intel_rps_read_actual_frequency_fw to read the actual freq without
taking forcewake for use by PMU. The code is refactored to use a common set
of functions across sysfs and PMU. Using common functions with sysfs in PMU
solves the issues of missing support for MTL and missing support for older
generations (prior to Gen6). It also future-proofs the PMU for cases
where code has been updated for sysfs but the PMU has been missed.
v2: Remove runtime_pm_if_in_use from read_actual_frequency_fw (Tvrtko)
v3: (Tvrtko)
- Remove goto in __read_cagf
- Unexport intel_rps_get_cagf and intel_rps_read_punit_req
Fixes: 22009b6dad66 ("drm/i915/mtl: Modify CAGF functions for MTL")
Link: https://gitlab.freedesktop.org/drm/intel/-/issues/8280
Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230316004800.2539753-1-ashutosh.dixit@intel.com
(cherry picked from commit 44df42e66139b5fac8db49ee354be279210f9816)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
The blamed commit has introduced the following tests to
dwmac4_add_hw_vlan_rx_fltr(), called from stmmac_vlan_rx_add_vid():
	if (hw->promisc) {
		netdev_err(dev,
			   "Adding VLAN in promisc mode not supported\n");
		return -EPERM;
	}
"VLAN promiscuous" mode is keyed in this driver to IFF_PROMISC, and so,
vlan_vid_add() and vlan_vid_del() calls cannot take place in IFF_PROMISC
mode. I have the following 2 arguments that this restriction is.... hm,
how shall I put it nicely... unproductive :)
First, take the case of a Linux bridge. If the kernel is compiled with
CONFIG_BRIDGE_VLAN_FILTERING=y, then this bridge shall have a VLAN
database. The bridge shall try to call vlan_add_vid() on its bridge
ports for each VLAN in the VLAN table. It will do this irrespectively of
whether that port is *currently* VLAN-aware or not. So it will do this
even when the bridge was created with vlan_filtering 0.
But the Linux bridge, in VLAN-unaware mode, configures its ports in
promiscuous (IFF_PROMISC) mode, so that they accept packets with any
MAC DA (a switch must do this in order to forward those packets which
are not directly targeted to its MAC address).
As a result, the stmmac driver does not work as a bridge port, when the
kernel is compiled with CONFIG_BRIDGE_VLAN_FILTERING=y.
$ ip link add br0 type bridge && ip link set br0 up
$ ip link set eth0 master br0 && ip link set eth0 up
[ 2333.943296] br0: port 1(eth0) entered blocking state
[ 2333.943381] br0: port 1(eth0) entered disabled state
[ 2333.943782] device eth0 entered promiscuous mode
[ 2333.944080] 4033c000.ethernet eth0: Adding VLAN in promisc mode not supported
[ 2333.976509] 4033c000.ethernet eth0: failed to initialize vlan filtering on this port
RTNETLINK answers: Operation not permitted
Secondly, take the case of stmmac as DSA master. Some switch tagging
protocols are based on 802.1Q VLANs (tag_sja1105.c), and as such,
tag_8021q.c uses vlan_vid_add() to work with VLAN-filtering DSA masters.
But also, when a DSA port becomes promiscuous (for example when it joins
a bridge), the DSA framework also makes the DSA master promiscuous.
Moreover, for every VLAN that a DSA switch sends to the CPU, DSA also
programs a VLAN filter on the DSA master, because if the DSA switch
uses a tail tag, then the hardware frame parser of the DSA master will
see those VLANs and might filter them out, for being unknown.
Due to the above 2 reasons, my belief is that the stmmac driver does not
get to choose to not accept vlan_vid_add() calls while IFF_PROMISC is
enabled, because the 2 are completely independent and there are code
paths in the network stack which directly lead to this situation
occurring, without the user's direct input.
In fact, my belief is that "VLAN promiscuous" mode should have never
been keyed on IFF_PROMISC in the first place, but rather, on the
NETIF_F_HW_VLAN_CTAG_FILTER feature flag which can be toggled by the
user through ethtool -k, when present in netdev->hw_features.
In the stmmac driver, NETIF_F_HW_VLAN_CTAG_FILTER is only present in
"features", making this feature "on [fixed]".
I have this belief because I am unaware of any definition of promiscuity
which implies having an effect on anything other than MAC DA (therefore
not VLAN). However, I seem to be rather alone in having this opinion,
looking back at the disagreements from this discussion:
https://lore.kernel.org/netdev/20201110153958.ci5ekor3o2ekg3ky@ipetronik.com/
In any case, to remove the vlan_vid_add() dependency on !IFF_PROMISC,
one would need to remove the check and see what fails. I guess the test
was there because of the way in which dwmac4_vlan_promisc_enable() is
implemented.
For context, the dwmac4 supports Perfect Filtering for a limited number
of VLANs - dwmac4_get_num_vlan(), priv->hw->num_vlan, with a fallback on
Hash Filtering - priv->dma_cap.vlhash - see stmmac_vlan_update(), also
visible in cat /sys/kernel/debug/stmmaceth/eth0/dma_cap | grep 'VLAN
Hash Filtering'.
The perfect filtering is based on MAC_VLAN_Tag_Filter/MAC_VLAN_Tag_Data
registers, accessed in the driver through dwmac4_write_vlan_filter().
The hash filtering is based on the MAC_VLAN_Hash_Table register, named
GMAC_VLAN_HASH_TABLE in the driver and accessed by dwmac4_update_vlan_hash().
The control bit for enabling hash filtering is GMAC_VLAN_VTHM
(MAC_VLAN_Tag_Ctrl bit VTHM: VLAN Tag Hash Table Match Enable).
Now, the description of dwmac4_vlan_promisc_enable() is that it iterates
through the driver's cache of perfect filter entries (hw->vlan_filter[i],
added by dwmac4_add_hw_vlan_rx_fltr()), and evicts them from hardware by
unsetting their GMAC_VLAN_TAG_DATA_VEN (MAC_VLAN_Tag_Data bit VEN - VLAN
Tag Enable) bit. Then it unsets the GMAC_VLAN_VTHM bit, which disables
hash matching.
This leaves the MAC, according to table "VLAN Match Status" from the
documentation, to always enter these data paths:
VID    | VLAN Perfect Filter | VTHM Bit | VLAN Hash Filter | Final VLAN Match
       | Match Result        |          | Match Result     | Status
-------|---------------------|----------|------------------|-----------------
VID!=0 | Fail                | 0        | don't care       | Pass
So, dwmac4_vlan_promisc_enable() does its job, but by unsetting
GMAC_VLAN_VTHM, it conflicts with the other code path which controls
this bit: dwmac4_update_vlan_hash(), called through stmmac_update_vlan_hash()
from stmmac_vlan_rx_add_vid() and from stmmac_vlan_rx_kill_vid().
This is, I guess, why dwmac4_add_hw_vlan_rx_fltr() is not allowed to run
after dwmac4_vlan_promisc_enable() has unset GMAC_VLAN_VTHM: because if
it did, then dwmac4_update_vlan_hash() would set GMAC_VLAN_VTHM again,
breaking the "VLAN promiscuity".
It turns out that dwmac4_vlan_promisc_enable() is way too complicated
for what needs to be done. The MAC_Packet_Filter register also has the
VTFE bit (VLAN Tag Filter Enable), which simply controls whether VLAN
tagged packets which don't match the filtering tables (either perfect or
hash) are dropped or not. At the moment, this driver unconditionally
sets GMAC_PACKET_FILTER_VTFE if NETIF_F_HW_VLAN_CTAG_FILTER was detected
through the priv->dma_cap.vlhash capability bits of the device, in
stmmac_dvr_probe().
I would suggest deleting the unnecessarily complex logic from
dwmac4_vlan_promisc_enable(), and simply unsetting GMAC_PACKET_FILTER_VTFE
when becoming IFF_PROMISC, which has the same effect of allowing packets
with any VLAN tags, but has the additional benefit of being able to run
concurrently with stmmac_vlan_rx_add_vid() and stmmac_vlan_rx_kill_vid().
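A sketch of the suggested approach, toggling only the VTFE bit from the
promiscuity handler; the register offset and bit value are assumed to
match dwmac4.h, and the surrounding driver code differs:

#include <linux/bits.h>
#include <linux/io.h>

#define GMAC_PACKET_FILTER	0x00000008	/* assumed to match dwmac4.h */
#define GMAC_PACKET_FILTER_VTFE	BIT(16)		/* VLAN Tag Filter Enable */

static void sketch_set_vlan_promisc(void __iomem *ioaddr, bool promisc)
{
	u32 value = readl(ioaddr + GMAC_PACKET_FILTER);

	/* With VTFE cleared, VLAN-tagged packets that miss the perfect/hash
	 * filters are no longer dropped; the filter tables stay intact. */
	if (promisc)
		value &= ~GMAC_PACKET_FILTER_VTFE;
	else
		value |= GMAC_PACKET_FILTER_VTFE;

	writel(value, ioaddr + GMAC_PACKET_FILTER);
}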
As much as I believe that the VTFE bit should have been exclusively
controlled by NETIF_F_HW_VLAN_CTAG_FILTER through ethtool, and not by
IFF_PROMISC, changing that is not a punctual fix to the problem, and it
would probably break the VFFQ feature added by the later commit
e0f9956a3862 ("net: stmmac: Add option for VLAN filter fail queue
enable"). From the commit description, VFFQ needs IFF_PROMISC=on and
VTFE=off in order to work (and this change respects that). But if VTFE
was changed to be controlled through ethtool -k, then a user-visible
change would have been introduced in Intel's scripts (a need to run
"ethtool -k eth0 rx-vlan-filter off" which did not exist before).
The patch was tested with this set of commands:
ip link set eth0 up
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.168.100.2/24 dev eth0.100 && ip link set eth0.100 up
ip link set eth0 promisc on
ip link add link eth0 name eth0.101 type vlan id 101
ip addr add 192.168.101.2/24 dev eth0.101 && ip link set eth0.101 up
ip link set eth0 promisc off
ping -c 5 192.168.100.1
ping -c 5 192.168.101.1
ip link set eth0 promisc on
ping -c 5 192.168.100.1
ping -c 5 192.168.101.1
ip link del eth0.100
ip link del eth0.101
# Wait for VLAN-tagged pings from the other end...
# Check with "tcpdump -i eth0 -e -n -p" and we should see them
ip link set eth0 promisc off
# Wait for VLAN-tagged pings from the other end...
# Check with "tcpdump -i eth0 -e -n -p" and we shouldn't see them
# anymore, but remove the "-p" argument from tcpdump and they're there.
Fixes: c89f44ff10fd ("net: stmmac: Add support for VLAN promiscuous mode")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit addresses a deadlock situation that can occur in certain
scenarios, such as when running data TP/ETP transfer and subscribing to
the error queue while receiving a net down event. The deadlock involves
locks in the following order:
3
j1939_session_list_lock -> active_session_list_lock
j1939_session_activate
...
j1939_sk_queue_activate_next -> sk_session_queue_lock
...
j1939_xtp_rx_eoma_one
2
j1939_sk_queue_drop_all -> sk_session_queue_lock
...
j1939_sk_netdev_event_netdown -> j1939_socks_lock
j1939_netdev_notify
1
j1939_sk_errqueue -> j1939_socks_lock
__j1939_session_cancel -> active_session_list_lock
j1939_tp_rxtimer
       CPU0                                       CPU1
       ----                                       ----
  lock(&priv->active_session_list_lock);
                                             lock(&jsk->sk_session_queue_lock);
                                             lock(&priv->active_session_list_lock);
  lock(&priv->j1939_socks_lock);
The solution implemented in this commit is to move the
j1939_sk_errqueue() call out of the active_session_list_lock context,
thus preventing the deadlock situation.
Reported-by: syzbot+ee1cd780f69483a8616b@syzkaller.appspotmail.com
Fixes: 5b9272e93f2e ("can: j1939: extend UAPI to notify about RX status")
Co-developed-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://lore.kernel.org/all/20230324130141.2132787-1-o.rempel@pengutronix.de
Cc: stable@vger.kernel.org
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
This fixes the following warning when compiled with GCC 12.2.0 and W=1.
net/core/dev_ioctl.c:475: warning: Function parameter or member 'data'
not described in 'dev_ioctl'
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use devm_clk_get_optional_enabled to simplify the code.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
I was a bit too optimistic in commit bf51d27704c9 ("tools: ynl: fix
get_mask utility routine"), not every mask we use is necessarily
coming from an enum of type "flags". We also allow flipping an
enum into flags on per-attribute basis. That's done by
the 'enum-as-flags' property of an attribute.
Restore this functionality, it's not currently used by any in-tree
family.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Other tests set up the connection fully on both ends before
communicating any data. Add a test which will queue up TLS
records to TCP before the TLS ULP is installed.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
While testing the tool I noticed that the u16 type is missing on payload
create. On code inspection it turned out that u64 is missing as well - add
them. The decoding of u16 is also missing, despite the fact that the
`NlAttr` class supports it - add it.
Signed-off-by: Michal Michalik <michal.michalik@intel.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sean Anderson says:
====================
net: sunhme: Probe/IRQ cleanups
Well, I've had these patches kicking around in my tree since last
October, so I guess I had better get around to posting them. This
series is mainly a cleanup/consolidation of the probe process, with
some interrupt changes as well. Some of these changes are SBUS- (AKA
SPARC-) specific, so this should really get some testing there as well
to ensure nothing breaks. I've CC'd a few SPARC mailing lists in hopes
that someone there can try this out. I also have an SBUS card I
ordered by mistake if anyone has a SPARC computer but lacks this card.
Changes in v4:
- Tweak variable order for yuletide
- Move uninitialized return to its own commit
- Use correct SBUS/PCI accessors
- Rework hme_version to set the default in pci/sbus_probe and override it (if
necessary) in common_probe
Changes in v3:
- Incorporate a fix from another series into this commit
Changes in v2:
- Move happy_meal_begin_auto_negotiation earlier and remove forward declaration
- Make some more includes common
- Clean up mac address init
- Inline error returns
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Most of the second half of the PCI/SBUS probe functions are the same.
Consolidate them into a common function.
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The err_out label used to have cleanup. Now that it just returns, inline it
everywhere.
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Clean up some oddities suggested during review.
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The mac address initialization is broadly the same between PCI and SBUS,
and one was clearly copied from the other. Consolidate them. We still have
to have some ifdefs because pci_(un)map_rom is only implemented for PCI,
and idprom is only implemented for SPARC.
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The PCI half of this driver was converted in commit 914d9b2711dd ("sunhme:
switch to devres"). Do the same for the SBUS half.
Signed-off-by: Sean Anderson <seanga2@gmail.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|