The Renesas RZ/G3E SMARC EVK uses the KSZ9131RNXC PHY. In the deep power
state, the PHY loses power, and on wakeup the RGMII delays are not
reconfigured, causing it to fail.
Replace the kszphy_resume() callback with ksz9131_resume() so that the
RGMII delays are reconfigured when the PHY exits the PM suspend state.
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://patch.msgid.link/20250711054029.48536-1-biju.das.jz@bp.renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Jordan Rife says:
====================
bpf: tcp: Exactly-once socket iteration
TCP socket iterators use iter->offset to track progress through a
bucket, which is a measure of the number of matching sockets from the
current bucket that have been seen or processed by the iterator. On
subsequent iterations, if the current bucket has unprocessed items, we
skip at least iter->offset matching items in the bucket before adding
any remaining items to the next batch. However, iter->offset isn't
always an accurate measure of "things already seen" when the underlying
bucket changes between reads, which can lead to repeated or skipped
sockets. Instead, this series remembers the cookies of the sockets we
haven't seen yet in the current bucket and resumes from the first cookie
in that list that we can find on the next iteration.
This is a continuation of the work started in [1]. This series largely
replicates the patterns applied to UDP socket iterators, applying them
instead to TCP socket iterators.
CHANGES
=======
v5 -> v6:
* In patch ten ("selftests/bpf: Create established sockets in socket
iterator tests"), use poll() to choose a socket that has a connection
ready to be accept()ed. Before, connect_to_server would set the
O_NONBLOCK flag on all listening sockets so that accept_from_one could
loop through them all and find the one that connect_to_addr_str
connected to. However, this is subtly buggy and could potentially lead
to test flakes, since the 3-way handshake isn't necessarily complete
when connect() returns, so it's possible that none of the accept()
calls succeeds. Use poll() instead to guarantee that the socket we
accept() from is ready, eliminating the need for the O_NONBLOCK flag
(Martin); see the sketch just below.
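As an illustration, the poll()-based approach might look like this
(a minimal sketch with assumed fd-array plumbing, not the actual
selftest code):

  #include <poll.h>
  #include <sys/socket.h>

  /* Wait until one of the listening sockets has a completed connection
   * queued, then accept() from that one. listen_fds and nfds (at most
   * 16 here) are assumed to come from the test setup.
   */
  static int accept_ready(int *listen_fds, int nfds)
  {
          struct pollfd pfds[16];
          int i;

          for (i = 0; i < nfds; i++) {
                  pfds[i].fd = listen_fds[i];
                  pfds[i].events = POLLIN;
          }
          /* POLLIN on a listening socket means accept() won't block. */
          if (poll(pfds, nfds, -1) <= 0)
                  return -1;
          for (i = 0; i < nfds; i++)
                  if (pfds[i].revents & POLLIN)
                          return accept(pfds[i].fd, NULL, NULL);
          return -1;
  }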
v4 -> v5:
* Move WARN_ON_ONCE before the `done` label in patch two ("bpf: tcp:
Make sure iter->batch always contains a full bucket snapshot")
(Martin).
* Remove unnecessary kfunc declaration in patch eleven ("selftests/bpf:
Create iter_tcp_destroy test program") (Martin).
* Make sure to close the socket fd at the end of `destroy` in patch
twelve ("selftests/bpf: Add tests for bucket resume logic in
established sockets") (Martin).
v3 -> v4:
* Drop braces around sk_nulls_for_each_from in patch five ("bpf: tcp:
Avoid socket skips and repeats during iteration") (Stanislav).
* Add a break after the TCP_SEQ_STATE_ESTABLISHED case in patch five
(Stanislav).
* Add an `if (sock_type == SOCK_STREAM)` check before assigning
TCP_LISTEN to skel->rodata->ss in patch eight ("selftests/bpf: Allow
for iteration over multiple states") to more clearly express the
intent that the option is only consumed for SOCK_STREAM tests
(Stanislav).
* Move the `i = 0` assignment into the for loop in patch ten
("selftests/bpf: Create established sockets in socket iterator
tests") (Stanislav).
v2 -> v3:
* Unroll the loop inside bpf_iter_tcp_batch to make the logic easier to
follow in patch two ("bpf: tcp: Make sure iter->batch always contains
a full bucket snapshot"). This gets rid of the `resizes` variable from
v2 and eliminates the extra conditional that checks how many batch
resize attempts have occurred so far (Stanislav).
Note: This changes the behavior slightly. Before, in the case that
the second call to tcp_seek_last_pos (and later bpf_iter_tcp_resume)
advances to a new bucket, which may happen if the current bucket is
emptied after releasing its lock, the `resizes` "budget" would be
reset, the net effect being that we would try a batch resize with
GFP_USER at most once per bucket. Now, we try to resize the batch
with GFP_USER at most once per call, making it slightly more likely
that we hit the GFP_NOWAIT scenario. However, this edge case
should be rare in practice anyway, and the new behavior is more or
less consistent with the original retry logic, so avoid the loop and
prefer code clarity.
* Move the call to bpf_iter_tcp_put_batch out of
bpf_iter_tcp_realloc_batch and call it directly before invoking
bpf_iter_tcp_realloc_batch with GFP_USER inside bpf_iter_tcp_batch.
/Don't/ call it before invoking bpf_iter_tcp_realloc_batch the second
time while we hold the lock with GFP_NOWAIT. This avoids a conditional
inside bpf_iter_tcp_realloc_batch from v2 that only calls
bpf_iter_tcp_put_batch if flags != GFP_NOWAIT and is a bit more
explicit (Stanislav).
* Adjust patch five ("bpf: tcp: Avoid socket skips and repeats during
iteration") to fit with the new logic in patch two.
v1 -> v2:
* In patch five ("bpf: tcp: Avoid socket skips and repeats during
iteration"), remove unnecessary bucket bounds checks in
bpf_iter_tcp_resume. In either case, if st->bucket is outside the
current table's range then bpf_iter_tcp_resume_* calls *_get_first
which immediately returns NULL anyway and the logic will fall through.
(Martin)
* Add a check at the top of bpf_iter_tcp_resume_listening and
bpf_iter_tcp_resume_established to see if we're done with the current
bucket and advance it immediately instead of wasting time finding the
first matching socket in that bucket with
(listening|established)_get_first. In v1, we originally discussed
adding logic to advance the bucket in bpf_iter_tcp_seq_next and
bpf_iter_tcp_seq_stop, but after trying this the logic seemed harder
to track. Overall, keeping everything inside bpf_iter_tcp_resume_*
seemed a bit clearer. (Martin)
* Instead of using a timeout in the last patch ("selftests/bpf: Add
tests for bucket resume logic in established sockets") to wait for
sockets to leave the ehash table after calling close(), use
bpf_sock_destroy to deterministically destroy and remove them. This
introduces one more patch ("selftests/bpf: Create iter_tcp_destroy
test program") to create the iterator program that destroys a selected
socket. Drive this through a destroy() function in the last patch
which, just like close(), accepts a socket file descriptor. (Martin)
* Introduce one more patch ("selftests/bpf: Allow for iteration over
multiple states") to fix a latent bug in iter_tcp_soreuse where the
sk->sk_state != TCP_LISTEN check was ignored. Add the "ss" variable to
allow test code to configure which socket states to allow.
[1]: https://lore.kernel.org/bpf/20250502161528.264630-1-jordan@jrife.io/
====================
Link: https://patch.msgid.link/20250714180919.127192-1-jordan@jrife.io
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
Replicate the set of test cases used for UDP socket iterators to test
similar scenarios for TCP established sockets.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Prepare for bucket resume tests for established TCP sockets by creating
a program to immediately destroy and remove sockets from the TCP ehash
table, since close() is not deterministic.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Prepare for bucket resume tests for established TCP sockets by creating
established sockets. Collect socket fds from connect() and accept()
sides and pass them to test cases.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Prepare for bucket resume tests for established TCP sockets by making
the number of ehash buckets configurable. Subsequent patches force all
established sockets into the same bucket by setting ehash_buckets to
one.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Add parentheses around the loopback address check to fix up its logic,
and make the socket state filter configurable for the TCP socket
iterators. Iterators can skip the socket state check by setting ss to 0,
as in the sketch below.
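A minimal sketch of such a filter in the iterator program (names are
illustrative; the usual vmlinux.h/bpf_helpers.h includes are assumed):

  const volatile int ss = 0;      /* socket state to match; 0 = any */

  SEC("iter/tcp")
  int iter_tcp(struct bpf_iter__tcp *ctx)
  {
          struct sock_common *sk_common = ctx->sk_common;

          if (!sk_common)
                  return 0;
          /* Skip the state check entirely when ss is 0. */
          if (ss && sk_common->skc_state != ss)
                  return 0;
          /* ... dump the socket ... */
          return 0;
  }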
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Prepare to test TCP socket iteration over both listening and established
sockets by allowing the BPF iterator programs to skip the port check.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Replicate the set of test cases used for UDP socket iterators to test
similar scenarios for TCP listening sockets.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Replace the offset-based approach for tracking progress through a bucket
in the TCP table with one based on socket cookies. Remember the cookies
of unprocessed sockets from the last batch and use this list to
pick up where we left off or, in the case that the next socket
disappears between reads, find the first socket after that point that
still exists in the bucket and resume from there.
This approach guarantees that all sockets that existed when iteration
began and continue to exist throughout will be visited exactly once.
Sockets that are added to the table during iteration may or may not be
seen, but if they are they will be seen exactly once.
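The resume rule itself, as a standalone sketch (the arrays and the
sock_cookie() helper are illustrative stand-ins for the kernel's
hash-bucket walk and cookie accessor):

  #include <stddef.h>
  #include <stdint.h>

  struct sock;                            /* stand-in for the kernel type */
  uint64_t sock_cookie(struct sock *sk);  /* hypothetical accessor */

  /* Visit the remembered (not-yet-seen) cookies in order and resume
   * from the first one whose socket still exists in the bucket.
   */
  static struct sock *resume_point(struct sock **bucket, int nsk,
                                   const uint64_t *cookies, int ncookies)
  {
          for (int i = 0; i < ncookies; i++)
                  for (int j = 0; j < nsk; j++)
                          if (sock_cookie(bucket[j]) == cookies[i])
                                  return bucket[j];
          return NULL;    /* all remembered sockets vanished */
  }

A socket that stays in the bucket keeps its cookie, so it is matched at
most once; sockets that disappear simply fall through to the next
remembered cookie.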
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Prepare for the next patch that tracks cookies between iterations by
converting struct sock **batch to union bpf_tcp_iter_batch_item *batch
inside struct bpf_tcp_iter_state.
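Presumably the batch item becomes a two-way overlay along these lines
(mirroring the UDP iterator; shown only to make the upcoming cookie
tracking concrete):

  union bpf_tcp_iter_batch_item {
          struct sock *sk;
          __u64 cookie;
  };

While a batch is being processed a slot holds the socket pointer; once
the socket has been released, the same slot can be reused for its
cookie without growing the iterator state.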
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Get rid of the st_bucket_done field to simplify TCP iterator state and
logic. Before, st_bucket_done could be false if bpf_iter_tcp_batch
returned a partial batch; however, with the last patch ("bpf: tcp: Make
sure iter->batch always contains a full bucket snapshot"),
st_bucket_done == true is equivalent to iter->cur_sk == iter->end_sk.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Require that iter->batch always contains a full bucket snapshot. This
invariant is important to avoid skipping or repeating sockets during
iteration when combined with the next few patches. Before, there were
two cases where a call to bpf_iter_tcp_batch may only capture part of a
bucket:
1. When bpf_iter_tcp_realloc_batch() returns -ENOMEM.
2. When more sockets are added to the bucket while calling
bpf_iter_tcp_realloc_batch(), making the updated batch size
insufficient.
In cases where the batch size only covers part of a bucket, it is
possible to forget which sockets were already visited, especially if we
have to process a bucket in more than two batches. This forces us to
choose between repeating or skipping sockets, so don't allow this:
1. Stop iteration and propagate -ENOMEM up to userspace if reallocation
fails instead of continuing with a partial batch.
2. Try bpf_iter_tcp_realloc_batch() with GFP_USER just as before, but if
we still aren't able to capture the full bucket, call
bpf_iter_tcp_realloc_batch() again while holding the bucket lock to
guarantee the bucket does not change. On the second attempt, use
GFP_NOWAIT, since we hold onto the spin lock; the overall shape is
sketched below.
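In outline, the batching then looks something like this (a simplified
sketch with hypothetical helper names, not the actual
bpf_iter_tcp_batch()):

  /* count_bucket(), realloc_batch(), lock_bucket()/unlock_bucket() and
   * snapshot_bucket() are illustrative stand-ins.
   */
  static int fill_batch(struct iter_state *iter, struct bucket *b)
  {
          int err;

          if (count_bucket(b) > iter->capacity) {
                  /* First attempt: may sleep; bucket lock not held. */
                  err = realloc_batch(iter, count_bucket(b), GFP_USER);
                  if (err)
                          return err;     /* -ENOMEM ends iteration */
          }
          lock_bucket(b);
          if (count_bucket(b) > iter->capacity) {
                  /* The bucket grew in the meantime. Retry under the
                   * lock so it cannot change again, but don't sleep. */
                  err = realloc_batch(iter, count_bucket(b), GFP_NOWAIT);
                  if (err) {
                          unlock_bucket(b);
                          return err;
                  }
          }
          snapshot_bucket(iter, b);       /* guaranteed to fit now */
          unlock_bucket(b);
          return 0;
  }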
I did some manual testing to exercise the code paths where GFP_NOWAIT is
used and where ERR_PTR(err) is returned. I used the realloc test cases
included later in this series to trigger a scenario where a realloc
happens inside bpf_iter_tcp_batch and made a small code tweak to force
the first realloc attempt to allocate a too-small batch, thus requiring
another attempt with GFP_NOWAIT. Some printks showed both reallocs with
the tests passing:
Jun 27 00:00:53 crow kernel: again GFP_USER
Jun 27 00:00:53 crow kernel: again GFP_NOWAIT
Jun 27 00:00:53 crow kernel: again GFP_USER
Jun 27 00:00:53 crow kernel: again GFP_NOWAIT
With this setup, I also forced each of the bpf_iter_tcp_realloc_batch
calls to return -ENOMEM to ensure that iteration ends and that the
read() in userspace fails.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
Prepare for the next patch which needs to be able to choose either
GFP_USER or GFP_NOWAIT for calls to bpf_iter_tcp_realloc_batch.
Signed-off-by: Jordan Rife <jordan@jrife.io>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
|
|
The RDMA driver needs to map its own MMIO regions for the sake of
performance, meaning the IDPF needs to avoid mapping portions of the BAR
space. However, to be HW agnostic, the IDPF cannot assume where these
regions are and must avoid mapping hard-coded regions as much as
possible.
The IDPF maps the bare minimum to load and communicate with the
control plane, i.e., the mailbox registers and the reset state
registers. Because of how and when mailbox register offsets are
initialized, it is easier to adjust the existing defines to be relative
to the mailbox region starting address. Use a specific mailbox register
write function that uses these relative offsets. The reset state
register addresses are calculated the same way as for other registers,
described below.
The IDPF then calls a new virtchnl op to fetch a list of MMIO regions
that it should map. The addresses for the registers in these regions
are calculated by determining which region the register resides in,
making the offset relative to that region's start, and then adding that
relative offset to the region's mapped address.
If the new virtchnl op is not supported, the IDPF will fall back to
mapping the whole BAR. However, it will still treat the space outside
the mailbox and reset state registers as separate regions. This way,
the same logic can be used in both cases to access the MMIO space, as
sketched below.
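The address calculation described above amounts to something like this
(illustrative types and names, not the actual idpf structures):

  struct mmio_region {
          resource_size_t start;          /* BAR offset of region start */
          resource_size_t size;
          void __iomem *vaddr;            /* mapped address of region */
  };

  static void __iomem *reg_addr(const struct mmio_region *regions,
                                int nregions, resource_size_t reg_off)
  {
          int i;

          for (i = 0; i < nregions; i++) {
                  const struct mmio_region *r = &regions[i];

                  /* Find the region containing the register, then make
                   * the offset relative to that region's start. */
                  if (reg_off >= r->start && reg_off < r->start + r->size)
                          return r->vaddr + (reg_off - r->start);
          }
          return NULL;    /* register lies outside every mapped region */
  }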
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
The only event an RDMA vport aux driver cares about right now is an MTU
change on its underlying vport. Implement and plumb the handler to
signal the pre and post MTU change events to the RDMA vport aux driver.
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Implement the idpf_idc_request_reset and idpf_idc_rdma_vc_send_sync
callbacks for the RDMA core auxiliary driver to issue reset events to
the idpf and to send (synchronous) virtchnl messages to the control
plane, respectively.
Implement and plumb the reset handler for the opposite flow as well,
i.e. when the idpf is resetting and needs to notify the RDMA core
auxiliary driver.
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Implement the functions to create, initialize, and destroy an RDMA vport
auxiliary device. The vport aux dev creation is dependent on the
core aux device to call idpf_idc_vport_dev_ctrl to signal that it is
ready for vport aux devices. Implement that core callback to either
create and initialize the vport aux dev or deinitialize it.
RDMA vport aux dev creation is also dependent on the control plane to
tell us the vport is RDMA enabled. Add a flag in the create vport
message to signal individual vport RDMA capabilities.
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Add the initial idpf_idc.c file with the functions to kick off the IDC
initialization, create and initialize a core RDMA auxiliary device, and
destroy said device.
The RDMA core has a dependency on the vports being created by the
control plane before it can be initialized. Therefore, once all the
vports are up after a hard reset (either during driver load or a
function level reset), the core RDMA device info will be created. It is
populated with the function type (as distinguished by the IDC
initialization function pointer), the core idc_ops function pointers
(just stubs for now), the reserved RDMA MSIX table, and various other
info the core RDMA auxiliary driver will need. It is then plugged onto
the bus.
During a function level reset or driver unload, the device will be
unplugged from the bus and destroyed.
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
Fetch the number of reserved RDMA vectors from the control plane.
Adjust the number of reserved LAN vectors if necessary. Adjust the
minimum number of vectors the OS should reserve to include RDMA; and
fail if the OS cannot reserve enough vectors for the minimum number of
LAN and RDMA vectors required. Create a separate MSIX table for the
reserved RDMA vectors, which will simply be handed off to the RDMA core
device to use as it sees fit.
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
We only need to support versions 1, 5 and 7.
Remove versions 2, 3, 4 and 6.
Reviewed-by: Pagadala Yesu Anjaneyulu <pagadala.yesu.anjaneyulu@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.10d91f675505.Idd3a6da568261ee738918f290168a2ddaa87196b@changeid
|
|
This is not used in any of our devices. Remove it.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.89156be9bc7f.I5ff5c1055eaf4fef9bd73233ea4d95504634ceed@changeid
|
|
These are not used in any of our devices. Remove them.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.dd784443be53.I4ff3b2392294f5df2625a71e2deee3364e9708f6@changeid
|
|
iwlmld was planned to be used for HR/GF, which has versions 5/6,
but it was decided in the end to use iwlmvm for HR/GF, so iwlmld only
needs to support version 8.
Remove versions 5 and 6 support.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.9c64bfbb16cb.I109bee4d4bf455cbffbb8d2340023338bcab886d@changeid
|
|
Revert "wifi: iwlwifi: mld: allow EMLSR with 2.4 GHz when BT is ON"
Due to a hw bug, this feature won't be enabled. Revert its
implementation.
This reverts commit 37808a3788fd ("wifi: iwlwifi: mld: allow EMLSR with
2.4 GHz when BT is ON")
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.57755ac3f39d.I63ae0ee3e6cdc9b11175ad15927aaad3b8f8f47a@changeid
|
|
Revert "wifi: iwlwifi: mld: add kunit test for emlsr with bt on"
Due to a hw bug, this feature won't be enabled. Revert its tests.
This reverts commit f7cc80b871ee ("wifi: iwlwifi: mld: add kunit test
for emlsr with bt on")
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.5fdf77497ad2.I1160f1dcff734cb42baa8fbf8aac121a1a24a4c5@changeid
|
|
The firmware provides the station ID; use it, since it makes our lives
easier. There is no need to assume we have a single BSS vif and look up
the ID of the station to whom the OMI was sent.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.7d2cd878855f.I8625ebb2c4e1fb484aafd16a07549f2eeb506e08@changeid
|
|
iwlmld was planned to be used for HR/GF, which has version 4,
but it was decided in the end to use iwlmvm for HR/GF, so iwlmld only
needs to support version 5.
Remove version 4 support.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.faeb1e6bac2a.I1a29b16f59b67c103d1f91dedee27e04cd7fdfdd@changeid
|
|
iwl_reduce_tx_power_cmd is not used anywhere; remove it.
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.313285673570.I87c646f8b9b83d63c7c6c293cc5d454c32d852c2@changeid
|
|
iwlmld was planned to be used for HR, which has version 9,
but it was decided in the end to use iwlmvm for HR, so iwlmld only
needs to support version 10.
Remove version 9 support.
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.aeeb617abfae.I05101972506180644c42be5096c1b2afa36c625a@changeid
|
|
These versions are no longer used in any of our devices. Remove them.
Reviewed-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.05fabbda0a2f.Id55eeb4f337eb52163621ca202d97a3539bf3f53@changeid
|
|
Implement a dump handler in the iwl_mvm operation mode to collect a
firmware dump upon a trigger from the trans layer.
Signed-off-by: Pagadala Yesu Anjaneyulu <pagadala.yesu.anjaneyulu@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.366fc31fd551.I976cb17edd85a461043c7a4c7f4895bfaec9174a@changeid
|
|
When connected to an AP, the PHY will typically be tuned to
a higher bandwidth than the beacons are transmitted on, as
they are normally only transmitted on 20 MHz. This can mean
that another STA is simultaneously transmitting on another
channel of the higher bandwidth, and apparently this energy
may be taken into account by the PHY, resulting in elevated
energy readings.
To work around this, track the firmware's corrected beacon
energy data and replace the RSSI in beacons with that. The
replacement happens for all beacons received in the context
of the current MAC or link (depending on FW version), in
which case the filters will drop everything else. For a scan,
which only tunes to 20 MHz channels, the MAC/link ID will be
one that isn't found (the AUX ID 4), and no correction will
be done (nor is it needed).
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.324bfe7027ff.I160f947e7aab30e0110a7019ed46186e57c3de14@changeid
|
|
Since the iwlmvm driver now only supports pre-MLO devices,
we no longer need to maintain an extra explicit link ID;
valid MAC IDs and link IDs are both in the range 0-3 and
the driver always has a 1:1 MAC/link correspondence. Thus,
simply use the MAC ID as the link ID as well.
This simplifies some further work because on RX the ID is
given, but there is some confusion about which versions of
the firmware report the MAC ID and which report the link ID.
While at it, clarify iwl_mvm_handle_missed_beacons_notif()
code a bit so it doesn't look like an invalid vif pointer
is being used.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.005aa5fe34fe.Ib0c1187453f46ce49dc0f9f58907ee21f5b52634@changeid
|
|
EHT-capable devices will only use iwlmld, so we can remove the EMLSR
code from iwlmvm.
As part of the removal, remove the IWL_MVM_ESR_EXIT_FAIL_ENTRY EMLSR
state.
Signed-off-by: Pagadala Yesu Anjaneyulu <pagadala.yesu.anjaneyulu@intel.com>
Signed-off-by: Miri Korenblit <miriam.rachel.korenblit@intel.com>
Link: https://patch.msgid.link/20250711183056.a69dc9c6ba49.I7f9fbc1f954b4c118625a4b8d51c72f3c84936da@changeid
|
|
This reverts commit 465b9ee0ee7bc268d7f261356afd6c4262e48d82.
Such notifications fit better into core or nfnetlink_hook code,
following the NFNL_MSG_HOOK_GET message format.
Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
It's a kernel implementation detail, at least at this time:
we can later decide to revert this patch if there is a compelling
reason, but then we should also remove the ifdef that prevents exposure
of the ip_conntrack_status enum's IPS_NAT_CLASH value in the uapi
header.
Clash entries are not included in dumps (true for both the old /proc
interface and ctnetlink) either. So, for now, exclude the clash bit
when dumping, as sketched below.
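The gist of the exclusion, as a sketch (hypothetical context; only the
mask matters, the surrounding trace code is not shown):

  /* Report the conntrack status with the kernel-internal clash bit
   * masked out.
   */
  status = READ_ONCE(ct->status) & ~IPS_NAT_CLASH;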
Fixes: 7e5c6aa67e6f ("netfilter: nf_tables: add packets conntrack state to debug trace info")
Link: https://lore.kernel.org/netfilter-devel/aGwf3dCggwBlRKKC@strlen.de/
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The selftest doesn't cover this error path:
  scratch = *raw_cpu_ptr(m->scratch);
  if (unlikely(!scratch)) { // here
Cover this too.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The previous patch added a new clash resolution test case.
Also use it during the conntrack resize stress test, in addition
to the ICMP ping flood.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Add a dedicated test to exercise the conntrack clash resolution path.
The test program emits 128 identical UDP packets in parallel (see the
sketch below), then reads back the replies from a socat echo server.
Also check (via conntrack -S) that the clash path was hit at least once.
Due to the racy nature of the test, it's possible that, despite the
threaded sender, all packets were processed in order or on the same CPU;
emit a SKIP warning in this case.
Two tests are added:
- one to test the simpler, non-NAT case
- one to exercise clash resolution where packets
  might have different NAT transformations attached to them
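A minimal sketch of the sender side (address and port are illustrative;
this is not the selftest source). All threads share one socket, so every
datagram carries the same 4-tuple, and the concurrent sends race in
conntrack before the entry is confirmed:

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <pthread.h>
  #include <sys/socket.h>

  #define NPKTS 128

  static int fd;                          /* shared: same source port */
  static struct sockaddr_in dst;

  static void *send_one(void *arg)
  {
          /* Identical tuple and payload from every thread. */
          sendto(fd, "ping", 4, 0, (struct sockaddr *)&dst, sizeof(dst));
          return NULL;
  }

  int main(void)
  {
          pthread_t t[NPKTS];
          int i;

          fd = socket(AF_INET, SOCK_DGRAM, 0);
          dst.sin_family = AF_INET;
          dst.sin_port = htons(12345);    /* socat echo server port */
          inet_pton(AF_INET, "10.0.0.1", &dst.sin_addr);
          for (i = 0; i < NPKTS; i++)
                  pthread_create(&t[i], NULL, send_one, NULL);
          for (i = 0; i < NPKTS; i++)
                  pthread_join(t[i], NULL);
          return 0;
  }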
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Extend the resize test:
- continuously dump table both via /proc and ctnetlink interfaces while
table is resized in a loop.
- if socat is available, send UDP packets in addition to ping requests.
- increase/decrease the ICMP and UDP timeouts while resizes are happening.
This makes sure we also exercise the 'ct has expired' check that happens
on conntrack lookup.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Running lwt_dst_cache_ref_loop.sh in selftests with KASAN triggers
the splat below [0].
rpl_do_srh_inline() fetches ipv6_hdr(skb) and accesses it after
skb_cow_head(), which is illegal because the header may have been freed
by then.
Let's fix it by making oldhdr a local struct instead of a pointer, as
sketched below.
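The shape of the fix, as a sketch of the pattern (not the exact diff):
copy the header before skb_cow_head() can reallocate the head, instead
of holding a pointer into the old skb data.

  struct ipv6hdr oldhdr;
  int err;

  oldhdr = *ipv6_hdr(skb);                /* copy, not a pointer */
  err = skb_cow_head(skb, hdrlen);        /* may reallocate skb head */
  if (unlikely(err))
          return err;
  /* From here on, use &oldhdr; any ipv6_hdr(skb) pointer taken before
   * skb_cow_head() may be dangling. */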
[0]:
[root@fedora net]# ./lwt_dst_cache_ref_loop.sh
...
TEST: rpl (input)
[ 57.631529] ==================================================================
BUG: KASAN: slab-use-after-free in rpl_do_srh_inline.isra.0 (net/ipv6/rpl_iptunnel.c:174)
Read of size 40 at addr ffff888122bf96d8 by task ping6/1543
CPU: 50 UID: 0 PID: 1543 Comm: ping6 Not tainted 6.16.0-rc5-01302-gfadd1e6231b1 #23 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
<IRQ>
dump_stack_lvl (lib/dump_stack.c:122)
print_report (mm/kasan/report.c:409 mm/kasan/report.c:521)
kasan_report (mm/kasan/report.c:221 mm/kasan/report.c:636)
kasan_check_range (mm/kasan/generic.c:175 (discriminator 1) mm/kasan/generic.c:189 (discriminator 1))
__asan_memmove (mm/kasan/shadow.c:94 (discriminator 2))
rpl_do_srh_inline.isra.0 (net/ipv6/rpl_iptunnel.c:174)
rpl_input (net/ipv6/rpl_iptunnel.c:201 net/ipv6/rpl_iptunnel.c:282)
lwtunnel_input (net/core/lwtunnel.c:459)
ipv6_rcv (./include/net/dst.h:471 (discriminator 1) ./include/net/dst.h:469 (discriminator 1) net/ipv6/ip6_input.c:79 (discriminator 1) ./include/linux/netfilter.h:317 (discriminator 1) ./include/linux/netfilter.h:311 (discriminator 1) net/ipv6/ip6_input.c:311 (discriminator 1))
__netif_receive_skb_one_core (net/core/dev.c:5967)
process_backlog (./include/linux/rcupdate.h:869 net/core/dev.c:6440)
__napi_poll.constprop.0 (net/core/dev.c:7452)
net_rx_action (net/core/dev.c:7518 net/core/dev.c:7643)
handle_softirqs (kernel/softirq.c:579)
do_softirq (kernel/softirq.c:480 (discriminator 20))
</IRQ>
<TASK>
__local_bh_enable_ip (kernel/softirq.c:407)
__dev_queue_xmit (net/core/dev.c:4740)
ip6_finish_output2 (./include/linux/netdevice.h:3358 ./include/net/neighbour.h:526 ./include/net/neighbour.h:540 net/ipv6/ip6_output.c:141)
ip6_finish_output (net/ipv6/ip6_output.c:215 net/ipv6/ip6_output.c:226)
ip6_output (./include/linux/netfilter.h:306 net/ipv6/ip6_output.c:248)
ip6_send_skb (net/ipv6/ip6_output.c:1983)
rawv6_sendmsg (net/ipv6/raw.c:588 net/ipv6/raw.c:918)
__sys_sendto (net/socket.c:714 (discriminator 1) net/socket.c:729 (discriminator 1) net/socket.c:2228 (discriminator 1))
__x64_sys_sendto (net/socket.c:2231)
do_syscall_64 (arch/x86/entry/syscall_64.c:63 (discriminator 1) arch/x86/entry/syscall_64.c:94 (discriminator 1))
entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
RIP: 0033:0x7f68cffb2a06
Code: 5d e8 41 8b 93 08 03 00 00 59 5e 48 83 f8 fc 75 19 83 e2 39 83 fa 08 75 11 e8 26 ff ff ff 66 0f 1f 44 00 00 48 8b 45 10 0f 05 <48> 8b 5d f8 c9 c3 0f 1f 40 00 f3 0f 1e fa 55 48 89 e5 48 83 ec 08
RSP: 002b:00007ffefb7c53d0 EFLAGS: 00000202 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 0000564cd69f10a0 RCX: 00007f68cffb2a06
RDX: 0000000000000040 RSI: 0000564cd69f10a4 RDI: 0000000000000003
RBP: 00007ffefb7c53f0 R08: 0000564cd6a032ac R09: 000000000000001c
R10: 0000000000000000 R11: 0000000000000202 R12: 0000564cd69f10a4
R13: 0000000000000040 R14: 00007ffefb7c66e0 R15: 0000564cd69f10a0
</TASK>
Allocated by task 1543:
kasan_save_stack (mm/kasan/common.c:48)
kasan_save_track (mm/kasan/common.c:60 (discriminator 1) mm/kasan/common.c:69 (discriminator 1))
__kasan_slab_alloc (mm/kasan/common.c:319 mm/kasan/common.c:345)
kmem_cache_alloc_node_noprof (./include/linux/kasan.h:250 mm/slub.c:4148 mm/slub.c:4197 mm/slub.c:4249)
kmalloc_reserve (net/core/skbuff.c:581 (discriminator 88))
__alloc_skb (net/core/skbuff.c:669)
__ip6_append_data (net/ipv6/ip6_output.c:1672 (discriminator 1))
ip6_append_data (net/ipv6/ip6_output.c:1859)
rawv6_sendmsg (net/ipv6/raw.c:911)
__sys_sendto (net/socket.c:714 (discriminator 1) net/socket.c:729 (discriminator 1) net/socket.c:2228 (discriminator 1))
__x64_sys_sendto (net/socket.c:2231)
do_syscall_64 (arch/x86/entry/syscall_64.c:63 (discriminator 1) arch/x86/entry/syscall_64.c:94 (discriminator 1))
entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Freed by task 1543:
kasan_save_stack (mm/kasan/common.c:48)
kasan_save_track (mm/kasan/common.c:60 (discriminator 1) mm/kasan/common.c:69 (discriminator 1))
kasan_save_free_info (mm/kasan/generic.c:579 (discriminator 1))
__kasan_slab_free (mm/kasan/common.c:271)
kmem_cache_free (mm/slub.c:4643 (discriminator 3) mm/slub.c:4745 (discriminator 3))
pskb_expand_head (net/core/skbuff.c:2274)
rpl_do_srh_inline.isra.0 (net/ipv6/rpl_iptunnel.c:158 (discriminator 1))
rpl_input (net/ipv6/rpl_iptunnel.c:201 net/ipv6/rpl_iptunnel.c:282)
lwtunnel_input (net/core/lwtunnel.c:459)
ipv6_rcv (./include/net/dst.h:471 (discriminator 1) ./include/net/dst.h:469 (discriminator 1) net/ipv6/ip6_input.c:79 (discriminator 1) ./include/linux/netfilter.h:317 (discriminator 1) ./include/linux/netfilter.h:311 (discriminator 1) net/ipv6/ip6_input.c:311 (discriminator 1))
__netif_receive_skb_one_core (net/core/dev.c:5967)
process_backlog (./include/linux/rcupdate.h:869 net/core/dev.c:6440)
__napi_poll.constprop.0 (net/core/dev.c:7452)
net_rx_action (net/core/dev.c:7518 net/core/dev.c:7643)
handle_softirqs (kernel/softirq.c:579)
do_softirq (kernel/softirq.c:480 (discriminator 20))
__local_bh_enable_ip (kernel/softirq.c:407)
__dev_queue_xmit (net/core/dev.c:4740)
ip6_finish_output2 (./include/linux/netdevice.h:3358 ./include/net/neighbour.h:526 ./include/net/neighbour.h:540 net/ipv6/ip6_output.c:141)
ip6_finish_output (net/ipv6/ip6_output.c:215 net/ipv6/ip6_output.c:226)
ip6_output (./include/linux/netfilter.h:306 net/ipv6/ip6_output.c:248)
ip6_send_skb (net/ipv6/ip6_output.c:1983)
rawv6_sendmsg (net/ipv6/raw.c:588 net/ipv6/raw.c:918)
__sys_sendto (net/socket.c:714 (discriminator 1) net/socket.c:729 (discriminator 1) net/socket.c:2228 (discriminator 1))
__x64_sys_sendto (net/socket.c:2231)
do_syscall_64 (arch/x86/entry/syscall_64.c:63 (discriminator 1) arch/x86/entry/syscall_64.c:94 (discriminator 1))
entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
The buggy address belongs to the object at ffff888122bf96c0
which belongs to the cache skbuff_small_head of size 704
The buggy address is located 24 bytes inside of
freed 704-byte region [ffff888122bf96c0, ffff888122bf9980)
The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x122bf8
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x200000000000040(head|node=0|zone=2)
page_type: f5(slab)
raw: 0200000000000040 ffff888101fc0a00 ffffea000464dc00 0000000000000002
raw: 0000000000000000 0000000080270027 00000000f5000000 0000000000000000
head: 0200000000000040 ffff888101fc0a00 ffffea000464dc00 0000000000000002
head: 0000000000000000 0000000080270027 00000000f5000000 0000000000000000
head: 0200000000000003 ffffea00048afe01 00000000ffffffff 00000000ffffffff
head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff888122bf9580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888122bf9600: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff888122bf9680: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
^
ffff888122bf9700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888122bf9780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
Fixes: a7a29f9c361f8 ("net: ipv6: add rpl sr tunnel")
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We default to raising an exception when unknown attrs are found
to make sure those are noticed during development.
When the YNL CLI is "installed" and used by sysadmins, erroring out
is not going to be helpful. In that case, it's far more likely that the
user space is older than the kernel than that some attr is misdefined
or missing.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The 'struct regmap_config' instances are not modified in these drivers.
They can be statically defined instead of allocated and populated at
run-time, as sketched below.
The main benefits are:
- it saves some memory at runtime
- the structures can be declared as 'const', which is always better for
  structures that hold some function pointers
- the code is less verbose
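The pattern, as a sketch (field values are illustrative, not taken from
these drivers):

  /* Before: allocated and filled in at probe time. */
  struct regmap_config *cfg;

  cfg = devm_kzalloc(dev, sizeof(*cfg), GFP_KERNEL);
  if (!cfg)
          return -ENOMEM;
  cfg->reg_bits = 8;
  cfg->val_bits = 16;

  /* After: a single read-only, file-scope definition. */
  static const struct regmap_config phy_regmap_config = {
          .reg_bits = 8,
          .val_bits = 16,
  };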
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Daniel Golle <daniel@makrotopia.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Fixes for a few clk drivers and bindings:
- Add a missing property to the Mediatek MT8188 clk binding to
keep binding checks happy
- Avoid an OOB by setting the correct number of parents in
dispmix_csr_clk_dev_data
- Allocate clk_hw structs early in probe to avoid an ordering
issue where clk_parent_data points to an unallocated clk_hw
when the child clk is registered before the parent clk in the
SCMI clk driver
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
dt-bindings: clock: mediatek: Add #reset-cells property for MT8188
clk: imx: Fix an out-of-bounds access in dispmix_csr_clk_dev_data
clk: scmi: Handle case where child clocks are initialized before their parents
|
|
Merge tag 'x86_urgent_for_v6.16_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Borislav Petkov:
- Update Kirill's email address
- Allow hugetlb PMD sharing only on 64-bit as it doesn't make a whole
lotta sense on 32-bit
- Add fixes for a misconfigured AMD Zen2 client which wasn't even
supposed to run Linux
* tag 'x86_urgent_for_v6.16_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
MAINTAINERS: Update Kirill Shutemov's email address for TDX
x86/mm: Disable hugetlb page table sharing on 32-bit
x86/CPU/AMD: Disable INVLPGB on Zen2
x86/rdrand: Disable RDSEED on AMD Cyan Skillfish
|
|
Merge tag 'irq_urgent_for_v6.16_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Borislav Petkov:
- Fix a case of recursive locking in the MSI code
- Fix a randconfig build failure in armada-370-xp irqchip
* tag 'irq_urgent_for_v6.16_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/irq-msi-lib: Fix build with PCI disabled
PCI/MSI: Prevent recursive locking in pci_msix_write_tph_tag()
|
|
Merge tag 'perf_urgent_for_v6.16_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fix from Borislav Petkov:
- Prevent perf_sigtrap() from observing an exiting task and warning
about it
* tag 'perf_urgent_for_v6.16_rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Fix WARN in perf_sigtrap()
|
|
Extend the process_unknown handling to enum values and flags.
Tested by removing entries from rt-link.yaml and rt-neigh.yaml:
./tools/net/ynl/pyynl/cli.py --family rt-link --dump getlink \
--process-unknown --output-json | jq '.[0] | ."ifi-flags"'
[
"up",
"Unknown(6)",
"loopback",
"Unknown(16)"
]
./tools/net/ynl/pyynl/cli.py --family rt-neigh --dump getneigh \
--process-unknown --output-json | jq '.[] | ."ndm-type"'
"unicast"
"Unknown(5)"
"Unknown(5)"
"unicast"
"Unknown(5)"
"unicast"
"broadcast"
Signed-off-by: Donald Hunter <donald.hunter@gmail.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|