|
When we have less than PAGE_SIZE of data on the receive queue,
we set recv_skip_hint to 0. Instead, set it to the actual
number of bytes available.
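A standalone sketch of the behavioural change, with invented structure and helper names: when less than one page of data is queued, report the exact residue instead of 0, so the caller knows how many bytes to drain with recvmsg() before retrying.

#include <stddef.h>

#define PAGE_SIZE 4096u

struct zc_args {
	unsigned int length;         /* bytes the caller can map */
	unsigned int recv_skip_hint; /* bytes the caller must recvmsg() */
};

/* hypothetical helper mirroring the change described above */
static void fill_recv_skip_hint(struct zc_args *zc, size_t bytes_on_queue)
{
	if (bytes_on_queue < PAGE_SIZE) {
		zc->length = 0;
		zc->recv_skip_hint = (unsigned int)bytes_on_queue; /* was: 0 */
	}
}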
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The initial session number when a link is created is based on a random
value, taken from struct tipc_net->random. It is then incremented for
each link reset to avoid mixing protocol messages from different link
sessions.
However, when a bearer is reset all its links are deleted, and will
later be re-created using the same random value as the first time.
This means that if the link never went down between creation and
deletion we will still sometimes have two subsequent sessions with
the same session number. In virtual environments with potentially
long transmission times this has turned out to be a real problem.
We now fix this by randomizing the session number each time a link
is created.
With a session number size of 16 bits this gives a risk of session
collision of 1/64k. To reduce this further, we also introduce a sanity
check on the very first STATE message arriving at a link. If this has
an acknowledge value differing from 0, which is logically impossible,
we ignore the message. The final risk for session collision is hence
reduced to 1/4G, which should be sufficient.
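A minimal standalone sketch of the two measures, with invented names: a freshly randomized 16-bit session number at link creation, plus rejection of a first STATE message that acknowledges anything.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

struct link_sketch {
	uint16_t session;    /* 16-bit session number */
	bool rcv_started;    /* set once a valid STATE message was accepted */
};

static void link_create_sketch(struct link_sketch *l)
{
	l->session = (uint16_t)rand();   /* randomized on every creation */
	l->rcv_started = false;
}

static bool first_state_msg_sane(const struct link_sketch *l, uint16_t msg_ack)
{
	/* the very first STATE message cannot legally acknowledge anything */
	return l->rcv_started || msg_ack == 0;
}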
Signed-off-by: LUU Duc Canh <canh.d.luu@dektech.com.au>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If "td->u.target_size" is larger than sizeof(struct xt_entry_target) we
return -EINVAL. But we don't check whether it's smaller than
sizeof(struct xt_entry_target) and that could lead to an out of bounds
read.
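A standalone sketch of the two-sided size check the fix adds; the structure and function names here are illustrative, not the exact kernel identifiers.

#include <errno.h>
#include <stddef.h>

struct entry_target_sketch {
	unsigned short target_size;
	char name[29];
	unsigned char revision;
};

static int validate_target_size(unsigned short target_size, size_t nla_len)
{
	/* reject sizes smaller than the fixed header, not just larger ones */
	if (target_size < sizeof(struct entry_target_sketch) ||
	    target_size > nla_len)
		return -EINVAL;
	return 0;
}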
Fixes: 7ba699c604ab ("[NET_SCHED]: Convert actions from rtnetlink to new netlink API")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next
Steffen Klassert says:
====================
pull request (net-next): ipsec-next 2018-10-01
1) Make xfrmi_get_link_net() static to silence a sparse warning.
From Wei Yongjun.
2) Remove an unused esph pointer definition in esp_input().
From Haishuang Yan.
3) Allow the NIC driver to quietly refuse xfrm offload
if it does not support it; the SA is then created
without offload.
From Shannon Nelson.
Please pull or let me know if there are problems.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec
Steffen Klassert says:
====================
pull request (net): ipsec 2018-10-01
1) Validate address prefix lengths in the xfrm selector;
otherwise we may hit undefined behaviour in the
address matching functions if the prefix is too
big for the given address family.
2) Fix skb leak on local message size errors.
From Thadeu Lima de Souza Cascardo.
3) We currently reset the transport header back to the network
header after a transport mode transformation is applied. This
leads to an incorrect transport header when multiple transport
mode transformations are applied. Reset the transport header
only after all transformations are already applied to fix this.
From Sowmini Varadhan.
4) We only support one offloaded xfrm, so reset crypto_done after
the first transformation in xfrm_input(). Otherwise we may call
the wrong input method for subsequent transformations.
From Sowmini Varadhan.
5) Fix NULL pointer dereference when skb_dst_force clears the dst_entry.
skb_dst_force does not really force a dst refcount anymore; it might
clear it instead. The xfrm code did not expect this, so add a check to
avoid dereferencing skb_dst() if it was cleared by skb_dst_force (see
the sketch after this list).
6) Validate xfrm template mode, otherwise we can get a stack-out-of-bounds
read in xfrm_state_find. From Sean Tranchetti.
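A kernel-style sketch of the check described in item 5; the function name and the error handling are illustrative.

static int xfrm_require_dst_sketch(struct sk_buff *skb)
{
	skb_dst_force(skb);
	if (!skb_dst(skb)) {
		/* dst was cleared instead of refcounted; drop the packet */
		kfree_skb(skb);
		return -EHOSTUNREACH;
	}
	return 0;
}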
Please pull or let me know if there are problems.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Previously, receiver buffer auto-tuning starts only after receiving
one advertised window amount of data. After the initial receiver
buffer was raised by patch a337531b942b ("tcp: up initial rmem to
128KB and SYN rwin to around 64KB"), the receiver buffer may take
too long to start growing. To address this issue, this patch lowers
the initial number of bytes expected before auto-tuning starts to
roughly the sender's expected initial window.
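A standalone sketch of the idea, with illustrative names and constants: start receive-buffer auto-tuning after roughly the sender's expected initial window rather than a full advertised window.

#include <stdint.h>

#define INIT_CWND_SEGS 10u  /* typical initial congestion window, in segments */

static uint32_t initial_autotune_bytes(uint32_t rcv_wnd, uint32_t advmss)
{
	uint32_t init_win = INIT_CWND_SEGS * advmss;

	/* auto-tuning kicks in after the smaller of the two amounts */
	return rcv_wnd < init_win ? rcv_wnd : init_win;
}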
Fixes: a337531b942b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Wei Wang <weiwan@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In normal SYN processing, packets are handled without listener
lock and in RCU protected ingress path.
But syzkaller is known to be able to trick us and SYN
packets might be processed in process context, after being
queued into socket backlog.
In commit 06f877d613be ("tcp/dccp: fix other lockdep splats
accessing ireq_opt") I made a very stupid fix that happened
to work mostly because the regular path is RCU protected.
What really protects ireq->ireq_opt is the RCU read lock;
the pseudo request refcnt is not relevant.
This patch extends what I did in commit 449809a66c1d ("tcp/dccp:
block BH for SYN processing") by adding an extra rcu_read_{lock|unlock}
pair in the paths that might be taken when processing SYN packets from
the socket backlog (thus possibly in process context).
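A kernel-style sketch of the shape of the change (the handler name is invented): wrap the backlog path in an RCU read-side critical section, in addition to the BH disabling added by commit 449809a66c1d.

static int backlog_syn_sketch(struct sock *sk, struct sk_buff *skb)
{
	int ret;

	rcu_read_lock();     /* what the RCU-protected ingress path provides */
	local_bh_disable();  /* as in commit 449809a66c1d */
	ret = do_process_syn(sk, skb);  /* placeholder for the real handler */
	local_bh_enable();
	rcu_read_unlock();
	return ret;
}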
Fixes: 06f877d613be ("tcp/dccp: fix other lockdep splats accessing ireq_opt")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pablo Neira Ayuso says:
====================
Netfilter fixes for net
The following patchset contains Netfilter fixes for your net tree:
1) Skip ip_sabotage_in() for packets making their way into the VRF
driver; otherwise packets are dropped. From David Ahern.
2) Clang compilation warning uncovering typo in the
nft_validate_register_store() call from nft_osf, from Stefan Agner.
3) Fix double sizeof in netlink message length calculations in ctnetlink,
from zhong jiang.
4) Missing rb_erase() on batch full in rbtree garbage collector,
from Taehee Yoo.
5) Calm down compilation warning in nf_hook(), from Florian Westphal.
6) Add a missing check for a non-null sk in xt_socket before validating
netns provenance. From Flavio Leitner.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In order to introduce per-cpu cgroup storage, let's generalize
bpf cgroup core to support multiple cgroup storage types.
Potentially, per-node cgroup storage can be added later.
This commit is mostly a formal change that replaces the
cgroup_storage pointer with an array of cgroup_storage pointers.
It doesn't actually introduce a new storage type;
that will be done later.
Each bpf program is now able to have one cgroup storage of each type.
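A standalone sketch of the generalization, with invented type and field names: one storage slot per storage type instead of a single pointer.

enum cgroup_storage_type_sketch {
	CGROUP_STORAGE_SHARED,  /* the existing storage type */
	/* per-cpu (and possibly per-node) types to be added by later patches */
	__CGROUP_STORAGE_MAX
};

struct cgroup_storage_sketch;   /* opaque here */

struct prog_aux_sketch {
	/* was: struct cgroup_storage_sketch *cgroup_storage; */
	struct cgroup_storage_sketch *cgroup_storage[__CGROUP_STORAGE_MAX];
};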
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
|
reg_process_hint_country_ie() can free regulatory_request and return
REG_REQ_ALREADY_SET. We shouldn't use regulatory_request after it's
called. A KASAN error was observed when this happens.
BUG: KASAN: use-after-free in reg_process_hint+0x839/0x8aa [cfg80211]
Read of size 4 at addr ffff8800c430d434 by task kworker/1:3/89
<snipped>
Workqueue: events reg_todo [cfg80211]
Call Trace:
dump_stack+0xc1/0x10c
? _atomic_dec_and_lock+0x1ad/0x1ad
? _raw_spin_lock_irqsave+0xa0/0xd2
print_address_description+0x86/0x26f
? reg_process_hint+0x839/0x8aa [cfg80211]
kasan_report+0x241/0x29b
reg_process_hint+0x839/0x8aa [cfg80211]
reg_todo+0x204/0x5b9 [cfg80211]
process_one_work+0x55f/0x8d0
? worker_detach_from_pool+0x1b5/0x1b5
? _raw_spin_unlock_irq+0x65/0xdd
? _raw_spin_unlock_irqrestore+0xf3/0xf3
worker_thread+0x5dd/0x841
? kthread_parkme+0x1d/0x1d
kthread+0x270/0x285
? pr_cont_work+0xe3/0xe3
? rcu_read_unlock_sched_notrace+0xca/0xca
ret_from_fork+0x22/0x40
Allocated by task 2718:
set_track+0x63/0xfa
__kmalloc+0x119/0x1ac
regulatory_hint_country_ie+0x38/0x329 [cfg80211]
__cfg80211_connect_result+0x854/0xadd [cfg80211]
cfg80211_rx_assoc_resp+0x3bc/0x4f0 [cfg80211]
ieee80211_sta_rx_queued_mgmt+0x1803/0x7ed5 [mac80211]
ieee80211_iface_work+0x411/0x696 [mac80211]
process_one_work+0x55f/0x8d0
worker_thread+0x5dd/0x841
kthread+0x270/0x285
ret_from_fork+0x22/0x40
Freed by task 89:
set_track+0x63/0xfa
kasan_slab_free+0x6a/0x87
kfree+0xdc/0x470
reg_process_hint+0x31e/0x8aa [cfg80211]
reg_todo+0x204/0x5b9 [cfg80211]
process_one_work+0x55f/0x8d0
worker_thread+0x5dd/0x841
kthread+0x270/0x285
ret_from_fork+0x22/0x40
<snipped>
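A standalone sketch of the pattern being fixed, with invented names: once the helper may have freed the request, only its return value may be consulted.

#include <stdlib.h>

enum hint_treatment { HINT_OK, HINT_ALREADY_SET, HINT_IGNORE };

struct hint_request { int initiator; /* ... */ };

static enum hint_treatment process_country_ie_sketch(struct hint_request *req)
{
	free(req);                /* may free the request ... */
	return HINT_ALREADY_SET; /* ... and signal it via the return value */
}

static void process_hint_sketch(struct hint_request *req)
{
	enum hint_treatment t = process_country_ie_sketch(req);

	if (t == HINT_ALREADY_SET)
		return;  /* fixed: do not touch req->initiator here */
	/* ... further processing for other treatments ... */
}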
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
key->sta is only valid after ieee80211_key_link, which is called later
in this function. Because of that, the IEEE80211_KEY_FLAG_RX_MGMT is
never set when management frame protection is enabled.
Fixes: e548c49e6dc6b ("mac80211: add key flag for management keys")
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
In cfg80211_wext_giwrate(), sinfo.pertid may be allocated via
rdev_get_station(), but is never released. Fix that.
Fixes: 8689c051a201 ("cfg80211: dynamically allocate per-tid stats for station info")
Signed-off-by: Stefan Seyfried <seife+kernel@b1-systems.com>
[johannes: fix error path, use cfg80211_sinfo_release_content(), add Fixes]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Use RCU protected lookups for discovering the supported mechanisms.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Module removal is RCU safe by design, so we really have no need to
lock the auth_flavors[] array. Substitute a lockless scheme to
add/remove entries in the array, and then use RCU.
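A kernel-style sketch of such a lockless scheme (the array and helper names are invented, and sparse annotations are omitted): registration uses cmpxchg() on the array slot, and lookups use rcu_dereference() under rcu_read_lock().

static const struct rpc_authops *flavors_sketch[RPC_AUTH_MAXFLAVOR];

static int register_flavor_sketch(u32 flavor, const struct rpc_authops *ops)
{
	const struct rpc_authops *old;

	/* install ops only if the slot is empty */
	old = cmpxchg(&flavors_sketch[flavor], NULL, ops);
	if (old == NULL || old == ops)
		return 0;
	return -EPERM;
}

/* caller must hold rcu_read_lock(); module refcounting omitted here */
static const struct rpc_authops *lookup_flavor_sketch(u32 flavor)
{
	return rcu_dereference(flavors_sketch[flavor]);
}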
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
It is no longer used outside of net/sunrpc/socklib.c
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Simplify the retry logic.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
In preparation for sharing with AF_LOCAL.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Most of this code should also be reusable with other socket types.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Add a bvec array to struct xdr_buf, and have the client allocate it
when we need to receive data into pages.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
If the RPC call relies on the receive call allocating pages as buffers,
then let's label it so that we
a) Don't leak memory by allocating pages for requests that do not expect
this behaviour
b) Can optimise for the common case where calls do not require allocation.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
We no longer need priority semantics on the xprt->sending queue, because
the order in which tasks are sent is now dictated by their position in
the send queue.
Note that the backlog queue remains a priority queue, meaning that
slot resources are still managed in order of task priority.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Fix up the priority queue to not batch by owner, but by queue, so that
we allow '1 << priority' elements to be dequeued before switching to
the next priority queue.
The owner field is still used to wake up requests in round robin order
by owner to avoid single processes hogging the RPC layer by loading the
queues.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
If the server is slow, we can find ourselves with quite a lot of entries
on the receive queue. Converting the search from O(n) to O(log(n))
can make a significant difference, particularly since we have to hold
a number of locks while searching.
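A kernel-style sketch of an O(log(n)) lookup by XID using the kernel's rbtree helpers; the struct and field names are invented, and byte-order handling is omitted for brevity.

struct rqst_sketch {
	u32 rq_xid;
	struct rb_node rq_node;
};

static struct rqst_sketch *recv_queue_find_sketch(struct rb_root *root, u32 xid)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct rqst_sketch *req = rb_entry(n, struct rqst_sketch, rq_node);

		if (xid < req->rq_xid)
			n = n->rb_left;
		else if (xid > req->rq_xid)
			n = n->rb_right;
		else
			return req;
	}
	return NULL;
}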
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Treat socket write space handling in the same way we now treat transport
congestion: by denying the XPRT_LOCK until the transport signals that it
has free buffer space.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
The theory was that we would need to grab the socket lock anyway, so we
might as well use it to gate the allocation of RPC slots for a TCP
socket.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
This no longer causes them to lose their place in the transmission queue.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Rather than forcing each and every RPC task to grab the socket write
lock in order to send itself, we allow whichever task is holding the
write lock to attempt to drain the entire transmit queue.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Avoid memory starvation by giving RPCs that are tagged with the
RPC_TASK_SWAPPER flag the highest priority.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Both RDMA and UDP transports require the request to get a "congestion control"
credit before they can be transmitted. Right now, this is done when
the request locks the socket. We'd like it to happen when a request attempts
to be transmitted for the first time.
In order to support retransmission of requests that already hold such
credits, we also want to ensure that they get queued first, so that we
don't deadlock with requests that have yet to obtain a credit.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
One of the intentions with the priority queues was to ensure that no
single process can hog the transport. The field task->tk_owner therefore
identifies the RPC call's origin, and is intended to allow the RPC layer
to organise queues for fairness.
This commit therefore modifies the transmit queue to group requests
by task->tk_owner, and ensures that we round robin among those groups.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Remove the checks for whether or not we need to transmit, and whether
or not a reply has been received. Those are already handled in
call_transmit() itself.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
If the request is still on the queue, this will be incorrect behaviour.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
When we shift to using the transmit queue, then the task that holds the
write lock will not necessarily be the same as the one being transmitted.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Fix up the back channel code to recognise that the request has already
been transmitted, so it does not need to be sent again.
Also ensure that we set req->rq_task.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Move the call encoding so that it occurs before the transport connection
and related setup steps.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Add the queue that will enforce the ordering of RPC task transmission.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
When storing a struct rpc_rqst on the slot allocation list, we currently
use the same field 'rq_list' as we use to store the request on the
receive queue. Since the structure is never on both lists at the same
time, this is OK.
However, for clarity, let's make that a union with different names for
the different lists so that we can more easily distinguish between
the two states.
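A kernel-style sketch of the kind of union described; the member names are invented here, since the exact names are not given above.

struct rpc_rqst_sketch {
	/* ... */
	union {
		struct list_head rq_free_list; /* while on the slot allocation list */
		struct list_head rq_recv_list; /* while on the receive queue */
	};
	/* ... */
};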
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Allow the caller in clnt.c to call into the code to wait for a reply
after calling xprt_transmit(). Again, the reason is that the backchannel
code does not need this functionality.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Separate out the action of adding a request to the reply queue so that the
backchannel code can simply skip calling it altogether.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
We will use the same lock to protect both the transmit and receive queues.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Rather than waking up the entire queue of RPC messages a second time,
just wake up the task that was put to sleep.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
When asked to wake up an RPC task, it makes sense to test whether or not
the task is still queued.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Add a helper that will wake up a task that is sleeping on a specific
queue, and will set the value of task->tk_status. This is mainly
intended for use by the transport layer to notify the task of an
error condition.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
We are going to need to pin requests for both send and receive.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|