|
Add device tree binding documentation details for Cirrus Logic
CS8900/CS8920 ethernet chip.
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add DT support to the Cirrus Logic CS89x0 driver.
Signed-off-by: Alexander Shiyan <shc_work@mail.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Maciej Żenczykowski reported a lockdep warning that a spinlock
was not registered before being held in mlx4_cmd_wake_completions().
The cmd.context_lock initialization is not in the right place:
1) mlx4_cmd_use_events() can be called multiple times.
Calling spin_lock_init() on a live spinlock can lead
to hangs.
2) mlx4_cmd_wake_completions() can be called while the lock
has not been initialized yet.
Lockdep complains, and the current logic is not race-free.
It seems better to move the initialization earlier, into
mlx4_load_one().
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Maciej Żenczykowski <maze@google.com>
Cc: Eugenia Emantayev <eugenia@mellanox.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
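A minimal sketch of the initialization ordering described above (struct and function names are illustrative, not the actual mlx4 code): the spinlock is initialized once in the load path, so a function that may run repeatedly, or race with the completion waker, never calls spin_lock_init() on a live lock.

    #include <linux/spinlock.h>

    struct example_cmd {
            spinlock_t context_lock;
            /* ... event context, completion tracking, etc. ... */
    };

    /* May run more than once over the device's lifetime and may race with
     * the completion waker, so it must not (re)initialize the lock. */
    static int example_cmd_use_events(struct example_cmd *cmd)
    {
            /* ... set up the event context under cmd->context_lock ... */
            return 0;
    }

    /* Runs exactly once, early, before anyone can take the lock. */
    static int example_load_one(struct example_cmd *cmd)
    {
            spin_lock_init(&cmd->context_lock);
            return example_cmd_use_events(cmd);
    }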
|
|
In case of error, the function devm_kzalloc() returns a NULL pointer,
not ERR_PTR(). The IS_ERR() test in the return value check should
be replaced with a NULL test.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
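A minimal sketch of the check being corrected (the wrapper function and device pointer are illustrative): devm_kzalloc() signals failure with NULL, so the result must be NULL-tested rather than IS_ERR()-tested.

    #include <linux/device.h>
    #include <linux/slab.h>

    static int example_alloc(struct device *dev, void **out)
    {
            void *buf = devm_kzalloc(dev, 64, GFP_KERNEL);

            if (!buf)               /* correct: NULL test */
                    return -ENOMEM; /* wrong: IS_ERR(buf) would never trigger */

            *out = buf;
            return 0;
    }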
|
|
Rework the local RxRPC endpoint management.
Local endpoint objects are maintained in a flat list as before. This
should be okay as there shouldn't be more than one per open AF_RXRPC socket
(there can be fewer as local endpoints can be shared if their local service
ID is 0 and they share the same local transport parameters).
Changes:
(1) Local endpoints may now only be shared if they have local service ID 0
(ie. they're not being used for listening).
This prevents a scenario where process A is listening on the Cache
Manager port and process B contacts a fileserver - which may then
attempt to send CM requests back to B. But if A and B are sharing a
local endpoint, A will get the CM requests meant for B.
(2) We use a mutex to handle lookups and don't provide RCU-only lookups
since we only expect to access the list when opening a socket or
destroying an endpoint.
The local endpoint object is pointed to by the transport socket's
sk_user_data for the life of the transport socket - allowing us to
refer to it directly from the sk_data_ready and sk_error_report
callbacks.
(3) atomic_inc_not_zero() now exists and can be used to share a local
endpoint only if the last reference hasn't yet gone (see the sketch below).
(4) We can remove rxrpc_local_lock - a spinlock that had to be taken with
BH processing disabled given that we assume sk_user_data won't change
under us.
(5) The transport socket is shut down before we clear the sk_user_data
pointer so that we can be sure that the transport socket's callbacks
won't be invoked once the RCU destruction is scheduled.
(6) Local endpoints have a work item that handles both destruction and
event processing. This means that destruction doesn't then need to
wait for event processing. The event queues can then be cleared after
the transport socket is shut down.
(7) Local endpoints are no longer available for resurrection beyond the
life of the sockets that had them open. As soon as their last ref
goes, they are scheduled for destruction and may not have their usage
count moved from 0.
Signed-off-by: David Howells <dhowells@redhat.com>
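A minimal sketch of point (3) above, with illustrative structure and function names (not the actual rxrpc code): the lookup walks the flat list under a mutex and reuses an endpoint only if atomic_inc_not_zero() can still take a reference, so a dying endpoint is never resurrected.

    #include <linux/atomic.h>
    #include <linux/list.h>
    #include <linux/mutex.h>

    struct example_local {
            struct list_head        link;
            atomic_t                usage;
            /* ... local transport parameters, service ID, ... */
    };

    static DEFINE_MUTEX(example_local_mutex);
    static LIST_HEAD(example_local_endpoints);

    static struct example_local *example_lookup_local(void)
    {
            struct example_local *local;

            mutex_lock(&example_local_mutex);
            list_for_each_entry(local, &example_local_endpoints, link) {
                    /* ... match on local transport parameters here ... */
                    if (atomic_inc_not_zero(&local->usage)) {
                            mutex_unlock(&example_local_mutex);
                            return local;
                    }
            }
            mutex_unlock(&example_local_mutex);
            return NULL;
    }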
|
|
Separate local endpoint event handling out into its own file preparatory to
overhauling the object management aspect (which remains in the original
file).
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
The spec allows backchannels for multiple clients to share the same tcp
connection. When that happens, we need to use the same xprt for all of
them. Similarly, we need the same xps.
This fixes list corruption introduced by the multipath code.
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Acked-by: Trond Myklebust <trondmy@primarydata.com>
|
|
Also simplify the logic a bit.
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Acked-by: Trond Myklebust <trondmy@primarydata.com>
|
|
Callers of rpc_create_xprt expect it to put the xprt on success and
failure.
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Acked-by: Trond Myklebust <trondmy@primarydata.com>
|
|
Fix a regression when creating a file over a whiteout. The new
file/directory needs to use the current fsuid/fsgid, not the ones from the
mounter's credentials.
The refcounting is a bit tricky: prepare_creds() sets an original refcount,
override_creds() gets one more, which revert_cred() drops. So
1) we need to explicitly put the mounter's credentials when overriding
with the updated one
2) we need to put the original ref to the updated creds (and this can
safely be done before revert_creds(), since we'll still have the ref
from override_creds()).
Reported-by: Stephen Smalley <sds@tycho.nsa.gov>
Fixes: 3fe6e52f0626 ("ovl: override creds with the ones from the superblock mounter")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
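A minimal sketch of the refcount ordering described above (the function is illustrative and omits the mounter-credential handling the real overlayfs code also does): prepare_creds() hands back one reference, override_creds() takes another, so the original reference can be dropped before revert_creds().

    #include <linux/cred.h>
    #include <linux/errno.h>

    static int example_with_updated_creds(kuid_t fsuid, kgid_t fsgid)
    {
            const struct cred *old_cred;
            struct cred *new_cred;

            new_cred = prepare_creds();             /* original ref */
            if (!new_cred)
                    return -ENOMEM;

            new_cred->fsuid = fsuid;
            new_cred->fsgid = fsgid;

            old_cred = override_creds(new_cred);    /* takes one more ref */
            put_cred(new_cred);                     /* safe: override's ref keeps it alive */

            /* ... create the file/directory as the updated credentials ... */

            revert_creds(old_cred);                 /* drops override's ref */
            return 0;
    }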
|
|
Debugfs' open_proxy_open(), the ->open() installed at all inodes created
through debugfs_create_file_unsafe(),
- grabs a reference to the original file_operations instance passed to
debugfs_create_file_unsafe() via fops_get(),
- installs it at the file's ->f_op by means of replace_fops()
- and calls fops_put() on it.
Since the semantics of replace_fops() are such that the reference's
ownership is transferred, the subsequent fops_put() will result in a double
release when the file is eventually closed.
Currently, this is not an issue since fops_put() basically does a
module_put() on the file_operations' ->owner only and there don't exist any
modules calling debugfs_create_file_unsafe() yet. This is expected to
change in the future though, c.f. commit c64688081490 ("debugfs: add
support for self-protecting attribute file fops").
Remove the call to fops_put() from open_proxy_open().
Fixes: 9fd4dcece43a ("debugfs: prevent access to possibly dead file_operations at file open")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
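A simplified sketch of the flow described above (the real function lives in fs/debugfs/file.c and also handles protection against removed files; debugfs_real_fops() stands in for how the backing fops are retrieved, which the code of that era open-coded): replace_fops() takes over the reference obtained by fops_get(), so a trailing fops_put() would drop it a second time when the file is closed.

    #include <linux/debugfs.h>
    #include <linux/fs.h>

    static int example_open_proxy_open(struct inode *inode, struct file *filp)
    {
            const struct file_operations *real_fops = debugfs_real_fops(filp);

            if (!fops_get(real_fops))               /* take a module reference */
                    return -ENODEV;

            replace_fops(filp, real_fops);          /* ownership moves to filp->f_op */
            /* No fops_put() here: the reference is dropped at close time. */

            return real_fops->open ? real_fops->open(inode, filp) : 0;
    }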
|
|
Debugfs' full_proxy_open(), the ->open() installed at all inodes created
through debugfs_create_file(),
- grabs a reference to the original struct file_operations instance passed
to debugfs_create_file(),
- dynamically allocates a proxy struct file_operations instance wrapping
the original
- and installs this at the file's ->f_op.
Afterwards, it calls the original ->open() and passes its return value back
to the VFS layer.
Now, if that return value indicates failure, the VFS layer won't ever call
->release() and thus, neither the reference to the original file_operations
nor the memory for the proxy file_operations will get released, i.e. both
are leaked.
Upon failure of the original fops' ->open(), undo the proxy installation.
That is:
- Set the struct file ->f_op to what it had been when full_proxy_open()
was entered.
- Drop the reference to the original file_operations.
- Free the memory holding the proxy file_operations.
Fixes: 49d200deaa68 ("debugfs: prevent access to removed files' private data")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
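A heavily simplified sketch of the unwinding described above (debugfs_real_fops() stands in for how the backing fops are retrieved; the lifetime-aware wrapper setup and locking are omitted): if the real ->open() fails, the proxy installation is reverted, since no ->release() will ever run to clean it up.

    #include <linux/debugfs.h>
    #include <linux/fs.h>
    #include <linux/slab.h>

    static int example_full_proxy_open(struct inode *inode, struct file *filp)
    {
            const struct file_operations *real_fops = debugfs_real_fops(filp);
            const struct file_operations *orig_f_op = filp->f_op;
            struct file_operations *proxy_fops;
            int r = 0;

            if (!fops_get(real_fops))
                    return -ENODEV;

            proxy_fops = kzalloc(sizeof(*proxy_fops), GFP_KERNEL);
            if (!proxy_fops) {
                    fops_put(real_fops);
                    return -ENOMEM;
            }
            /* ... fill proxy_fops with lifetime-aware wrappers ... */
            filp->f_op = proxy_fops;

            if (real_fops->open) {
                    r = real_fops->open(inode, filp);
                    if (r) {
                            /* ->release() will never run: undo everything. */
                            filp->f_op = orig_f_op;
                            kfree(proxy_fops);
                            fops_put(real_fops);
                    }
            }
            return r;
    }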
|
|
Since commit 49d200deaa68 ("debugfs: prevent access to removed files'
private data"), a debugfs file's file_operations methods get proxied
through lifetime aware wrappers.
However, only a certain subset of the file_operations members is supported
by debugfs and ->mmap isn't among them -- it appears to be NULL from the
VFS layer's perspective.
This behaviour breaks the /sys/kernel/debug/kcov file introduced
concurrently with commit 5c9a8750a640 ("kernel: add kcov code coverage").
Since that file never gets removed, there is no file removal race and thus,
a lifetime checking proxy isn't needed.
Avoid the proxying for /sys/kernel/debug/kcov by creating it via
debugfs_create_file_unsafe() rather than debugfs_create_file().
Fixes: 49d200deaa68 ("debugfs: prevent access to removed files' private data")
Fixes: 5c9a8750a640 ("kernel: add kcov code coverage")
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
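A minimal sketch of the change described above (the fops contents are omitted; in kernel/kcov.c they provide open, mmap, unlocked_ioctl and release): creating the file with the _unsafe variant means the original fops, including ->mmap, are visible to the VFS directly.

    #include <linux/debugfs.h>
    #include <linux/init.h>
    #include <linux/module.h>

    static const struct file_operations kcov_fops = {
            .owner = THIS_MODULE,
            /* .open, .mmap, .unlocked_ioctl, .release as in kernel/kcov.c */
    };

    static int __init example_kcov_init(void)
    {
            /* Safe without the lifetime-checking proxy: the file is never removed. */
            debugfs_create_file_unsafe("kcov", 0600, NULL, NULL, &kcov_fops);
            return 0;
    }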
|
|
In rare cases it is possible for s_flags & MS_RDONLY to be set but
MNT_READONLY to be clear. This starting combination can cause
fs_fully_visible to fail to ensure that the new mount is readonly.
Therefore force MNT_LOCK_READONLY in the new mount if MS_RDONLY
is set on the source filesystem of the mount.
In general both MS_RDONLY and MNT_READONLY are set at the same time for
mounts, so I don't expect any programs to care. Nor do I expect
MS_RDONLY to be set on proc or sysfs in the initial user namespace,
which further decreases the likelihood of problems.
Which means this change should only affect system configurations by
paranoid sysadmins who should welcome the additional protection
as it keeps people from wriggling out of their policies.
Cc: stable@vger.kernel.org
Fixes: 8c6cf9cc829f ("mnt: Modify fs_fully_visible to deal with locked ro nodev and atime")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
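A minimal sketch of the rule described above (the helper function is illustrative; the field and flag names follow fs/namespace.c): a read-only superblock on the source filesystem forces the new mount to be locked read-only.

    #include <linux/fs.h>
    #include <linux/mount.h>

    static void example_lock_readonly(const struct super_block *sb, int *new_mnt_flags)
    {
            if (sb->s_flags & MS_RDONLY)
                    *new_mnt_flags |= MNT_LOCK_READONLY;
    }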
|
|
Rather than wait until we observe the lock being free (which might never
happen), we can also return from spin_unlock_wait if we observe that the
lock is now held by somebody else, which implies that it was unlocked
but we just missed seeing it in that state.
Furthermore, in such a scenario there is no longer a need to write back
the value that we loaded, since we know that there has been a lock
hand-off, which is sufficient to publish any stores prior to the
unlock_wait because the ARM architecture ensures that a Store-Release
instruction is multi-copy atomic when observed by a Load-Acquire
instruction.
The litmus test is something like:
AArch64
{
0:X1=x; 0:X3=y;
1:X1=y;
2:X1=y; 2:X3=x;
}
P0          | P1           | P2           ;
MOV W0,#1   | MOV W0,#1    | LDAR W0,[X1] ;
STR W0,[X1] | STLR W0,[X1] | LDR W2,[X3]  ;
DMB SY      |              |              ;
LDR W2,[X3] |              |              ;
exists
(0:X2=0 /\ 2:X0=1 /\ 2:X2=0)
where P0 is doing spin_unlock_wait, P1 is doing spin_unlock and P2 is
doing spin_lock.
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
nft_genmask_cur() has already applied the left-shift operator to the
gencursor, so there's no need to left-shift it again.
Fixes: ea4bd995b0f2 ("netfilter: nf_tables: add transaction helper functions")
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: Liping Zhang <liping.zhang@spreadtrum.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
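An illustrative sketch of the helper's contract (simplified: the real nft_genmask_cur() reads the gencursor from struct net): the helper already returns the shifted bit, so shifting its result again tests the wrong bit.

    #include <linux/types.h>

    static inline u8 example_genmask_cur(u8 gencursor)
    {
            return 1 << gencursor;
    }

    static inline bool example_elem_active(u8 elem_genmask, u8 gencursor)
    {
            /* correct: the helper's result is already a mask */
            return elem_genmask & example_genmask_cur(gencursor);
            /* wrong:  elem_genmask & (1 << example_genmask_cur(gencursor)) */
    }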
|
|
When we add a nft rule like follows:
# nft add rule filter test tcp dport vmap {1: jump test}
-ELOOP error will be returned, and the anonymous set will be
destroyed.
But after that, nf_tables_abort will also try to remove the
element and destroy the set, which was already destroyed and
freed.
If we add a wrong nft rule, nf_tables_abort will do the cleanup
work correctly, so the nf_tables_set_destroy call here is redundant and
wrong; remove it.
Signed-off-by: Liping Zhang <liping.zhang@spreadtrum.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Liping Zhang says:
"Users may add such a wrong nft rules successfully, which will cause an
endless jump loop:
# nft add rule filter test tcp dport vmap {1: jump test}
This is because before we commit, the element in the current anonymous
set is inactive, so ops->walk will skip this element and miss the
validate check."
To resolve this problem, this patch passes the generation mask to the
walk function through the iter container structure depending on the code
path:
1) If we're dumping the elements, then we have to check if the element
is active in the current generation. Thus, we check for the current
bit in the genmask.
2) If we're checking for loops, then we have to check if the element is
active in the next generation, as we're in the middle of a
transaction. Thus, we check for the next bit in the genmask.
Based on original patch from Liping Zhang.
Reported-by: Liping Zhang <liping.zhang@spreadtrum.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Tested-by: Liping Zhang <liping.zhang@spreadtrum.com>
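A minimal sketch of the genmask selection described above (the wrapper function is illustrative; the iter->genmask field is the one this patch passes through): dump paths test the current generation, loop validation tests the next one.

    #include <net/netfilter/nf_tables.h>

    static void example_init_iter_genmask(struct nft_set_iter *iter,
                                          const struct net *net, bool dumping)
    {
            iter->genmask = dumping ? nft_genmask_cur(net) : nft_genmask_next(net);
    }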
|
|
We should check whether "i" is used as a dictionary or not; "binding" is
already checked before.
Signed-off-by: Liping Zhang <liping.zhang@spreadtrum.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
I forgot to move the kmem_cache_destroy into the exit path.
Fixes: 0c5366b3a8c7 ("netfilter: conntrack: use single slab cache")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
rk_iommu_command() takes a struct rk_iommu and iterates over the slave
MMUs, so this is doubly wrong in that we're passing in the wrong pointer
and talking to MMUs that we shouldn't be.
Fixes: cd6438c5f844 ("iommu/rockchip: Reconstruct to support multi slaves")
Cc: stable@vger.kernel.org
Signed-off-by: John Keeping <john@metanate.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Since commit d16e0faab91 ("iommu: Allow selecting page sizes per domain") the
iommu core demands the page size to be set per domain, otherwise any
mapping attempts will be dropped. Make sure to set a valid page size
for the etnaviv iommu.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
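A minimal sketch of the fix described above (the function is illustrative; pgsize_bitmap is the iommu core's field): advertising a valid page size keeps the core from dropping mapping attempts for the domain.

    #include <linux/iommu.h>
    #include <linux/sizes.h>

    static void example_etnaviv_domain_init(struct iommu_domain *domain)
    {
            domain->pgsize_bitmap = SZ_4K;
    }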
|
|
Use the peer record to distribute network errors rather than the transport
object (which I want to get rid of). An error from a particular peer
terminates all calls on that peer.
For future consideration:
(1) For ICMP-induced errors it might be worth trying to extract the RxRPC
header from the offending packet, if one is returned attached to the
ICMP packet, to better direct the error.
This may be overkill, though, since an ICMP packet would be expected
to be relating to the destination port, machine or network. RxRPC
ABORT and BUSY packets give notice at RxRPC level.
(2) To also abort connection-level communications (such as CHALLENGE
packets) where indicated by an error - but that requires some revamping
of the connection event handling first.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Do a little bit of tidying in the ICMP processing code.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Don't assume anything about the address in an ICMP packet in
rxrpc_error_report() as the address may not be IPv4 in future, especially
since we're just printing these details.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Break MTU determination from ICMP out into its own function to reduce the
complexity of the error report handler.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Rename rxrpc_UDP_error_report() to rxrpc_error_report() as it might get
called for something other than UDP.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Rework peer object handling to use a hash table instead of a flat list and
to use RCU. Peer objects are no longer destroyed by passing them to a
workqueue to process, but rather are just passed to the RCU garbage
collector as kfree'able objects.
The hash function uses the local endpoint plus all the components of the
remote address, except for the RxRPC service ID. Peers thus represent a
UDP port on the remote machine as contacted by a UDP port on this machine.
The RCU read lock is used to handle non-creating lookups so that they can
be called from bottom half context in the sk_error_report handler without
having to lock the hash table against modification.
rxrpc_lookup_peer_rcu() *does* take a reference on the peer object because,
in the future, this will be passed to a work item for error distribution in
the error_report path and this function will cease being used in the
data_ready path.
Creating lookups are done under spinlock rather than mutex as they might be
set up due to an external stimulus if the local endpoint is a server.
Captured network error messages (ICMP) are handled with respect to this
struct and MTU size and RTT are cached here.
Signed-off-by: David Howells <dhowells@redhat.com>
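A minimal sketch of a non-creating lookup in the spirit described above (structure, hash key and names are illustrative, not the actual rxrpc code): the bucket is walked under the RCU read lock, and a reference is taken only if the peer is still live, so the lookup is safe from bottom-half context.

    #include <linux/atomic.h>
    #include <linux/hashtable.h>
    #include <linux/rcupdate.h>

    struct example_peer {
            struct hlist_node       hash_link;
            unsigned long           hash_key;   /* local endpoint + remote address */
            atomic_t                usage;
    };

    static DEFINE_HASHTABLE(example_peer_hash, 10);

    static struct example_peer *example_lookup_peer_rcu(unsigned long key)
    {
            struct example_peer *peer;

            rcu_read_lock();
            hash_for_each_possible_rcu(example_peer_hash, peer, hash_link, key) {
                    if (peer->hash_key == key &&
                        atomic_inc_not_zero(&peer->usage)) {
                            rcu_read_unlock();
                            return peer;
                    }
            }
            rcu_read_unlock();
            return NULL;
    }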
|
|
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under
the premise that the LSE atomics (in particular, the LDADDA instruction)
are indivisible.
Unfortunately, these instructions are only indivisible when used with the
-AL (full ordering) suffix and, consequently, the same issue can
theoretically be observed with LSE atomics, where a later (in program
order) load can be speculated before the write portion of the atomic
operation.
This patch fixes the issue by performing a CAS of the lock once we've
established that it's unlocked, in much the same way as the LL/SC code.
Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
spin_is_locked has grown two very different use-cases:
(1) [The sane case] API functions may require a certain lock to be held
by the caller and can therefore use spin_is_locked as part of an
assert statement in order to verify that the lock is indeed held.
For example, usage of assert_spin_locked.
(2) [The insane case] There are two locks, where a CPU takes one of the
locks and then checks whether or not the other one is held before
accessing some shared state. For example, the "optimized locking" in
ipc/sem.c.
In the latter case, the sequence looks like:
    spin_lock(&sem->lock);
    if (!spin_is_locked(&sma->sem_perm.lock))
            /* Access shared state */
and requires that the spin_is_locked check is ordered after taking the
sem->lock. Unfortunately, since our spinlocks are implemented using a
LDAXR/STXR sequence, the read of &sma->sem_perm.lock can be speculated
before the STXR and consequently return a stale value.
Whilst this hasn't been seen to cause issues in practice, PowerPC fixed
the same issue in 51d7d5205d33 ("powerpc: Add smp_mb() to
arch_spin_is_locked()") and, although we did something similar for
spin_unlock_wait in d86b8da04dfa ("arm64: spinlock: serialise
spin_unlock_wait against concurrent lockers") that doesn't actually take
care of ordering against local acquisition of a different lock.
This patch adds an smp_mb() to the start of our arch_spin_is_locked and
arch_spin_unlock_wait routines to ensure that the lock value is always
loaded after any other locks have been taken by the current CPU.
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
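A minimal sketch of the barrier placement described above (the lock type and function are simplified stand-ins for the arm64 implementation): a full barrier before the load keeps it from being speculated ahead of a lock acquisition performed earlier on the same CPU.

    #include <asm/barrier.h>
    #include <linux/compiler.h>

    static inline int example_spin_is_locked(unsigned int *lock)
    {
            smp_mb();       /* order against a prior spin_lock() on this CPU */
            return READ_ONCE(*lock) != 0;
    }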
|
|
There is a problem in the non-devicetree PMU probing where some
probe functions may get the number of supported events through
smp_call_function_any() using the arm_pmu supported_cpus mask.
But at the time the probe function is called, the supported_cpus
mask is empty so the call fails. This patch makes sure the mask
is set before calling the init function rather than after.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
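A minimal sketch of the ordering change described above (structure and callback names are illustrative): the supported_cpus mask is populated before ->init() runs, because init may use smp_call_function_any() against that mask to probe the PMU.

    #include <linux/cpumask.h>

    struct example_pmu {
            cpumask_t supported_cpus;
            int (*init)(struct example_pmu *pmu);
    };

    static int example_pmu_probe(struct example_pmu *pmu)
    {
            cpumask_copy(&pmu->supported_cpus, cpu_possible_mask);  /* first */
            return pmu->init(pmu);                                  /* then init */
    }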
|
|
If the USB cable is connected prior to boot, we don't get any interrupts,
so we must manually check the VBUS state and report it during probe.
If we don't do this, the USB controller will never know that the peripheral
cable was connected until the user unplugs and replugs the cable.
Fixes: b7aad8e2685b ("extcon: palmas: Add the support for VBUS detection by using GPIO")
Cc: stable@vger.kernel.org
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
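A minimal sketch of the probe-time check described above (structure and function names are illustrative, not the palmas driver's symbols; extcon_set_state_sync() is the current extcon reporting API, which may differ from the helper the driver used at the time): VBUS is sampled once at the end of probe so a cable plugged in before boot is reported immediately.

    #include <linux/extcon.h>
    #include <linux/gpio/consumer.h>

    struct example_usb {
            struct extcon_dev       *edev;
            struct gpio_desc        *vbus_gpiod;
    };

    static void example_report_initial_vbus(struct example_usb *usb)
    {
            bool vbus = gpiod_get_value_cansleep(usb->vbus_gpiod);

            extcon_set_state_sync(usb->edev, EXTCON_USB, vbus);
    }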
|
|
This function is just ->init(), rename it to make it obvious.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
These should have been removed when we removed CONFIG_NET_CLS_POLICE.
We cannot remove them completely since they are exposed
to userspace.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
* 'linux-4.7' of git://github.com/skeggsb/linux:
drm/nouveau/iccsense: fix memory leak
drm/nouveau/Revert "drm/nouveau/device/pci: set as non-CPU-coherent on ARM64"
|
|
Sowmini Varadhan says:
====================
RDS: multiple connection paths for scaling
Today RDS-over-TCP is implemented by demux-ing multiple PF_RDS sockets
between any 2 endpoints (where endpoint == [IP address, port]) over a
single TCP socket between the 2 IP addresses involved. This has the
limitation that it ends up funneling multiple RDS flows over a single
TCP flow, thus the rds/tcp connection is
(a) upper-bounded to the single-flow bandwidth,
(b) suffers from head-of-line blocking for the RDS sockets.
Better throughput (for a fixed small packet size, MTU) can be achieved
by having multiple TCP/IP flows per rds/tcp connection, i.e., multipathed
RDS (mprds). Each such TCP/IP flow constitutes a path for the rds/tcp
connection. RDS sockets will be attached to a path based on some hash
(e.g., of local address and RDS port number) and packets for that RDS
socket will be sent over the attached path using TCP to segment/reassemble
RDS datagrams on that path.
The table below, generated using a prototype that implements mprds,
shows that this is significant for scaling to 40G. Packet sizes
used were: 8K byte req, 256 byte resp. MTU: 1500. The parameters for
RDS-concurrency used below are described in the rds-stress(1) man page;
the number listed is proportional to the number of threads at which max
throughput was attained.
-------------------------------------------------------------------
RDS-concurrency   Num of      tx+rx K/s (iops)    throughput
(-t N -d N)       TCP paths
-------------------------------------------------------------------
16                1           600K - 700K         4 Gbps
28                8           5000K - 6000K       32 Gbps
-------------------------------------------------------------------
FAQ: what is the relation between mprds and mptcp?
mprds is orthogonal to mptcp. Whereas mptcp creates
sub-flows for a single TCP connection, mprds parallelizes tx/rx
at the RDS layer. MPRDS with N paths will allow N datagrams to
be sent in parallel; each path will continue to send one
datagram at a time, with sender and receiver keeping track of
the retransmit and dgram-assembly state based on the RDS header.
If desired, mptcp can additionally be used to speed up each TCP
path. That acceleration is orthogonal to the parallelization benefits
of mprds.
This patch series lays down the foundational data-structures to support
mprds in the kernel. It implements the changes to split up the
rds_connection structure into a common (to all paths) part,
and a per-path rds_conn_path. All I/O workqs are driven from
the rds_conn_path.
Note that this patchset does not (yet) actually enable multipathing
for any of the transports; all transports will continue to use a
single path with the refactored data-structures. A subsequent patchset
will add the changes to the rds-tcp module to actually use mprds
in rds-tcp.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
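A minimal sketch of the path-selection idea mentioned in the cover letter (the function is illustrative; the actual hash used by mprds may differ): the bound local address and port are hashed to pick one of the N paths of an MP-capable connection.

    #include <linux/jhash.h>
    #include <linux/types.h>

    static int example_rds_pick_path(__be32 laddr, __be16 lport, int npaths)
    {
            u32 h = jhash_2words((__force u32)laddr, (__force u32)lport, 0);

            return h % npaths;
    }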
|
|
Refactor rds_conn_destroy() so that the per-path dismantling
is done in rds_conn_path_destroy(), which rds_conn_destroy() then
iterates over as needed.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit changes rds_conn_shutdown to take a rds_conn_path *
argument, allowing it to shutdown paths other than c_path[0] for
MP-capable transports.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a for() loop in __rds_conn_create to initialize all the
conn_paths, in preparation for MP-capable transports.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rds_conn_path_error() is the MP-aware analog of rds_conn_error,
to be used by multipath-capable callers.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit updates the callbacks related to the rds-info command
so that they walk through all the rds_conn_path structures and
report the requested info.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rds_conn_path_connect_if_down() works on the rds_conn_path
that it is passed. Callers that are not t_mp_capable may continue
calling rds_conn_connect_if_down, which will invoke
rds_conn_path_connect_if_down() with the default c_path[0].
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
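A sketch of the compatibility wrapper described above (consistent with the commit text; the include is the RDS-internal header): non-MP-capable callers keep using the old entry point, which simply forwards to the path-aware helper with the default path c_path[0].

    #include "rds.h"    /* struct rds_connection, struct rds_conn_path */

    void rds_conn_connect_if_down(struct rds_connection *conn)
    {
            rds_conn_path_connect_if_down(&conn->c_path[0]);
    }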
|
|
This commit allows rds_send_pong() callers to send back
the rds pong message on some path other than c_path[0] by
passing in a struct rds_conn_path * argument. It also
removes the last dependency on the #defines in rds_single_path.h
from send.c.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Explicitly set up the rds_conn_path, either from i_conn_path (for
MP-capable transports) or as c_path[0], and use this in
rds_send_drop_to().
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pass a struct rds_conn_path to rds_send_xmit so that MP capable
transports can transmit packets on something other than c_path[0].
The eventual goal for MP capable transports is to hash the rds
socket to a path based on the bound local address/port, and use
this path as the argument to rds_send_xmit()
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Pass the rds_conn_path to rds_send_queue_rm, and use it to initialize
the i_conn_path field in struct rds_incoming. This commit also makes
rds_send_queue_rm() MP capable, because it now takes locks
specific to the rds_conn_path passed in, instead of defaulting to
the c_path[0] based defines from rds_single_path.h
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The only caller of rds_send_get_message() was
rds_iw_send_cq_comp_handler() which was removed as part of
commit dcdede0406d3 ("RDS: Drop stale iWARP RDMA transport"),
so remove rds_send_get_message() for the same reason.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rds_send_path_drop_acked() is the path-specific version of
rds_send_drop_acked() to be invoked by MP capable callers.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
rds_send_path_reset() is the path specific version of rds_send_reset()
intended for MP capable callers.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
t_mp_capable transports can use rds_inc_path_init to initialize
all fields in struct rds_incoming, including the i_conn_path.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|