Age | Commit message | Author |
|
Minor changes, such as new extended port statistics.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2018-03-30
This series contains updates to mlx5 core and mlx5e netdev drivers.
The main highlight of this series is the RX optimizations for striding RQ path,
introduced by Tariq.
The first four patches are trivial misc cleanups:
- Spelling mistake fix
- Dead code removal
- Warning messages
RX optimizations for striding RQ:
1) RX refactoring, cleanups and micro optimizations
- MTU calculation simplifications, obsoletes some WQEs-to-packets translation
functions and helps delete ~60 LOC.
- Do not busy-wait a pending UMR completion.
- Post the new values of the UMR WQE inline, instead of using a data pointer.
- Use pre-initialized structures to save calculations in the datapath.
2) Use linear SKB in Striding RQ ("build_skb"). Using a linear SKB has many advantages:
- Saves a memcpy of the headers.
- No page-boundary checks in datapath.
- No filler CQEs.
- Significantly smaller CQ.
- SKB data resides contiguously in the linear part, rather than being split
into a small part (linear) and a large part (fragment).
This saves datapath cycles in driver and improves utilization
of SKB fragments in GRO.
- The fragments of a resulting GRO SKB follow the IP forwarding
assumption of equal-size fragments.
Implementation details:
HW writes the packets to the beginning of a stride,
i.e. does not keep headroom. To overcome this we make sure we can
extend backwards and use the last bytes of stride i-1.
Extra care is needed for stride 0 as it has no preceding stride.
We make sure headroom bytes are available by shifting the buffer
pointer passed to HW by headroom bytes.
This configuration now becomes the default whenever the HW is capable.
Of course, this implies turning LRO off.
Performance testing:
ConnectX-5, single core, single RX ring, default MTU.
UDP packet rate, early drop in TC layer:
--------------------------------------------
| pkt size | before | after | ratio |
--------------------------------------------
| 1500byte | 4.65 Mpps | 5.96 Mpps | 1.28x |
| 500byte | 5.23 Mpps | 5.97 Mpps | 1.14x |
| 64byte | 5.94 Mpps | 5.96 Mpps | 1.00x |
--------------------------------------------
TCP streams: ~20% gain
3) Support XDP over Striding RQ:
Now that linear SKB is supported over Striding RQ,
we can support XDP by setting stride size to PAGE_SIZE
and headroom to XDP_PACKET_HEADROOM.
Striding RQ is capable of a higher packet-rate than
conventional RQ.
Performance testing:
ConnectX-5, 24 rings, default MTU.
CQE compression ON (to reduce completions BW in PCI).
XDP_DROP packet rate:
--------------------------------------------------
| pkt size | XDP rate | 100GbE linerate | pct% |
--------------------------------------------------
| 64byte | 126.2 Mpps | 148.0 Mpps | 85% |
| 128byte | 80.0 Mpps | 84.8 Mpps | 94% |
| 256byte | 42.7 Mpps | 42.7 Mpps | 100% |
| 512byte | 23.4 Mpps | 23.4 Mpps | 100% |
--------------------------------------------------
4) Remove mlx5 page_ref bulking in Striding RQ and use page_ref_inc only when needed.
Without this bulking, we have:
- no atomic ops on WQE allocation or free
- one atomic op per SKB
- In the default MTU configuration (1500, stride size is 2K),
the non-bulking method executes 2 atomic ops, as before.
- For larger MTUs with a stride size of 4K, the non-bulking method
executes only a single op.
- For XDP (stride size of 4K, no SKBs), the non-bulking method has no atomic ops per packet at all.
Performance testing:
ConnectX-5, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.
Single core packet rate (64 bytes).
Early drop in TC: no degradation.
XDP_DROP:
before: 14,270,188 pps
after: 20,503,603 pps, 43% improvement.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The variables msg and data have the same value. This patch removes
the extra one.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rather than use an on-stack array to copy a broadcast address, use
the generic eth_broadcast_addr function to save a trivial amount of
object code.
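For illustration, a minimal sketch of the change being described; the surrounding
function is hypothetical, while eth_broadcast_addr() is the generic helper from
<linux/etherdevice.h>:
#include <linux/etherdevice.h>	/* eth_broadcast_addr(), ETH_ALEN */

/* Hypothetical caller; only eth_broadcast_addr() is the real helper. */
static void example_fill_broadcast(u8 *dmac)
{
	/*
	 * Before: a local array copied into place, e.g.
	 *   u8 bcast[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
	 *   memcpy(dmac, bcast, ETH_ALEN);
	 * After: the generic helper, which boils down to
	 * memset(addr, 0xff, ETH_ALEN).
	 */
	eth_broadcast_addr(dmac);
}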
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This function calls call_netdevice_notifier(), which also
may take net_rwsem. So, we can't use net_rwsem here.
This patch makes callers of this function take pernet_ops_rwsem,
like register_netdevice_notifier() does. This will protect
the modifications of net_namespace_list, and allows notifiers
to take it (they won't have to care about context).
Since __rtnl_link_unregister() is used on module load
and unload (which are not frequent operations), this looks
better to me than making every call_netdevice_notifier()
always execute in "protected net_namespace_list" context.
Also, this fixes the problem we dealt with in 328fbe747ad4
"Close race between {un, }register_netdevice_notifier and ...",
and guarantees that __rtnl_link_unregister() does not skip
an exiting net.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There is no need for explicit calls of devm_kfree(), as the allocated
memory will be freed when the driver detaches.
The driver core clears the driver data to NULL after device_release.
Thus, there is no need to manually clear the device driver data to NULL.
So remove the unnecessary pci_set_drvdata() and devm_kfree() calls.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Change nsim_devlink_setup to return any error back to the caller and
update nsim_init to handle it.
Requested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ndo_set_rx_mode() callback is called from atomic context, which causes
message response timeouts during VF-to-PF communication via MSI-X.
To get rid of that, copy the passed mc list, parse the flags, and queue the
handling of the kernel request onto an ordered workqueue.
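A minimal sketch of this deferral pattern, assuming hypothetical structure and
function names (only the workqueue API calls are the standard kernel ones):
#include <linux/netdevice.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Illustrative deferral of rx-mode handling to an ordered workqueue. */
struct xcast_work {
	struct work_struct work;
	struct net_device *netdev;
	/* copied mc list and parsed flags would be stored here */
};

static struct workqueue_struct *xcast_wq;

static int example_wq_init(void)
{
	xcast_wq = alloc_ordered_workqueue("example_xcast", 0);
	return xcast_wq ? 0 : -ENOMEM;
}

static void xcast_work_handler(struct work_struct *work)
{
	struct xcast_work *xw = container_of(work, struct xcast_work, work);

	/* Sleeping is allowed here: send the mailbox requests to the PF
	 * and wait for their MSI-X signalled responses. */
	kfree(xw);
}

static void example_ndo_set_rx_mode(struct net_device *netdev)
{
	/* Atomic context: only snapshot the state and defer the real work. */
	struct xcast_work *xw = kzalloc(sizeof(*xw), GFP_ATOMIC);

	if (!xw)
		return;
	xw->netdev = netdev;
	INIT_WORK(&xw->work, xcast_work_handler);
	queue_work(xcast_wq, &xw->work);
}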
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The kernel calls the ndo_set_rx_mode() callback from atomic context, which
causes messaging timeouts between VF and PF (as their messaging is implemented
via MSI-X). So in order to handle ndo_set_rx_mode() we need to move that
handling out of atomic context.
This commit implements the necessary workqueue-related structures to let the VF
queue kernel request processing in non-atomic context later.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This commit adds message handling for the ndo_set_rx_mode()
callback on the PF side.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The kernel calls the ndo_set_rx_mode() callback supplying it with all necessary
info, such as device state flags, the multicast MAC address list and so on.
Since we have only 128 bits to communicate with the PF, we need to initiate
several requests to the PF, each carrying a small/short operation based on the
input data.
So this commit implements the following PF message codes along with new
data structures for them (an illustrative sketch follows the list):
NIC_MBOX_MSG_RESET_XCAST to flush all filters configured for this
particular network interface (VF)
NIC_MBOX_MSG_ADD_MCAST to add a new MAC address to the DMAC filter registers
for this particular network interface (VF)
NIC_MBOX_MSG_SET_XCAST to apply the filtering configuration to the filter
control register
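For orientation, a sketch of what such definitions could look like; the opcode
values and field layout are assumptions, only the message names come from this
changelog:
#include <linux/types.h>

/* Illustrative only: every message must fit the 128-bit VF<->PF mailbox. */
enum {
	NIC_MBOX_MSG_RESET_XCAST = 0xF1,	/* drop all filters of this VF    */
	NIC_MBOX_MSG_ADD_MCAST   = 0xF2,	/* add one MAC to DMAC filters    */
	NIC_MBOX_MSG_SET_XCAST   = 0xF3,	/* program the filter-control reg */
};

struct xcast_addr_msg {			/* hypothetical ADD_MCAST payload */
	u8  msg;			/* NIC_MBOX_MSG_ADD_MCAST         */
	u8  vf_id;
	u64 mac:48;			/* one MAC address per request    */
};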
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ThunderX NIC can be partitioned into up to 128 VFs, which are then
presented to the system. Each VF is mapped to a BGX:LMAC pair, and each VF
is configured by the kernel individually. Eventually, several VFs could be
mapped onto the same BGX:LMAC pair and could thus issue multiple multicast
filtering configuration requests to the LMAC with the same MAC addresses.
This commit adds the ThunderX NIC BGX filtering manipulation routines.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ThunderX NIC has two Ethernet interfaces (BGX), each of which can have
up to four logical MACs (LMACs) configured. Each BGX has 32 filters for
filtering ingress packets. The number of filters available to a particular
LMAC ranges from 8 (if four LMACs are configured per BGX) up to 32 (in case
only one LMAC is configured per BGX).
At the same time, the NIC can present up to 128 VFs to the OS as network
interfaces, each of which the kernel will configure with a set of MAC
addresses for filtering. So to prevent duplicates in the BGX filter registers
coming from different network interfaces, it is required to cache and track
all filter configuration requests prior to applying them to the BGX filter
registers.
This commit updates the LMAC structures with control fields for
allocating/releasing the filter tracking list, and implements the per-LMAC
DMAC array allocation/release.
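A rough sketch of the kind of per-LMAC tracking state being described; the
structure and field names are illustrative, not the driver's actual ones:
#include <linux/types.h>

/* Illustrative per-LMAC bookkeeping for cached DMAC filter requests. */
struct lmac_dmac_cache {
	u64 *dmacs;	/* MACs requested by the VFs mapped onto this LMAC   */
	u8 dmacs_count;	/* entries currently cached                          */
	u8 dmacs_cfg;	/* filter slots granted to this LMAC (8..32 per BGX) */
};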
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ThunderX NIC has a set of registers which allow configuring the filter
policy for ingress packets. There are three possible regimes of filtering
multicasts, broadcasts and unicasts: accept all, reject all,
and accept only what the filter allows.
The current implementation has an enum with all of them and two generic macros
for enabling filtering at all (CAM_ACCEPT) and enabling/disabling
broadcast packets, which also should be corrected in order to represent the
register bits properly. All these values are private to the driver and
there is no need to 'publish' them via a header file.
This commit moves the filtering register manipulation values from the
header file into the source file, with explicit assignment of the exact
register values to be used while configuring the registers.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The Meson8m2 SoC uses a register layout for the dwmac glue that is similar
(potentially even identical) to the one on the Meson8b and GXBB SoCs.
Add a new compatible string and update the module description to
indicate support for these SoCs.
Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rename pcie-dpc.c to dpc.c. The path "drivers/pci/pcie/pcie-dpc.c" has
more occurrences of "pci" than necessary.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
Cache some config data from VF0 and use it for all other VFs instead of
reading it from the config space of each VF. We assume these items are the
same across all associated VFs:
Revision ID
Class Code
Subsystem Vendor ID
Subsystem ID
This is an optimization when enabling SR-IOV on a device with many VFs.
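A hedged sketch of the caching idea; the structure and function names are
hypothetical, while the config accessors and register offsets are the standard
PCI ones:
#include <linux/pci.h>

/* Illustrative cache of config-space fields shared by all VFs of a PF. */
struct vf_common_cfg {
	u8  revision;
	u32 class_code;
	u16 subsys_vendor;
	u16 subsys_device;
};

static void example_cache_vf0_config(struct pci_dev *vf0,
				     struct vf_common_cfg *cfg)
{
	u32 class_rev;

	/* Read once from VF0 ... */
	pci_read_config_dword(vf0, PCI_CLASS_REVISION, &class_rev);
	cfg->revision   = class_rev & 0xff;
	cfg->class_code = class_rev >> 8;
	pci_read_config_word(vf0, PCI_SUBSYSTEM_VENDOR_ID, &cfg->subsys_vendor);
	pci_read_config_word(vf0, PCI_SUBSYSTEM_ID, &cfg->subsys_device);
	/* ... then reuse these values for every other VF instead of
	 * touching each VF's config space during SR-IOV enable. */
}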
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
[bhelgaas: changelog, simplify comments, remove unused "device", test
CONFIG_PCI_IOV instead of CONFIG_PCI_ATS, rename functions]
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
|
|
This issue happens on the new ASUS laptop UX331UA, which has a modern
standby mode (suspend-to-idle). Pressing keys on the PS/2 keyboard
can't wake the system from suspend-to-idle, which is not expected.
However, pressing the power button wakes it up without problem.
Per the ASUS engineers, the keypress event is routed to the Embedded
Controller (EC) in standby mode. The EC then signals the SCI event to the
BIOS so the BIOS will Notify() the power button to wake up the system. That
is the BIOS perspective. What we observe here is that the kernel receives
the SCI event from the SCI interrupt handler, which informs it that the GPE
status bit belonging to the EC needs to be handled, and then queries the EC
to find out what event is pending. It then executes the following ACPI
_QDF method, which is defined in the ACPI DSDT for the EC, to notify the power button.
Method (_QDF, 0, NotSerialized) // _Qxx: EC Query
{
Notify (PWRB, 0x80) // Status Change
}
With more debug messages added to analyze this problem, we find that
the keypress does wake up the system from suspend-to-idle, but it goes back
to suspend again almost immediately. As we see in the following messages,
acpi_button_notify() is invoked but acpi_pm_wakeup_event() cannot
really wake up the system here because acpi_s2idle_wakeup() is false.
acpi_s2idle_wakeup() returned false because acpi_s2idle_sync() had
already exited.
[ 52.987048] s2idle_loop going s2idle
[ 59.713392] acpi_s2idle_wake enter
[ 59.713394] acpi_s2idle_wake exit
[ 59.760888] acpi_ev_gpe_detect enter
[ 59.760893] acpi_s2idle_sync enter
[ 59.760893] acpi_ec_query_flushed ec pending queries 0
[ 59.760953] Read registers for GPE 50-57: Status=01, Enable=01, RunEnable=01, WakeEnable=00
[ 59.760955] ACPI: EC: ===== IRQ (1) =====
[ 59.760972] ACPI: EC: EC_SC(R) = 0x28 SCI_EVT=1 BURST=0 CMD=1 IBF=0 OBF=0
[ 59.760979] ACPI: EC: +++++ Polling enabled +++++
[ 59.760979] ACPI: EC: ##### Command(QR_EC) submitted/blocked #####
[ 59.761003] acpi_s2idle_sync exit
[ 59.769587] ACPI: EC: ##### Query(0xdf) started #####
[ 59.769611] ACPI: EC: ##### Query(0xdf) stopped #####
[ 59.774154] acpi_button_notify button type 1
[ 59.813175] s2idle_loop going s2idle
acpi_s2idle_sync() already makes an effort to flush the EC event
queue, but in this case, the EC event has yet to be generated when
the call to acpi_ec_flush_work() is made. The event is generated
shortly after, through the ongoing handling of the SCI interrupt
which is happening on another CPU, and we must synchronize that
to make sure that it has run and completed. Adding another call to
acpi_os_wait_events_complete() solves this issue, since that
function synchronizes with SCI interrupt completion.
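Conceptually, the fix adds one more synchronization point to the s2idle sync
path, roughly along these lines (a sketch of the idea, not the exact upstream
diff):
static void acpi_s2idle_sync(void)
{
	/* Flush EC work that has already been queued ... */
	acpi_ec_flush_work();
	/* ... and also wait for an SCI handler that may still be running
	 * on another CPU, so the EC query it raises is not missed. */
	acpi_os_wait_events_complete();
}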
Signed-off-by: Chris Chiu <chiu@endlessm.com>
[ rjw: Subject ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Pull networking fixes from David Miller:
1) Fix RCU locking in xfrm_local_error(), from Taehee Yoo.
2) Fix return value assignments and thus error checking in
iwl_mvm_start_ap_ibss(), from Johannes Berg.
3) Don't count header length twice in vti4, from Stefano Brivio.
4) Fix deadlock in rt6_age_examine_exception, from Eric Dumazet.
5) Fix out-of-bounds access in nf_sk_lookup_slow{v4,v6}() from Subash
Abhinov.
6) Check nladdr size in netlink_connect(), from Alexander Potapenko.
7) VF representor SQ numbers are 32 not 16 bits, in mlx5 driver, from
Or Gerlitz.
8) Out of bounds read in skb_network_protocol(), from Eric Dumazet.
9) r8169 driver sets driver data pointer after register_netdev() which
is too late. Fix from Heiner Kallweit.
10) Fix memory leak in mlx4 driver, from Moshe Shemesh.
11) The multi-VLAN decap fix added a regression when dealing with devices
that lack a MAC header, such as tun. Fix from Toshiaki Makita.
12) Fix integer overflow in dynamic interrupt coalescing code. From Tal
Gilboa.
13) Use after free in vrf code, from David Ahern.
14) IPV6 route leak between VRFs fix, also from David Ahern.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (81 commits)
net: mvneta: fix enable of all initialized RXQs
net/ipv6: Fix route leaking between VRFs
vrf: Fix use after free and double free in vrf_finish_output
ipv6: sr: fix seg6 encap performances with TSO enabled
net/dim: Fix int overflow
vlan: Fix vlan insertion for packets without ethernet header
net: Fix untag for vlan packets without ethernet header
atm: iphase: fix spelling mistake: "Receiverd" -> "Received"
vhost: validate log when IOTLB is enabled
qede: Do not drop rx-checksum invalidated packets.
hv_netvsc: enable multicast if necessary
ip_tunnel: Resolve ipsec merge conflict properly.
lan78xx: Crash in lan78xx_writ_reg (Workqueue: events lan78xx_deferred_multicast_write)
qede: Fix barrier usage after tx doorbell write.
vhost: correctly remove wait queue during poll failure
net/mlx4_core: Fix memory leak while delete slave's resources
net/mlx4_en: Fix mixed PFC and Global pause user control requests
net/smc: use announced length in sock_recvmsg()
llc: properly handle dev_queue_xmit() return value
strparser: Fix sign of err codes
...
|
|
Upon a new UMR post, check if the WQE buffer contains
a previous UMR WQE. If so, modify the dynamic fields
instead of overwriting the whole WQE. This saves a memcpy.
In the current setting, after 2 WQ cycles (12 UMR posts),
this will always be the case.
No degradation sensed.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
All UMR WQEs of an RQ share many common fields. We use
pre-initialized structures to save calculations in datapath.
One field (xlt_offset) was the only reason we saved a pre-initialized
copy per WQE index.
Here we remove its initialization (move its calculation to datapath),
and reduce the number of copies to one-per-RQ.
A very small datapath calculation is added; it occurs once per MPWQE
(i.e. once every 256KB), but it reduces memory consumption and gives
better cache utilization.
Performance testing:
Tested packet rate, no degradation sensed.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
When many packets reside on the same page, the bulking of
page_ref modifications reduces the total number of atomic
operations executed.
Besides the necessary 2 operations on page alloc/free, we
have the following extra ops per page:
- one on WQE allocation (bump refcnt to maximum possible),
- zero ops for SKBs,
- one on WQE free,
a constant of two operations in total, no matter how many
packets/SKBs actually populate the page.
Without this bulking, we have:
- no ops on WQE allocation or free,
- one op per SKB,
Comparing the two methods when PAGE_SIZE is 4K:
- As mentioned above, bulking method always executes 2 operations,
not more, but not less.
- In the default MTU configuration (1500, stride size is 2K),
the non-bulking method executes 2 ops as well.
- For larger MTUs with stride size of 4K, non-bulking method
executes only a single op.
- For XDP (stride size of 4K, no SKBs), non-bulking method
executes no ops at all!
Hence, to optimize the flows with linear SKB and XDP over Striding RQ,
we here remove the page_ref bulking method.
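To make the accounting above concrete, a small stand-alone calculation of the
per-page atomic page_ref operations for both methods, using the stride and
packet counts from the text (4K pages assumed):
#include <stdio.h>

/* Atomic page_ref ops per page, beyond the mandatory alloc/free pair. */
static int bulking_ops(int pkts_per_page)
{
	(void)pkts_per_page;
	return 2;			/* one on WQE alloc, one on WQE free */
}

static int non_bulking_ops(int pkts_per_page, int builds_skb)
{
	return builds_skb ? pkts_per_page : 0;	/* one per SKB, none for XDP */
}

int main(void)
{
	/* MTU 1500 -> 2K stride -> 2 packets per 4K page */
	printf("MTU 1500:  bulking %d, non-bulking %d\n",
	       bulking_ops(2), non_bulking_ops(2, 1));
	/* larger MTU -> 4K stride -> 1 packet per page */
	printf("4K stride: bulking %d, non-bulking %d\n",
	       bulking_ops(1), non_bulking_ops(1, 1));
	/* XDP: 4K stride, no SKBs */
	printf("XDP:       bulking %d, non-bulking %d\n",
	       bulking_ops(1), non_bulking_ops(1, 0));
	return 0;
}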
Performance testing:
ConnectX-5, Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz.
Single core packet rate (64 bytes).
Early drop in TC: no degradation.
XDP_DROP:
before: 14,270,188 pps
after: 20,503,603 pps, 43% improvement.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Add XDP support over Striding RQ.
Now that linear SKB is supported over Striding RQ,
we can support XDP by setting stride size to PAGE_SIZE
and headroom to XDP_PACKET_HEADROOM.
Upon an MPWQE free, do not release pages that are in flight for
XDP xmit; they will be released upon completion.
Striding RQ is capable of a higher packet-rate than
conventional RQ.
A performance gain is expected for all cases that had
a HW packet-rate bottleneck. This is the case whenever
using many flows that distribute to many cores.
Performance testing:
ConnectX-5, 24 rings, default MTU.
CQE compression ON (to reduce completions BW in PCI).
XDP_DROP packet rate:
--------------------------------------------------
| pkt size | XDP rate | 100GbE linerate | pct% |
--------------------------------------------------
| 64byte | 126.2 Mpps | 148.0 Mpps | 85% |
| 128byte | 80.0 Mpps | 84.8 Mpps | 94% |
| 256byte | 42.7 Mpps | 42.7 Mpps | 100% |
| 512byte | 23.4 Mpps | 23.4 Mpps | 100% |
--------------------------------------------------
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Make the xdp_xmit indication available for Striding RQ
by taking it out of the type-specific union.
This refactor is a preparation for a downstream patch that
adds XDP support over Striding RQ.
In addition, use a bitmap instead of a boolean for possible
future flags.
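The boolean-to-bitmap change follows a common kernel pattern; an illustrative
sketch with hypothetical flag and structure names:
#include <linux/bitops.h>
#include <linux/types.h>

/* Hypothetical flag bits; a bitmap leaves room for future RQ flags. */
enum {
	EXAMPLE_RQ_FLAG_XDP_XMIT,	/* was: bool xdp_xmit */
	EXAMPLE_RQ_NUM_FLAGS,
};

struct example_rq {
	DECLARE_BITMAP(flags, EXAMPLE_RQ_NUM_FLAGS);
};

static void example_mark_xdp_xmit(struct example_rq *rq)
{
	__set_bit(EXAMPLE_RQ_FLAG_XDP_XMIT, rq->flags);	/* ring is single-threaded */
}

static bool example_has_xdp_xmit(struct example_rq *rq)
{
	return test_bit(EXAMPLE_RQ_FLAG_XDP_XMIT, rq->flags);
}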
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
The current Striding RQ HW feature utilizes the RX buffers so that
there is no wasted room between the strides. This maximizes
memory utilization, but it prevents the use of build_skb()
(which requires headroom and tailroom) and demands a memcpy of the
packet headers into the SKB linear part.
In this patch, whenever a set of conditions holds, we apply
an RQ configuration that allows combining the use of linear SKB
on top of a Striding RQ.
To use build_skb() with Striding RQ, the following must hold:
1. packet does not cross a page boundary.
2. there is enough headroom and tailroom surrounding the packet.
We can satisfy 1 and 2 by configuring:
stride size = MTU + headroom + tailroom.
This is possible only when:
a. (MTU + headroom + tailroom) does not exceed PAGE_SIZE.
b. HW LRO is turned off.
(A quick stride-size check for the default MTU follows.)
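Here is that check as a stand-alone sketch; PAGE_SIZE of 4K is assumed, and the
headroom/tailroom values are typical ones rather than the driver's exact macros:
#include <stdio.h>

/* Typical values, not the driver's exact macros: headroom like NET_SKB_PAD,
 * tailroom roughly SKB_DATA_ALIGN(sizeof(struct skb_shared_info)). */
#define PAGE_SZ  4096u
#define HEADROOM   64u
#define TAILROOM  320u

static unsigned int roundup_pow2(unsigned int v)
{
	unsigned int p = 1;

	while (p < v)
		p <<= 1;
	return p;
}

int main(void)
{
	unsigned int mtu = 1500, frame = mtu + 14;	/* + Ethernet header */
	unsigned int stride = roundup_pow2(frame + HEADROOM + TAILROOM);

	/* prints: stride size = 2048, fits in a page -> linear SKB usable */
	printf("stride size = %u, %s\n", stride,
	       stride <= PAGE_SZ ? "fits in a page -> linear SKB usable"
				 : "does not fit -> fall back to memcpy path");
	return 0;
}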
Using linear SKB has many advantages:
- Saves a memcpy of the headers.
- No page-boundary checks in datapath.
- No filler CQEs.
- Significantly smaller CQ.
- SKB data resides contiguously in the linear part, rather than being split
into a small part (linear) and a large part (fragment).
This saves datapath cycles in driver and improves utilization
of SKB fragments in GRO.
- The fragments of a resulting GRO SKB follow the IP forwarding
assumption of equal-size fragments.
Some implementation details:
HW writes the packets to the beginning of a stride,
i.e. does not keep headroom. To overcome this we make sure we can
extend backwards and use the last bytes of stride i-1.
Extra care is needed for stride 0 as it has no preceding stride.
We make sure headroom bytes are available by shifting the buffer
pointer passed to HW by headroom bytes.
This configuration now becomes the default whenever the HW is capable.
Of course, this implies turning LRO off.
Performance testing:
ConnectX-5, single core, single RX ring, default MTU.
UDP packet rate, early drop in TC layer:
--------------------------------------------
| pkt size | before | after | ratio |
--------------------------------------------
| 1500byte | 4.65 Mpps | 5.96 Mpps | 1.28x |
| 500byte | 5.23 Mpps | 5.97 Mpps | 1.14x |
| 64byte | 5.94 Mpps | 5.96 Mpps | 1.00x |
--------------------------------------------
TCP streams: ~20% gain
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
When modifying the page mapping of a HW memory region
(via a UMR post), post the new values inline in the WQE,
instead of using a data pointer.
This is a micro-optimization, inline UMR WQEs of different
rings scale better in HW.
In addition, this obsoletes a few control flows and helps
delete ~50 LOC.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Do not busy-wait a pending UMR completion. Under high HW load,
busy-waiting a delayed completion would fully utilize the CPU core
and mistakenly indicate a SW bottleneck.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Gather the process of a UMR WQE post into one function,
in preparation for a downstream patch that inlines
the WQE data.
No functional change here.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
In Striding RQ, each WQE serves multiple packets
(hence called Multi-Packet WQE, MPWQE).
The size of an MPWQE is constant (currently 256KB).
Upon a ringparam set operation, we calculate the number of
MPWQEs per RQ. For this, we first need to determine the
number of packets that can reside within a single MPWQE.
In this patch we use the actual MTU size instead of ETH_DATA_LEN
for this calculation.
This implies that a change in MTU might require a change
in Striding RQ ring size.
In addition, this obsoletes some WQEs-to-packets translation
functions and helps delete ~60 LOC.
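For a feel of the numbers, a tiny stand-alone calculation using the constant
MPWQE size and the stride sizes quoted elsewhere in this series:
#include <stdio.h>

int main(void)
{
	unsigned int mpwqe_bytes = 256 * 1024;	/* constant MPWQE size */

	/* default MTU (1500) -> 2K strides; larger MTUs -> 4K strides */
	printf("packets per MPWQE: %u (2K stride), %u (4K stride)\n",
	       mpwqe_bytes / 2048, mpwqe_bytes / 4096);	/* 128 and 64 */
	return 0;
}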
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Knowing the MTU is required for the RQ creation flow.
By design, the channels creation flow is totally isolated
from priv/netdev, and can be completed with access only to
the channels params and mdev.
Adding the MTU to the channels params helps preserve that.
In addition, we save it in the RQ to make its access faster in
datapath checks.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Fix spelling mistake in debug message text.
"dettaching" -> "detaching"
Signed-off-by: Talat Batheesh <talatb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
With ConnectX-4, we expect the force teardown to fail in case
DC was enabled; therefore, change the message from an error to a warning.
Signed-off-by: Alaa Hleihel <alaa@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
1. This function is not used anywhere in the mlx5 driver.
2. It has a memcpy statement that makes no sense and produces a build
warning with gcc8:
drivers/net/ethernet/mellanox/mlx5/core/transobj.c: In function 'mlx5_core_query_xsrq':
drivers/net/ethernet/mellanox/mlx5/core/transobj.c:347:3: error: 'memcpy' source argument is the same as destination [-Werror=restrict]
Fixes: 01949d0109ee ("net/mlx5_core: Enable XRCs and SRQs when using ISSI > 0")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Instead of looking for the EQ of the CQ, remove that redundant code and
use the eq pointer stored in the cq struct.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Commit eed85ff4c0da ("PCI/DPC: Enable DPC only if AER is available") made
DPC control dependent on whether AER is enabled in the OS. However, it does
not take into account situations where the BIOS has not given the OS control
of AER:
acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
acpi PNP0A08:00: _OSC: platform does not support [AER]
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
I think it is better here not to enable DPC even if the capability is
available, because that would go against what the "Determination of DPC
Control" note in PCIe r4.0, sec 6.1.10 recommends.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
|
|
Because I will be leaving Samsung soon, update my reference e-mail
to etezian.org so that I remain reachable.
Signed-off-by: Andi Shyti <andi.shyti@samsung.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Executing the stmfts_power_on() function takes over 2 seconds, which
significantly slows down the boot and resume processes if the driver is
compiled in. Avoid this delay by forcing this driver to be probed
and suspended/resumed asynchronously.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
The primary interface for the touchpad device in the Thinkpad L570 is SMBus,
so ALPS overlooked the PS/2-interface firmware setting of the TrackStick, and
the devices shipped with the TrackStick OTP bit disabled.
The address 0xD7 contains device number information, so we can identify
the device by checking this value, but to access it we need to enable
Command mode and then re-enable the device. Devices shipped in the Thinkpad
L570 report either 0x0C or 0x1D as device numbers; if we see them, we assume
that the devices are DualPoints.
The same issue exists on Dell Latitude 7370.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=196929
Fixes: 646580f793 ("Input: ALPS - fix multi-touch decoding on SS4 plus touchpads")
Signed-off-by: Masaki Ota <masaki.ota@jp.alps.com>
Tested-by: Aaron Ma <aaron.ma@canonical.com>
Tested-by: Jonathan Liu <net147@gmail.com>
Tested-by: Jaak Ristioja <jaak@ristioja.ee>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Replace pci_find_ext_capability(..., PCI_EXT_CAP_ID_ERR) calls with
pci_dev->aer_cap.
pci_dev->aer_cap is initialized in pci_init_capabilities(), which happens
before any of these users of the AER Capability.
Signed-off-by: Frederick Lawler <fred@fredlawl.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
The platform may restrict the OS's use of PCIe services, e.g., via the ACPI
_OSC method. The user may use "pcie_ports=native" to force the port driver
to use PCIe services even if the platform asked us not to.
The "pcie_ports=native" parameter determines the setting of
pcie_ports_auto. Rename this to pcie_ports_native and reverse the
sense to simplify the code.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
"pcie_ports_auto" is only used inside the PCIe port driver itself, so
move it from include/linux/pci.h to portdrv.h so it's not visible to the
whole kernel.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
The "pcie_ports=auto" parameter set pcie_ports_disabled and pcie_ports_auto
to their compiled-in defaults, so specifying the parameter is the same as
not using it at all.
Remove the "pcie_ports=auto" parameter and update the documentation.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
7570a333d8b0 ("PCI: Add pcie_hp=nomsi to disable MSI/MSI-X for pciehp
driver") added the "pcie_hp=nomsi" kernel parameter to work around this
error on shutdown:
irq 16: nobody cared (try booting with the "irqpoll" option)
Pid: 1081, comm: reboot Not tainted 3.2.0 #1
...
Disabling IRQ #16
This happened on an unspecified system (possibly involving the Integrated
Device Technology, Inc. Device 807f bridge) where "an un-wanted interrupt
is generated when PCI driver switches from MSI/MSI-X to INTx while shutting
down the device."
The implication was that the device was buggy, but it is normal for a
device to use INTx after MSI/MSI-X have been disabled. The only problem
was that the driver was still attached and it wasn't prepared for INTx
interrupts. Prarit Bhargava fixed this issue with fda78d7a0ead ("PCI/MSI:
Stop disabling MSI/MSI-X in pci_device_shutdown()").
There is no automated way to set this parameter, so it's not very useful
for distributions or end users. It's really only useful for debugging, and
we have "pci=nomsi" for that purpose.
Revert 7570a333d8b0 to remove the "pcie_hp=nomsi" parameter.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
CC: MUNEDA Takahiro <muneda.takahiro@jp.fujitsu.com>
CC: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
CC: Prarit Bhargava <prarit@redhat.com>
|
|
portdrv_pci.c doesn't use anything from <linux/pci-aspm.h>. Remove the
include of it. No functional change intended.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Some PCIe features (AER, DPC, hotplug, PME) can be managed by either the
platform firmware or the OS, so the host bridge driver may have to request
permission from the platform before using them. On ACPI systems, this is
done by negotiate_os_control() in acpi_pci_root_add().
The PCIe port driver later uses pcie_port_platform_notify() and
pcie_port_acpi_setup() to figure out whether it can use these features.
But all we need is a single bit for each service, so these interfaces are
needlessly complicated.
Simplify this by adding bits in the struct pci_host_bridge to show when the
OS has permission to use each feature:
+ unsigned int native_aer:1; /* OS may use PCIe AER */
+ unsigned int native_hotplug:1; /* OS may use PCIe hotplug */
+ unsigned int native_pme:1; /* OS may use PCIe PME */
These are set when we create a host bridge, and the host bridge driver can
clear the bits corresponding to any feature the platform doesn't want us to
use.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
No driver registers for PCIE_PORT_SERVICE_VC, so remove it.
This removes the VC "service" files from /sys/bus/pci_express/devices,
e.g., 0000:07:00.0:pcie108, 0000:08:04.0:pcie208 (all the files that
contained "8" as the last digit of the "pcieXXX" part). The port driver
created these files for PCIe port devices that have a VC Capability.
Since this reduces PCIE_PORT_DEVICE_MAXSERVICES and moves DPC down into the
spot where VC used to be, the DPC sysfs files will now be named "pcieXX8".
I don't think there's anything useful userspace can do with those files, so
I hope nobody cares about these filenames.
There is no VC driver that calls pcie_port_service_register(), so there
never was a /sys/bus/pci_express/drivers/vc directory.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
The pcie_port_bus_type must be registered before drivers that depend on it
can be registered. Those drivers include:
pcied_init() # PCIe native hotplug driver
aer_service_init() # AER driver
dpc_service_init() # DPC driver
pcie_pme_service_init() # PME driver
Previously we registered pcie_port_bus_type from pcie_portdrv_init(), a
device_initcall. The callers of pcie_port_service_register() (above) are
also device_initcalls. This is fragile because the device_initcall
ordering depends on link order, which is not explicit.
Register pcie_port_bus_type from pci_driver_init() along with pci_bus_type.
This removes the link order dependency between portdrv and the pciehp, AER,
DPC, and PCIe PME drivers.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
|
|
The "pcie_ports=compat" kernel parameter sets pcie_ports_disabled, which is
intended to disable the PCIe port driver. But even when it was disabled,
we registered pcie_portdriver so we could work around a BIOS PME issue (see
fe31e69740ed ("PCI/PCIe: Clear Root PME Status bits early during system
resume")).
Registering the driver meant that the pcie_portdrv_probe() path called
pci_enable_device(), pci_save_state(), pm_runtime_set_autosuspend_delay(),
pm_runtime_use_autosuspend(), etc., even when the driver was disabled.
We've since moved the BIOS PME workaround from the port driver to the core,
so stop registering the PCIe port driver in compat mode.
This means "pcie_ports=compat" will now be basically the same as turning
off CONFIG_PCIEPORTBUS completely.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Per PCIe r4.0, sec 6.1.6, Root Complex Event Collectors can generate PME
interrupts on behalf of Root Complex Integrated Endpoints.
Linux does not currently enable PME interrupts from RC Event Collectors,
but fe31e69740ed ("PCI/PCIe: Clear Root PME Status bits early during system
resume") suggests PME interrupts may be enabled by the platform for ACPI-
based runtime wakeup.
Clear the PCIe PME Status bit for Root Complex Event Collectors during
resume, just like we already do for Root Ports.
If the BIOS enables PME interrupts for an event collector and neglects to
clear the status bit on resume, this change should fix the same bug as
fe31e69740ed (PMEs not working after waking from a sleep state), but for
Root Complex Integrated Endpoints.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Add pcie_bandwidth_capable() to compute the max link bandwidth supported by
a device, based on the max link speed and width, adjusted by the encoding
overhead.
The maximum bandwidth of the link is computed as:
max_link_width * max_link_speed * (1 - encoding_overhead)
2.5 and 5.0 GT/s links use 8b/10b encoding, which reduces the raw bandwidth
available by 20%; 8.0 GT/s and faster links use 128b/130b encoding, which
reduces it by about 1.5%.
The result is in Mb/s, i.e., megabits/second, of raw bandwidth.
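As a worked example of the formula above, a stand-alone calculation using the
encoding overheads just described:
#include <stdio.h>

int main(void)
{
	/* Per-lane raw rate in Mb/s, scaled by the encoding efficiency. */
	double gen1_lane = 2500.0 * 8.0 / 10.0;		/* 8b/10b    -> 2000  */
	double gen3_lane = 8000.0 * 128.0 / 130.0;	/* 128b/130b -> ~7877 */

	printf("x1  2.5 GT/s: %.0f Mb/s\n", gen1_lane);		/* 2000   */
	printf("x16 8.0 GT/s: %.0f Mb/s\n", 16.0 * gen3_lane);	/* 126031 */
	return 0;
}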
Signed-off-by: Tal Gilboa <talgi@mellanox.com>
[bhelgaas: add 16 GT/s, adjust for pcie_get_speed_cap() and
pcie_get_width_cap() signatures, don't export outside drivers/pci]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|