|
clk_ops are not supposed to change at runtime. All functions
working with clk_ops provided by <linux/clk-provider.h> work
with const clk_ops. So mark the non-const clk_ops as const.
Here, the function clk_reg_sysctrl() is used to initialize clk_init_data,
and clk_init_data works with a const clk_ops. So make the clk_ops argument
of clk_reg_sysctrl() const as well.
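A minimal sketch of the shape of the change, using a simplified helper (the
real clk_reg_sysctrl() takes additional register arguments):

  static struct clk *clk_reg_sysctrl(struct device *dev, const char *name,
                                     const struct clk_ops *clk_ops) /* was non-const */
  {
      struct clk_init_data init = {
          .name = name,
          .ops  = clk_ops,    /* clk_init_data.ops is already const-qualified */
      };
      struct clk_hw *hw = devm_kzalloc(dev, sizeof(*hw), GFP_KERNEL);

      if (!hw)
          return ERR_PTR(-ENOMEM);
      hw->init = &init;
      return devm_clk_register(dev, hw);
  }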
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
clk_ops are not supposed to change at runtime. All functions
working with clk_ops provided by <linux/clk-provider.h> work
with const clk_ops. So mark the non-const clk_ops as const.
Here, the function clk_reg_prcmu() is used to initialize clk_init_data,
and clk_init_data works with a const clk_ops. So make the clk_ops argument
of clk_reg_prcmu() const as well.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2017-08-31 (GRE Offloads support)
This series provides support for MPLS RSS, and for GRE TX offloads and
RSS.
The first patch, from Gal and Ariel, adds mlx5 driver support for the
ConnectX capability to perform IP version identification and matching in
order to distinguish between IPv4 and IPv6 without the need to specify the
encapsulation type, thus performing RSS on MPLS automatically without
specifying the MPLS ethertype. This patch also serves inner GRE IPv4/6
classification for inner GRE RSS.
The second patch, from Gal, adds TX offload support for GRE tunneled
packets by reporting the needed netdev features.
The third patch, from Gal, adds GRE inner RSS support by creating the
needed device resources (steering tables/rules and traffic classifiers) to
match GRE traffic and perform RSS hashing on the inner headers.
Improvement:
Testing 8 TCP streams bandwidth over GRE:
System: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
NIC: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
Before: 21.3 Gbps (Single RQ)
Now : 90.5 Gbps (RSS spread on 8 RQs)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fix a crash in the Linux PF driver when the BARs have been cleared or
de-programmed: fail early init (prior to mapping the BARs) if the BAR0 or
BAR1 registers are zero.
This situation can arise when the PF is added to a VM (PCI pass-through),
then a PF FLR is issued (in the VM). After this occurs, the BAR registers
will be zero. If we attempt to load the PF driver in the host
(after VM has been shutdown), the host can reset.
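A hedged sketch of the kind of early check described; the helper name is
illustrative, but the point is to bail out before the BARs are mapped:

  /* Refuse to continue if the BARs read back as zero, e.g. after a
   * PF FLR issued inside a VM cleared them. */
  static int octeon_pci_bars_look_sane(struct pci_dev *pdev)
  {
      if (!pci_resource_start(pdev, 0) || !pci_resource_start(pdev, 1))
          return -EINVAL;
      return 0;
  }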
Signed-off-by: Rick Farrington <ricardo.farrington@cavium.com>
Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
cxl keeps a driver use count, which is used with the hash memory model
on p8 to know when to upgrade local TLBIs to global and to trigger
callbacks to manage the MMU for PSL8.
If a process opens a context and closes without attaching or fails the
attachment, the driver use count is never decremented. As a
consequence, TLB invalidations remain global, even if there are no
active cxl contexts.
We should increment the driver use count when the process attaches to
the cxl adapter, not on open. The count is not needed before the adapter
starts using the context, and it is decremented on the detach path, so
this placement makes more sense.
It affects only the user api. The kernel api is already doing The
Right Thing.
Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # v4.2+
Fixes: 7bb5d91a4dda ("cxl: Rework context lifetimes")
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
We need to add a memory barrier so that the page table walk doesn't happen
before the cpumask is set and made visible to the other CPUs. We need to
use a sync here instead of an lwsync, because lwsync is not sufficient for
store/load ordering.
We also need to add an if (mm) check so that we do the right thing when
called with a kernel context. For a kernel context we have mm = NULL, and
for kernel addresses we can skip setting the mm cpumask.
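A minimal sketch of the fault-path change being described; the surrounding
cxl fault-handler code is not shown and the structure here is illustrative:

  if (mm && !cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
      cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
      /*
       * A full sync is needed: the store to the cpumask must be visible
       * before the hardware walks the page tables, and lwsync does not
       * order stores against later loads.
       */
      smp_mb();
  }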
Fixes: 0f4bc0932e ("powerpc/mm/cxl: Add the fault handling cpu to mm cpumask")
Cc: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Move the GET_FIELD and SET_FIELD macros to vas.h, so that VAS and other
users of VAS, including NX-842, can use those macros.
There is a lot of related code between the VAS/NX kernel drivers
and skiboot. For consistency, switch the order of parameters in
SET_FIELD to match the order in skiboot.
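For illustration only, a plausible shape of the two macros, with SET_FIELD
taking (mask, value, field-value) in skiboot's argument order; the exact
definitions in vas.h may differ:

  #define MASK_LSH(m)           (__builtin_ffsl(m) - 1)   /* illustrative helper */
  #define GET_FIELD(m, v)       (((v) & (m)) >> MASK_LSH(m))
  #define SET_FIELD(m, v, val)  \
      (((v) & ~(m)) | ((((typeof(v))(val)) << MASK_LSH(m)) & (m)))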
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Reviewed-by: Dan Streetman <ddstreet@ieee.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Currently, the FlexRM driver uses the txdone_poll method of the Linux
mailbox framework to model the send_data() callback. To achieve this, a
"last_pending_msg" pointer was introduced for each FlexRM ring to keep
track of a message that did not fit in the ring.
This patch updates the FlexRM driver to use the txdone_ack method instead
of txdone_poll, because txdone_poll is not efficient for FlexRM and
requires additional tracking in the driver.
Moving to the txdone_ack method also lets us remove the "last_pending_msg"
pointer and the last_tx_done() callback.
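A hedged sketch of what ACK-based TX-done looks like in the mailbox
framework; the FlexRM-specific wiring is omitted and the field names come
from the generic mailbox controller/client structures:

  /* Controller side: advertise neither txdone_irq nor txdone_poll, so the
   * framework selects TXDONE_BY_ACK and last_tx_done() is no longer needed. */
  mbox->controller.txdone_irq = false;
  mbox->controller.txdone_poll = false;

  /* Client side: the client signals completion itself. */
  cl->knows_txdone = true;
  rc = mbox_send_message(chan, msg);
  if (rc >= 0)
      mbox_client_txdone(chan, 0);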
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
|
|
Currently, we use the IDA library for managing IDs on a FlexRM ring.
The IDA library dynamically allocates memory for its underlying data
structures, which can cause a potential locking issue when
allocating/freeing IDs from flexrm_new_request() and
flexrm_process_completions().
To tackle this, we replace the IDA with a bitmap for each FlexRM ring
and protect the bitmap with the FlexRM ring lock.
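A hedged sketch of bitmap-based ID allocation under the ring lock; the
field names (requests_bmap, lock) and RING_MAX_REQ_COUNT are illustrative:

  spin_lock_irqsave(&ring->lock, flags);
  reqid = bitmap_find_free_region(ring->requests_bmap,
                                  RING_MAX_REQ_COUNT, 0);
  spin_unlock_irqrestore(&ring->lock, flags);
  if (reqid < 0)
      return -ENOSPC;

  /* ... later, when the request completes ... */
  spin_lock_irqsave(&ring->lock, flags);
  bitmap_release_region(ring->requests_bmap, reqid, 0);
  spin_unlock_irqrestore(&ring->lock, flags);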
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
|
|
The mask used in CMPL_START_ADDR_VALUE() should be 27 bits instead of
26 bits. The incorrect mask was causing completion writes to 40-bit
physical addresses to fail.
This patch fixes the mask used in the CMPL_START_ADDR_VALUE() macro.
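Roughly, the fix widens the mask by one bit; the shift shown here is
illustrative, the point being that 27 mask bits plus the shift must cover
a 40-bit physical address:

  /* 27-bit mask; together with the shift this covers 40-bit physical
   * addresses (the old 26-bit mask, 0x3ffffff, was one bit short). */
  #define CMPL_START_ADDR_VALUE(pa)   ((u64)((pa) >> 13) & 0x7ffffff)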
Fixes: dbc049eee730 ("mailbox: Add driver for Broadcom FlexRM ring manager")
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
|
|
This patch adds debugfs support to Broadcom FlexRM driver
so that we can see FlexRM ring state when any issue happens.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Vikram Prakash <vikram.prakash@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
|
|
This patch sets the IRQ affinity hint for the FlexRM ring IRQ at the time
of enabling the ring (i.e. in flexrm_startup()). The IRQ affinity hint
allows the FlexRM driver to distribute FlexRM ring IRQs across online
CPUs, so that all ring IRQs don't land on CPU0 by default.
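A hedged sketch of setting (and clearing) the affinity hint; the way the
target CPU is picked here is illustrative:

  /* flexrm_startup(): spread ring IRQs over online CPUs instead of CPU0 */
  cpu = cpumask_local_spread(ring->num, dev_to_node(dev));
  ret = irq_set_affinity_hint(ring->irq, get_cpu_mask(cpu));
  if (ret)
      dev_warn(dev, "failed to set IRQ affinity hint\n");

  /* flexrm_shutdown(): drop the hint before freeing the IRQ */
  irq_set_affinity_hint(ring->irq, NULL);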
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
|
|
After a relogin is successful, the "send_els_logo" flag needs to be
reinitialized. This allows the next relogin to happen successfully.
In target mode, this flag was not reset correctly, causing I/O failures
during reset recovery and port ON/OFF test cases from the initiator.
Signed-off-by: Quinn Tran <quinn.tran@cavium.com>
Signed-off-by: Sawan Chandak <sawan.chandak@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Call Trace:
[<ffffffff81341687>] dump_stack+0x6b/0xa4
[<ffffffff810c3e30>] ? print_irqtrace_events+0xd0/0xe0
[<ffffffff8109e3c3>] ___might_sleep+0x183/0x240
[<ffffffff8109e4d2>] __might_sleep+0x52/0x90
[<ffffffff811fe17b>] kmem_cache_alloc_trace+0x5b/0x300
[<ffffffff810c666b>] ? __lock_acquired+0x30b/0x420
[<ffffffffa0733c28>] qla2x00_alloc_fcport+0x38/0x2a0 [qla2xxx]
[<ffffffffa07217f4>] ? qla2x00_do_work+0x34/0x2b0 [qla2xxx]
[<ffffffff816cc82b>] ? _raw_spin_lock_irqsave+0x7b/0x90
[<ffffffffa072169a>] ? qla24xx_create_new_sess+0x3a/0x160 [qla2xxx]
[<ffffffffa0721723>] qla24xx_create_new_sess+0xc3/0x160 [qla2xxx]
[<ffffffff810c91ed>] ? trace_hardirqs_on+0xd/0x10
[<ffffffffa07218f8>] qla2x00_do_work+0x138/0x2b0 [qla2xxx]
Signed-off-by: Quinn Tran <quinn.tran@qlogic.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Signed-off-by: Darren Trap <darren.trap@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Since commit 7401bc18d1ee ("scsi: qla2xxx: Add FC-NVMe command
handling") we make use of 'struct nvmefc_fcp_req' in
qla24xx_nvme_iocb_entry() without including linux/nvme-fc-driver.h where
it is defined.
Add linux/nvme-fc-driver.h (and scsi/fc/fc_fs.h as nvme-fc-driver.h
needs the definition of 'struct fc_ba_rjt' from scsi/fc/fc_fs.h) to the
header files included by qla_isr.c.
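The added includes at the top of qla_isr.c look roughly like this:

  #include <scsi/fc/fc_fs.h>          /* struct fc_ba_rjt, needed by nvme-fc-driver.h */
  #include <linux/nvme-fc-driver.h>   /* struct nvmefc_fcp_req */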
Fixes: 7401bc18d1ee ("scsi: qla2xxx: Add FC-NVMe command handling")
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Acked-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
The value of "size" comes from the user. When we add "start + size" it
could lead to an integer overflow bug.
It means we vmalloc() a lot more memory than we had intended. I believe
that on 64 bit systems vmalloc() can succeed even if we ask it to
allocate huge 4GB buffers. So we would get memory corruption and likely
a crash when we call ha->isp_ops->write_optrom() and ->read_optrom().
Only root can trigger this bug.
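A hedged sketch of the kind of bounds check that avoids the overflow; the
field names follow the commit text and may not match the driver exactly:

  if (start > ha->optrom_size)
      return -EINVAL;
  /* Compare with a subtraction so that "start + size" can never wrap. */
  if (size > ha->optrom_size - start)
      size = ha->optrom_size - start;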
Link: https://bugzilla.kernel.org/show_bug.cgi?id=194061
Cc: <stable@vger.kernel.org>
Fixes: b7cc176c9eb3 ("[SCSI] qla2xxx: Allow region-based flash-part accesses.")
Reported-by: shqking <shqking@gmail.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
aac_convert_sgraw2() allocates memory with kmalloc() and returns -1 on
error, where it should return -ENOMEM. However, nobody was checking the
return value, so with this change -ENOMEM is propagated to the upper layer.
Signed-off-by: Nikola Pajkovsky <npajkovsky@suse.cz>
Reviewed-by: Raghava Aditya Renukunta <RaghavaAditya.Renukunta@microsemi.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
  unsigned long byte_count = 0;

  nseg = scsi_dma_map(scsicmd);
  if (nseg < 0)
      return nseg;
  if (nseg) {
      ...
  }
  return byte_count;

is equal to

  unsigned long byte_count = 0;

  nseg = scsi_dma_map(scsicmd);
  if (nseg <= 0)
      return nseg;
  ...
  return byte_count;
No other code has changed.
[mkp: fix checkpatch complaints]
Signed-off-by: Nikola Pajkovsky <npajkovsky@suse.cz>
Reviewed-by: Raghava Aditya Renukunta <RaghavaAditya.Renukunta@microsemi.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Fix a stupid indentation error, no rocket science here.
Signed-off-by: Nikola Pajkovsky <npajkovsky@suse.cz>
Reviewed-by: Raghava Aditya Renukunta <RaghavaAditya.Renukunta@microsemi.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
When storvsc sends I/O to Hyper-V, it may allocate a bigger buffer
descriptor for a large data payload that can't fit into the pre-allocated
buffer descriptor. This bigger buffer is freed on the return path.
If the I/O request to Hyper-V fails because the ring buffer is busy, the
buffer descriptor allocated by storvsc should also be freed.
[mkp: applied by hand]
Fixes: be0cf6ca301c ("scsi: storvsc: Set the tablesize based on the information given by the host")
Cc: <stable@vger.kernel.org>
Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
This reverts commit 7ad813f208533cebfcc32d3d7474dc1677d1b09a ("net: phy:
Correctly process PHY_HALTED in phy_stop_machine()") because it is
creating the possibility for a NULL pointer dereference.
David Daney provided the following call trace and diagram of events:
When ndo_stop() is called we call:
phy_disconnect()
+---> phy_stop_interrupts() implies: phydev->irq = PHY_POLL;
+---> phy_stop_machine()
| +---> phy_state_machine()
| +----> queue_delayed_work(): Work queued.
+--->phy_detach() implies: phydev->attached_dev = NULL;
Now at a later time the queued work does:
phy_state_machine()
+---->netif_carrier_off(phydev->attached_dev): Oh no! It is NULL:
CPU 12 Unable to handle kernel paging request at virtual address
0000000000000048, epc == ffffffff80de37ec, ra == ffffffff80c7c
Oops[#1]:
CPU: 12 PID: 1502 Comm: kworker/12:1 Not tainted 4.9.43-Cavium-Octeon+ #1
Workqueue: events_power_efficient phy_state_machine
task: 80000004021ed100 task.stack: 8000000409d70000
$ 0 : 0000000000000000 ffffffff84720060 0000000000000048 0000000000000004
$ 4 : 0000000000000000 0000000000000001 0000000000000004 0000000000000000
$ 8 : 0000000000000000 0000000000000000 00000000ffff98f3 0000000000000000
$12 : 8000000409d73fe0 0000000000009c00 ffffffff846547c8 000000000000af3b
$16 : 80000004096bab68 80000004096babd0 0000000000000000 80000004096ba800
$20 : 0000000000000000 0000000000000000 ffffffff81090000 0000000000000008
$24 : 0000000000000061 ffffffff808637b0
$28 : 8000000409d70000 8000000409d73cf0 80000000271bd300 ffffffff80c7804c
Hi : 000000000000002a
Lo : 000000000000003f
epc : ffffffff80de37ec netif_carrier_off+0xc/0x58
ra : ffffffff80c7804c phy_state_machine+0x48c/0x4f8
Status: 14009ce3 KX SX UX KERNEL EXL IE
Cause : 00800008 (ExcCode 02)
BadVA : 0000000000000048
PrId : 000d9501 (Cavium Octeon III)
Modules linked in:
Process kworker/12:1 (pid: 1502, threadinfo=8000000409d70000,
task=80000004021ed100, tls=0000000000000000)
Stack : 8000000409a54000 80000004096bab68 80000000271bd300 80000000271c1e00
0000000000000000 ffffffff808a1708 8000000409a54000 80000000271bd300
80000000271bd320 8000000409a54030 ffffffff80ff0f00 0000000000000001
ffffffff81090000 ffffffff808a1ac0 8000000402182080 ffffffff84650000
8000000402182080 ffffffff84650000 ffffffff80ff0000 8000000409a54000
ffffffff808a1970 0000000000000000 80000004099e8000 8000000402099240
0000000000000000 ffffffff808a8598 0000000000000000 8000000408eeeb00
8000000409a54000 00000000810a1d00 0000000000000000 8000000409d73de8
8000000409d73de8 0000000000000088 000000000c009c00 8000000409d73e08
8000000409d73e08 8000000402182080 ffffffff808a84d0 8000000402182080
...
Call Trace:
[<ffffffff80de37ec>] netif_carrier_off+0xc/0x58
[<ffffffff80c7804c>] phy_state_machine+0x48c/0x4f8
[<ffffffff808a1708>] process_one_work+0x158/0x368
[<ffffffff808a1ac0>] worker_thread+0x150/0x4c0
[<ffffffff808a8598>] kthread+0xc8/0xe0
[<ffffffff808617f0>] ret_from_kernel_thread+0x14/0x1c
The original motivation for this change came from Marc Gonzales, who
indicated that his network driver did not have its adjust_link callback
executed with phydev->link = 0 when he was expecting it.
PHYLIB has never made any such guarantee, because phy_stop() merely tells
the workqueue to move into the PHY_HALTED state, which happens
asynchronously.
Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reported-by: David Daney <ddaney.cavm@gmail.com>
Fixes: 7ad813f20853 ("net: phy: Correctly process PHY_HALTED in phy_stop_machine()")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The list below gives the VOUT_MODE command readouts supported by the
device, with their related VID protocols and digital-to-analog converter
steps:
VR12.0 mode, 5-mV DAC - 0x21
VR12.5 mode, 10-mV DAC - 0x22
VR13.0 mode, 10-mV DAC - 0x24
IMVP8 mode, 5-mV DAC - 0x25
VR13.0 mode, 5-mV DAC - 0x27
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
Mellanox, mlx5 fixes 2017-08-30
This series contains some misc fixes to the mlx5 driver.
Please pull and let me know if there's any problem.
For -stable:
Kernels >= 4.12
net/mlx5e: Fix CQ moderation mode not set properly
net/mlx5e: Don't override user RSS upon set channels
Kernels >= 4.11
net/mlx5e: Properly resolve TC offloaded ipv6 vxlan tunnel source address
Kernels >= 4.10
net/mlx5e: Fix DCB_CAP_ATTR_DCBX capability for DCBNL getcap
net/mlx5e: Check for qos capability in dcbnl_initialize
Kernels >= 4.9
net/mlx5e: Fix dangling page pointer on DMA mapping error
Kernels >= 4.8
net/mlx5e: Fix inline header size for small packets
net/mlx5: E-Switch, Unload the representors in the correct order
net/mlx5: Fix arm SRQ command for ISSI version 0
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Subdevices might depend on earlier registered subdevices for
communication purposes, as such they should be stopped in reverse order
so that said communication channel is removed after the dependent
subdevice is stopped.
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
The idr_lock should be released in the case that we don't find the given
channel.
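A hedged sketch of the missing unlock on the not-found path; the
surrounding lookup is simplified and field names are illustrative:

  spin_lock_irqsave(&glink->idr_lock, flags);
  channel = idr_find(&glink->rcids, rcid);
  if (!channel) {
      spin_unlock_irqrestore(&glink->idr_lock, flags);  /* was missing */
      dev_err(glink->dev, "channel not found\n");
      return;
  }
  spin_unlock_irqrestore(&glink->idr_lock, flags);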
Fixes: 44f6df922a26 ("rpmsg: glink: Fix idr_lock from mutex to spinlock")
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
Provide a basic driver to control Cortex M4 co-processor found
on NXP i.MX7D and i.MX6SX.
Currently it is able to resolve addresses between M4 and main CPU,
start and stop the co-processor. Other functionality is not provided
or test.
This driver was tested on NXP i.MX7D and expected to work on
i.MX6SX as well.
Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
BCM7278 has only 128 entries while BCM7445 has the full 256 entries set,
fix that.
Fixes: 7318166cacad ("net: dsa: bcm_sf2: Add support for ethtool::rxnfc")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
xennet_start_xmit() might copy an skb with an inappropriate layout
into a fresh one.
The old skb is freed, and at this point it is not a drop but a consume.
The new skb will then be either consumed or dropped.
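In code terms the change is essentially one call; a hedged sketch:

  /* The original skb was successfully replaced by a copy with a sane
   * layout, so account it as consumed rather than dropped. */
  dev_consume_skb_any(skb);   /* previously: dev_kfree_skb_any(skb) */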
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Introduce a new flow table and indirect TIRs which are used to hash the
inner packet headers of GRE tunneled packets.
When a GRE tunneled packet is received, the TTC flow table will match
the new IPv4/6->GRE rules which will forward it to the inner TTC table.
The inner TTC is similar to its counterpart outer TTC table, but
matching the inner packet headers instead of the outer ones (and does
not include the new IPv4/6->GRE rules).
The new rules will not add steering hops since they are added to an
already existing flow group which will be matched regardless of this
patch. Non-GRE traffic will not be affected.
The inner flow table will forward the packet to inner indirect TIRs
which hash the inner packet and thus result in RSS for the tunneled
packets.
Testing 8 TCP streams bandwidth over GRE:
System: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
NIC: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
Before: 21.3 Gbps (Single RQ)
Now : 90.5 Gbps (RSS spread on 8 RQs)
Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Add TX offloads support for GRE tunneled packets by reporting the needed
netdev features.
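A hedged sketch of reporting GRE offload features to the stack; the exact
feature set chosen by mlx5e may differ:

  netdev->hw_features          |= NETIF_F_GSO_GRE | NETIF_F_GSO_GRE_CSUM;
  netdev->hw_enc_features      |= NETIF_F_GSO_GRE | NETIF_F_GSO_GRE_CSUM;
  netdev->gso_partial_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_GRE_CSUM;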
Signed-off-by: Gal Pressman <galp@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
This change adds the ability for flow steering to classify IPv4/6
packets with MPLS tag (Ethertype 0x8847 and 0x8848) as standard IP
packets and hit IPv4/6 classification steering rules.
Since IP packets with MPLS tag header have MPLS ethertype, they
missed the IPv4/6 ethertype rule and ended up hitting the default
filter forwarding all the packets to the same single RQ (No RSS).
Since our device is able to look past the MPLS tag and identify the next
protocol, we introduce this solution, which replaces ethertype matching
with the device's capability to perform IP version identification and
matching in order to distinguish between IPv4 and IPv6.
Therefore, when the driver performs flow steering configuration on the
device, it uses IP version matching in IP-classified rules instead of
ethertype matching, which causes the relevant MPLS-tagged packets to hit
these rules as well.
If the device doesn't support IP version matching, the driver falls back
to the legacy ethertype matching in the steering, as before.
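A hedged sketch of what the rule setup looks like with IP-version matching
and the ethertype fallback; the MLX5_SET usage follows the upstream
match-parameter macros, but the capability check and surrounding code are
simplified:

  if (use_ip_version_match) {   /* device capability check, not shown */
      MLX5_SET(fte_match_param, spec->match_criteria,
               outer_headers.ip_version, 0xf);
      MLX5_SET(fte_match_param, spec->match_value,
               outer_headers.ip_version, 4);   /* 4 for IPv4 rules, 6 for IPv6 */
  } else {
      MLX5_SET(fte_match_param, spec->match_criteria,
               outer_headers.ethertype, 0xffff);
      MLX5_SET(fte_match_param, spec->match_value,
               outer_headers.ethertype, ETH_P_IP);   /* or ETH_P_IPV6 */
  }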
Signed-off-by: Gal Pressman <galp@mellanox.com>
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
|
|
Trivial fix to spelling mistake in DP_NOTICE message
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch removes the wrong check being done on the phy device returned
by the mdiobus_get_phy() function; this function never returns error
pointers.
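The correct pattern, sketched (mdiobus_get_phy() returns either a valid
phy_device pointer or NULL, never an ERR_PTR; the error code and message
here are illustrative):

  phydev = mdiobus_get_phy(mdio, phy_addr);
  if (!phydev) {   /* not IS_ERR(phydev) */
      dev_err(&mdio->dev, "no PHY found at address %d\n", phy_addr);
      return -EIO;
  }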
Fixes: 256727da7395 ("net: hns3: Add MDIO support to HNS3 Ethernet Driver for hip08 SoC")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Make this const as it is never modified.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds logic to reconfigure the comphy/GoP/MAC when the link
state is updated at runtime. This is very useful on boards where many
link speeds are supported: depending on what is negotiated, the PPv2
driver will automatically reconfigure the link between the PHY and the
MAC.
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When using the XLG MAC, it does not make sense to force the GMAC autoneg
parameters. This patch adds checks to only set the GMAC autoneg
parameters when needed (i.e. when not using the XLG MAC).
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When the link status changes, the phylib calls the link_event function
in the mvpp2 driver. Before this patch only the egress/ingress transmit
was enabled/disabled. This patch adds more functionality to the link
status management code by enabling/disabling the port per-cpu
interrupts, and the port itself. The queues are now stopped as well, and
the netif carrier helpers are called.
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The link_event function is somewhat complicated. This cosmetic patch
simplifies it.
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On some platforms, the comphy is between the MAC GoP and the PHYs. The
mvpp2 driver currently relies on the firmware/bootloader to configure
the comphy. As a comphy driver was added to the generic PHY framework,
this patch uses it in the mvpp2 driver to configure the comphy at boot
time to avoid relying on the bootloader.
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
On the CP110 unit, which can be found on various Marvell platforms such
as the 7k and 8k (currently), a comphy (common PHYs) hardware block can
be found. This block provides a number of PHYs which can be used in
various modes by other controllers (network, SATA ...). These common
PHYs must be configured for the controllers using them to work correctly,
either at boot time or at runtime when the mode used needs to be switched.
This patch adds a driver for this comphy hardware block, providing
callbacks for its PHYs so that consumers can configure the modes used.
As of this commit, two modes are supported by the comphy driver: sgmii
and 10gkr.
Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
"Two fixes (a vmwgfx and core drm fix) in the queue for 4.13 final,
hopefully that is it"
* tag 'drm-fixes-for-v4.13-rc8' of git://people.freedesktop.org/~airlied/linux:
drm/vmwgfx: Fix F26 Wayland screen update issue
drm/bridge/sii8620: Fix memory corruption
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"Three minor fixes: a NULL deref in qedf, an off by one in sg and a fix
to IPR to prevent an error on initialisation"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
scsi: qedf: Fix a potential NULL pointer dereference
scsi: sg: off by one in sg_ioctl()
scsi: ipr: Set no_report_opcodes for RAID arrays
|
|
We should not hold a spinlock while pushing the skb into the networking
stack, so move the call to netif_rx_ni out of the critical region to where
we have dropped the spinlock.
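A hedged sketch of the restructuring; the lock and the work done under it
are illustrative:

  spin_lock_irqsave(&priv->rx_lock, flags);
  /* ... match the timestamp to the skb while holding the lock ... */
  spin_unlock_irqrestore(&priv->rx_lock, flags);

  /* netif_rx_ni() may process the packet immediately, including softirq
   * work, so only call it after the spinlock has been released. */
  netif_rx_ni(skb);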
Signed-off-by: Stefan Sørensen <stefan.sorensen@spectralink.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
VMD child devices must use the VMD endpoint's ID as the requester. Because
of this, there needs to be a way to link the parent VMD endpoint's IOMMU
group and associated mappings to the VMD child devices such that attaching
and detaching child devices modify the endpoint's mappings, while
preventing early detaching on a singular device removal or unbinding.
The reassignment of individual VMD child devices to VMs is outside
the scope of VMD, but may be implemented in the future. For now it is best
to prevent any such attempts.
Prevent VMD child devices from returning an IOMMU, which prevents it from
exposing an iommu_group sysfs directory and allowing subsequent binding by
userspace-access drivers such as VFIO.
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
VMD currently only exists for Intel x86 products, so move the VMD quirk to
arch/x86.
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
VMD hardware has to share its vectors among child devices in its PCI
domain so we should allocate as many as possible rather than just ones
that can be affinitized.
pci_alloc_irq_vectors_affinity() limits the number of affinitized IRQs to
the number of present CPUs (see irq_calc_affinity_vectors()). But we'd
prefer to have more vectors, even if they aren't distributed across the
CPUs, so use pci_alloc_irq_vectors() instead.
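A hedged sketch of the allocation change (variable names are illustrative):

  /* Allocate as many MSI-X vectors as the endpoint exposes; no affinity
   * spreading, so the count is not capped at the number of present CPUs. */
  vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,
                                          PCI_IRQ_MSIX);
  if (vmd->msix_count < 0)
      return vmd->msix_count;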
Reported-by: Brad Goodman <Bradley.Goodman@dell.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
[bhelgaas: add irq_calc_affinity_vectors() reference to changelog]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
amba_id entries are not supposed to change at runtime. All functions that
work with amba_id take a const pointer, so mark the non-const structs as
const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
When the user unbinds the last device of a group from a vfio bus
driver, the devices within that group should be available for other
purposes. We currently have a race that makes this generally, but
not always true. The device can be unbound from the vfio bus driver,
but remaining IOMMU context of the group attached to the container
can result in errors as the next driver configures DMA for the device.
Wait for the group to be detached from the IOMMU backend before
allowing the bus driver remove callback to complete.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
In vfio_iommu_group_get() we want to increase the reference
count of the iommu group.
In the no-iommu case, the group does not exist yet and is allocated.
iommu_group_add_device() increases the group ref count, but we then
call iommu_group_put(), which decrements it again. This leads to a
"refcount_t: underflow" WARN_ON.
Only decrement the ref count if iommu_group_add_device() fails.
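A hedged sketch of the corrected no-iommu path; the surrounding function
is simplified:

  group = iommu_group_alloc();
  if (IS_ERR(group))
      return NULL;

  ret = iommu_group_add_device(group, dev);
  if (ret) {
      iommu_group_put(group);   /* only drop the reference on failure now */
      return NULL;
  }
  return group;                 /* keep the elevated reference for the caller */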
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|