Age | Commit message | Author |
|
When the watchdog decides to panic, it takes the lock and double
checks everything (to avoid races with the CPU being unstuck or
panic()ed by something else).
The exit label was misplaced, so the all-CPUs backtrace and watchdog
panic were still triggered even when that double check found the
condition to have been resolved.
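A minimal sketch of the intended flow (the lock helpers exist in the
powerpc watchdog; the condition and backtrace helpers here are
hypothetical):

  static void watchdog_smp_panic_sketch(int cpu)
  {
      unsigned long flags;

      wd_smp_lock(&flags);
      /* Double check under the lock. */
      if (lockup_resolved(cpu))          /* hypothetical helper */
          goto out;                      /* exit label now before the panic path */

      trigger_allcpu_backtrace();        /* hypothetical helper */
      if (hardlockup_panic)
          panic("Hard LOCKUP");
  out:
      wd_smp_unlock(&flags);
  }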
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Some code can go into a tight loop calling touch_nmi_watchdog (e.g.,
the stop_machine CPU hotplug code). This can cause contention on the
watchdog locks, particularly if all CPUs with the watchdog enabled are
spinning in such loops.
Avoid this storm of activity by running the watchdog timer callback
from this path if we have exceeded the timer period since it was last
run.
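The touch path then looks roughly like this (a sketch; the helper and
variable names approximate the powerpc watchdog code):

  void arch_touch_nmi_watchdog(void)
  {
      unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
      int cpu = smp_processor_id();

      /* Run the timer callback inline if the period has already elapsed. */
      if (get_tb() - per_cpu(wd_timer_tb, cpu) >= ticks)
          watchdog_timer_interrupt(cpu);
  }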
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
- Hard-disable interrupts before taking the lock, which prevents
soft-NMI re-entrancy and therefore can prevent deadlocks.
- Use raw_ variants of local_irq_disable to avoid irq debugging.
- When the lock is contended, spin at low SMT priority, using
loads only, and with interrupts enabled (where possible).
Some stalls have been noticed at high loads that go away with improved
locking. There should not be so much locking contention in the first
place (which is addressed in a subsequent patch), but locking should
still be improved.
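A sketch of the resulting lock primitive (approximating the powerpc
watchdog code; spin_until_cond() spins with loads only at low SMT
priority):

  static inline void wd_smp_lock(unsigned long *flags)
  {
      raw_local_irq_save(*flags);
      hard_irq_disable();    /* a soft-NMI cannot re-enter and deadlock */

      while (unlikely(test_and_set_bit_lock(0, &__wd_smp_lock))) {
          /* Contended: spin with interrupts enabled where possible. */
          raw_local_irq_restore(*flags);
          spin_until_cond(!test_bit(0, &__wd_smp_lock));
          raw_local_irq_save(*flags);
          hard_irq_disable();
      }
  }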
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
When the NMI IPI lock is contended, spin at low SMT priority, using
loads only, and with interrupts enabled (where possible). This
improves behaviour under high contention (e.g., a system crash when
a number of CPUs are trying to enter the debugger).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
In commit 05a4a9527931 ("kernel/watchdog: split up config options"),
CONFIG_LOCKUP_DETECTOR was split into two separate config options,
CONFIG_HARDLOCKUP_DETECTOR and CONFIG_SOFTLOCKUP_DETECTOR.
Our defconfigs still have CONFIG_LOCKUP_DETECTOR=y, but that is no longer
user selectable, and we don't mention the new options, so we end up with
none of them enabled.
So update the defconfigs to turn on the new SOFT and HARD options, the
end result being the same as what we had previously.
Fixes: 05a4a9527931 ("kernel/watchdog: split up config options")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
Add a wait/retry version of ibnl_unicast, ibnl_unicast_wait,
and modify ibnl_unicast to not wait/retry. This eliminates
the undesirable wait for future users of ibnl_unicast.
Change Portmapper calls originating from kernel to user-space
to use ibnl_unicast_wait and take advantage of the wait/retry
logic in netlink_unicast.
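A sketch of the split (assuming the core's static netlink socket is
called nls; bodies illustrative):

  int ibnl_unicast(struct sk_buff *skb, struct nlmsghdr *nlh, __u32 pid)
  {
      /* Non-blocking: nlmsg_unicast() sends with MSG_DONTWAIT. */
      return nlmsg_unicast(nls, skb, pid);
  }

  int ibnl_unicast_wait(struct sk_buff *skb, struct nlmsghdr *nlh, __u32 pid)
  {
      /* Blocking: let netlink_unicast() wait/retry on a full socket buffer. */
      int err = netlink_unicast(nls, skb, pid, 0);

      return (err < 0) ? err : 0;
  }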
Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
Signed-off-by: Chien Tin Tung <chien.tin.tung@intel.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
|
|
It was reported that the sha1 AVX2 function (sha1_transform_avx2) reads
ahead beyond its intended data, causing a crash if the next block is
beyond a page boundary:
http://marc.info/?l=linux-crypto-vger&m=149373371023377
This patch makes sure that there is no overflow for any buffer length.
It passes the tests written by Jan Stancek that revealed this problem:
https://github.com/jstancek/sha1-avx2-crash
I have re-enabled sha1-avx2 by reverting commit
b82ce24426a4071da9529d726057e4e642948667
Cc: <stable@vger.kernel.org>
Fixes: b82ce24426a4 ("crypto: sha1-ssse3 - Disable avx2")
Originally-by: Ilya Albrekht <ilya.albrekht@intel.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Megha Dey <megha.dey@linux.intel.com>
Reported-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In commit 0f987e25cb8a, the source processing was moved in front of
the destination processing, but the error handling path was not
updated accordingly.
Free resources in the correct order to avoid leaks.
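The general pattern (illustrative names, not the driver's actual code):
unwind in the reverse order of setup, so the reordered
source/destination processing gets a matching error path:

  static int setup_buffers(struct my_req *req)
  {
      /* Source is now processed first ... */
      req->src = setup_src(req);
      if (!req->src)
          return -ENOMEM;

      /* ... so a destination failure must release the source. */
      req->dst = setup_dst(req);
      if (!req->dst) {
          free_buf(req->src);
          return -ENOMEM;
      }

      return 0;
  }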
Cc: <stable@vger.kernel.org>
Fixes: 0f987e25cb8a ("crypto: ixp4xx - Fix false lastlen uninitialised warning")
Reported-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
|
|
Fix lockdep splat introduced in v4.13-rc4.
[ 266.297226] ------------[ cut here ]------------
[ 266.300078] WARNING: CPU: 2 PID: 176 at /mnt/src/jaja/git/tf300t/include/linux/blkdev.h:657 mmc_blk_remove_req+0xd0/0xe8 [mmc_block]
[ 266.302937] Modules linked in: mmc_block(-) sdhci_tegra sdhci_pltfm sdhci pwrseq_simple pwrseq_emmc mmc_core
[ 266.305941] CPU: 2 PID: 176 Comm: rmmod Tainted: G W 4.13.0-rc4mq-00208-gb691e67724b8-dirty #694
[ 266.308852] Hardware name: NVIDIA Tegra SoC (Flattened Device Tree)
[ 266.311719] [<b011144c>] (unwind_backtrace) from [<b010ca54>] (show_stack+0x18/0x1c)
[ 266.314664] [<b010ca54>] (show_stack) from [<b062e3f4>] (dump_stack+0x84/0x98)
[ 266.317644] [<b062e3f4>] (dump_stack) from [<b01214f4>] (__warn+0xf4/0x10c)
[ 266.320542] [<b01214f4>] (__warn) from [<b01215d4>] (warn_slowpath_null+0x28/0x30)
[ 266.323534] [<b01215d4>] (warn_slowpath_null) from [<af067858>] (mmc_blk_remove_req+0xd0/0xe8 [mmc_block])
[ 266.326568] [<af067858>] (mmc_blk_remove_req [mmc_block]) from [<af068f40>] (mmc_blk_remove_parts.constprop.6+0x50/0x64 [mmc_block])
[ 266.329678] [<af068f40>] (mmc_blk_remove_parts.constprop.6 [mmc_block]) from [<af0693b8>] (mmc_blk_remove+0x24/0x140 [mmc_block])
[ 266.332894] [<af0693b8>] (mmc_blk_remove [mmc_block]) from [<af0052ec>] (mmc_bus_remove+0x20/0x28 [mmc_core])
[ 266.336198] [<af0052ec>] (mmc_bus_remove [mmc_core]) from [<b046aa64>] (device_release_driver_internal+0x164/0x200)
[ 266.339367] [<b046aa64>] (device_release_driver_internal) from [<b046ab54>] (driver_detach+0x40/0x74)
[ 266.342537] [<b046ab54>] (driver_detach) from [<b046982c>] (bus_remove_driver+0x68/0xdc)
[ 266.345660] [<b046982c>] (bus_remove_driver) from [<af06ad40>] (mmc_blk_exit+0xc/0x2cc [mmc_block])
[ 266.348875] [<af06ad40>] (mmc_blk_exit [mmc_block]) from [<b01aee30>] (SyS_delete_module+0x1c4/0x254)
[ 266.352068] [<b01aee30>] (SyS_delete_module) from [<b0108480>] (ret_fast_syscall+0x0/0x34)
[ 266.355308] ---[ end trace f68728a0d3053b72 ]---
Fixes: 7c84b8b43d3d ("mmc: block: bypass the queue even if usage is present for hotplug")
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
snd_pcm_ops are not supposed to change at runtime. All functions
working with snd_pcm_ops provided by <sound/pcm.h> work with
const snd_pcm_ops. So mark the non-const structs as const.
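Per driver the change is mechanical, e.g. (hypothetical driver names):

  /* The ops never change at runtime, so the table can live in rodata. */
  static const struct snd_pcm_ops foo_pcm_ops = {
      .open  = foo_pcm_open,
      .close = foo_pcm_close,
      .ioctl = snd_pcm_lib_ioctl,
  };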
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Link: https://patchwork.freedesktop.org/patch/msgid/44c0ccab8b5658d17a3ed553e721136b53d95521.1502264156.git.arvind.yadav.cs@gmail.com
|
|
Make these structures const as they are only stored in the funcs field
of drm_bridge structure, which is of type const.
Done using Coccinelle.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Archit Taneja <architt@codeaurora.org>
Link: https://patchwork.freedesktop.org/patch/msgid/1502207650-20029-1-git-send-email-bhumirks@gmail.com
|
|
When an RX block-ack session times out, the firmware, which offloads
RX reordering but not the BA session negotiation, stops the session
but doesn't send a DELBA. This causes the session to remain
active in the remote device, so no more BA sessions will be
established, causing a severe throughput degradation due to the lack
of aggregation.
Use the new ieee80211_rx_ba_timer_expired API when the BA session timer
expires, since this will tear down the BA session and also send a DELBA.
The API used previously is intended for drivers that offload the
addba/delba negotiation but not the rx reordering, while our driver
does the opposite.
This patch depends on "mac80211: add api to start ba session timer
expired flow".
Signed-off-by: Naftali Goldstein <naftali.goldstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Some drivers handle rx buffer reordering internally (and by extension
also handle the rx BA session timer internally), but do not offload the
addba/delba negotiation.
Add an API for these drivers to properly tear down the BA session,
including sending a DELBA.
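The new entry point is declared roughly as follows (a sketch of the
intended signature):

  /*
   * Tear down the Rx BA session for (addr, tid) and, unlike the
   * offload-oriented API, also send a DELBA to the peer.
   */
  void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif,
                                     const u8 *addr, unsigned int tid);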
Signed-off-by: Naftali Goldstein <naftali.goldstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
of_irq_get() may return 0 as well as a negative error number on failure,
while the driver only checks for negative values. The driver would then
call request_irq(0, ...) in tegra_adma_alloc_chan_resources() and never get
a valid channel interrupt.
Check for 'tdc->irq <= 0' instead and return -ENXIO from the driver's probe
iff of_irq_get() returned 0.
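The check then becomes (sketch; the surrounding probe loop is elided):

  tdc->irq = of_irq_get(pdev->dev.of_node, i);
  if (tdc->irq <= 0)
      return tdc->irq ?: -ENXIO;    /* 0 from of_irq_get() maps to -ENXIO */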
Fixes: f46b195799b5 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
When we start an Rx A-MPDU session, we first get the AddBA
request, then we send an ADD_STA command to the firmware
that will reply with a BAID which is a hardware resource
that tracks the BA session.
This BAID will appear on each and every frame that we get
from the firmware until the A-MPDU session is torn down.
In the Rx path, we look at this BAID to manage the
reordering buffer.
This flow is inherently racy since the hardware will start
to put the BAID in the frames it receives even if the
firmware hasn't sent the response to the ADD_STA command.
This basically means that the driver can get frames with
a valid BAID that it doesn't know yet.
When that happens, the driver used to WARN.
Fix this by simply not WARNing in this case. Once the driver
learns about the BAID, it will initialise the relevant
states, and the next frame with a valid BAID will refresh
them.
Fixes: b915c10174fb ("iwlwifi: mvm: add reorder buffer per queue")
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
In AP mode, if a station is removed just as it is adding a new stream,
the queue in question will remain stopped and no more TX will happen
in this queue, leading to connection failures and other problems.
This is because under DQA, when tx is deferred because a queue needs
to be allocated, the mac queue for that TID is stopped until the new
stream is added. If at this point the station that this stream
belongs to is removed, all the deferred tx frames are purged, but the
mac queue is not restarted. As a result, all following tx on this
queue will not be transmitted.
Fix this by starting the relevant mac queues when the deferred tx
frames are purged.
Fixes: 24afba7690e4 ("iwlwifi: mvm: support bss dynamic alloc/dealloc of queues")
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
skb_warn_bad_offload triggers a warning when an skb enters the GSO
stack at __skb_gso_segment that does not have CHECKSUM_PARTIAL
checksum offload set.
Commit b2504a5dbef3 ("net: reduce skb_warn_bad_offload() noise")
observed that SKB_GSO_DODGY producers can trigger the check and
that passing those packets through the GSO handlers will fix it
up. But the software UFO handler sets ip_summed to
CHECKSUM_NONE.
When __skb_gso_segment is called from the receive path, this
triggers the warning again.
Make UFO set CHECKSUM_UNNECESSARY instead of CHECKSUM_NONE. On
Tx these two are equivalent. On Rx, this better matches the
skb state (checksum computed), as CHECKSUM_NONE here means no
checksum computed.
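In the UFO fragmenting path this amounts to (illustrative placement):

  /* The full UDP checksum was just computed in software, so mark it
   * as already verified rather than absent. */
  skb->ip_summed = CHECKSUM_UNNECESSARY;    /* was CHECKSUM_NONE */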
See also this thread for context:
http://patchwork.ozlabs.org/patch/799015/
Fixes: b2504a5dbef3 ("net: reduce skb_warn_bad_offload() noise")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
qmi_wwan_disconnect is called twice when disconnecting devices with
separate control and data interfaces. The first invocation will set
the interface data to NULL for both interfaces to flag that the
disconnect has been handled. But the matching NULL check was left
out when qmi_wwan_disconnect was added, resulting in this oops:
usb 2-1.4: USB disconnect, device number 4
qmi_wwan 2-1.4:1.6 wwp0s29u1u4i6: unregister 'qmi_wwan' usb-0000:00:1d.0-1.4, WWAN/QMI device
BUG: unable to handle kernel NULL pointer dereference at 00000000000000e0
IP: qmi_wwan_disconnect+0x25/0xc0 [qmi_wwan]
PGD 0
P4D 0
Oops: 0000 [#1] SMP
Modules linked in: <stripped irrelevant module list>
CPU: 2 PID: 33 Comm: kworker/2:1 Tainted: G E 4.12.3-nr44-normandy-r1500619820+ #1
Hardware name: LENOVO 4291LR7/4291LR7, BIOS CBET4000 4.6-810-g50522254fb 07/21/2017
Workqueue: usb_hub_wq hub_event [usbcore]
task: ffff8c882b716040 task.stack: ffffb8e800d84000
RIP: 0010:qmi_wwan_disconnect+0x25/0xc0 [qmi_wwan]
RSP: 0018:ffffb8e800d87b38 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000001 RSI: ffff8c8824f3f1d0 RDI: ffff8c8824ef6400
RBP: ffff8c8824ef6400 R08: 0000000000000000 R09: 0000000000000000
R10: ffffb8e800d87780 R11: 0000000000000011 R12: ffffffffc07ea0e8
R13: ffff8c8824e2e000 R14: ffff8c8824e2e098 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8c8835300000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000000e0 CR3: 0000000229ca5000 CR4: 00000000000406e0
Call Trace:
? usb_unbind_interface+0x71/0x270 [usbcore]
? device_release_driver_internal+0x154/0x210
? qmi_wwan_unbind+0x6d/0xc0 [qmi_wwan]
? usbnet_disconnect+0x6c/0xf0 [usbnet]
? qmi_wwan_disconnect+0x87/0xc0 [qmi_wwan]
? usb_unbind_interface+0x71/0x270 [usbcore]
? device_release_driver_internal+0x154/0x210
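The missing guard looks roughly like this (simplified sketch of
qmi_wwan_disconnect()):

  static void qmi_wwan_disconnect(struct usb_interface *intf)
  {
      struct usbnet *dev = usb_get_intfdata(intf);

      /* Already handled via the sibling control/data interface:
       * the first invocation cleared the data for both interfaces. */
      if (!dev)
          return;

      /* ... normal disconnect handling follows ... */
  }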
Reported-and-tested-by: Nathaniel Roach <nroach44@gmail.com>
Fixes: c6adf77953bc ("net: usb: qmi_wwan: add qmap mux protocol support")
Cc: Daniele Palmas <dnlplm@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Commit e5dadc65f9e0 ("ppp: Fix false xmit recursion detect with two ppp
devices") dropped the xmit_recursion counter incrementation in
ppp_channel_push() and relied on ppp_xmit_process() for this task.
But __ppp_channel_push() can also send packets directly (using the
.start_xmit() channel callback), in which case the xmit_recursion
counter isn't incremented anymore. If such packets get routed back to
the parent ppp unit, ppp_xmit_process() won't notice the recursion and
will call ppp_channel_push() on the same channel, effectively creating
the deadlock situation that the xmit_recursion mechanism was supposed
to prevent.
This patch re-introduces the xmit_recursion counter incrementation in
ppp_channel_push(). Since the xmit_recursion variable is now part of
the parent ppp unit, incrementation is skipped if the channel doesn't
have any. This is fine because only packets routed through the parent
unit may enter the channel recursively.
Finally, we have to ensure that pch->ppp is not going to be modified
while executing ppp_channel_push(). Instead of taking this lock only
while calling ppp_xmit_process(), we now have to hold it for the full
ppp_channel_push() execution. This respects the ppp locks ordering
which requires locking ->upl before ->downl.
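The resulting function is roughly (a sketch following the description
above):

  static void ppp_channel_push(struct channel *pch)
  {
      read_lock_bh(&pch->upl);    /* pin pch->ppp for the whole call */
      if (pch->ppp) {
          (*this_cpu_ptr(pch->ppp->xmit_recursion))++;
          __ppp_channel_push(pch);
          (*this_cpu_ptr(pch->ppp->xmit_recursion))--;
      } else {
          __ppp_channel_push(pch);
      }
      read_unlock_bh(&pch->upl);
  }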
Fixes: e5dadc65f9e0 ("ppp: Fix false xmit recursion detect with two ppp devices")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In commit 7e3f2952eeb1 ("rds: don't let RDS shutdown a connection
while senders are present"), refilling the receive queue was removed
from rds_ib_recv(), along with the increment of
s_ib_rx_refill_from_thread.
Commit 73ce4317bf98 ("RDS: make sure we post recv buffers")
re-introduces filling the receive queue from rds_ib_recv(), but does
not add the statistics counter. rds_ib_recv() was later renamed to
rds_ib_recv_path().
This commit reintroduces the statistics counting of
s_ib_rx_refill_from_thread and s_ib_rx_refill_from_cq.
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Knut Omang <knut.omang@oracle.com>
Reviewed-by: Wei Lin Guay <wei.lin.guay@oracle.com>
Reviewed-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With the new TCP_FASTOPEN_CONNECT socket option, there is a possibility
to call tcp_connect() while socket sk_dst_cache is either NULL
or invalid.
+0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 4
+0 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
+0 setsockopt(4, SOL_TCP, TCP_FASTOPEN_CONNECT, [1], 4) = 0
+0 connect(4, ..., ...) = 0
<< sk->sk_dst_cache becomes obsolete, or even set to NULL >>
+1 sendto(4, ..., 1000, MSG_FASTOPEN, ..., ...) = 1000
We need to refresh the route otherwise bad things can happen,
especially when syzkaller is running on the host :/
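One way to guarantee a usable route is to revalidate it before building
the SYN (a sketch, not necessarily the exact fix):

  /* At the top of tcp_connect(): refresh sk_dst_cache if needed. */
  if (inet_csk(sk)->icsk_af_ops->rebuild_header(sk))
      return -EHOSTUNREACH;    /* routing failure or similar */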
Fixes: 19f6d3f3c8422 ("net/tcp-fastopen: Add new API support")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Wei Wang <weiwan@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Wei Wang <weiwan@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that xt_tgchk_param par in ipt_init_target is a local variable,
par.net is not initialized there. Later, when xt_check_target
calls the target's checkentry, which may access par.net, this
causes a kernel panic.
Jaroslav found this panic when running:
# ip link add TestIface type dummy
# tc qd add dev TestIface ingress handle ffff:
# tc filter add dev TestIface parent ffff: u32 match u32 0 0 \
action xt -j CONNMARK --set-mark 4
This patch passes the net parameter into ipt_init_target and sets
par.net properly there.
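Sketch of the relevant part of ipt_init_target() (xt_tgchk_param
fields; target lookup and entry setup elided):

  par.net       = net;        /* now supplied by the caller */
  par.table     = table;
  par.target    = target;
  par.targinfo  = t->data;
  par.hook_mask = hook;
  par.family    = NFPROTO_IPV4;

  ret = xt_check_target(&par, t->u.target_size - sizeof(*t), 0, false);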
v1->v2:
As Wang Cong pointed out, I missed ipt_net_id != xt_net_id, so fix
it by also passing net_id to __tcf_ipt_init.
v2->v3:
Missed the fixes tag, so add it.
Fixes: ecb2421b5ddf ("netfilter: add and use nf_ct_netns_get/put")
Reported-by: Jaroslav Aster <jaster@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Manually adjust the port settings of user ports once PHY polling has
completed. This patch extends the adjust_link callback to configure the
per-port PMCR register, applying the proper values polled from the PHY.
Without this patch, flow control was not always set up properly.
Signed-off-by: Shashidhar Lakkavalli <shashidhar.lakkavalli@openmesh.com>
Signed-off-by: Muciri Gatimu <muciri@openmesh.com>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If the NIC fails to validate the checksum on TCP/UDP, and validation of the IP
checksum is successful, the driver subtracts the pseudo-header checksum
from the value obtained by the hardware and sets CHECKSUM_COMPLETE. Don't
do that if protocol is IPPROTO_SCTP, otherwise CRC32c validation fails.
V2: don't test MLX4_CQE_STATUS_IPV6 if MLX4_CQE_STATUS_IPV4 is set
Reported-by: Shuang Li <shuali@redhat.com>
Fixes: f8c6455bb04b ("net/mlx4_en: Extend checksum offloading by CHECKSUM COMPLETE")
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add a check if the framebuffer described by the provided drm_mode_fb_cmd2
structure fits into provided GEM buffers. Without this check it is
possible to create a framebuffer object from a small buffer and program it
into the hardware, which results in displaying system memory outside the
allocated GEM buffer.
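The added check is essentially a per-plane size test (simplified
sketch; subsampling of non-luma planes is ignored here):

  for (i = 0; i < drm_format_num_planes(mode_cmd->pixel_format); i++) {
      unsigned long size = mode_cmd->pitches[i] * mode_cmd->height +
                           mode_cmd->offsets[i];

      if (size > exynos_gem[i]->size)
          return ERR_PTR(-EINVAL);    /* fb would exceed the GEM buffer */
  }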
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
|
|
The client was freeing the nfs4_ff_layout_ds, but not the contained
nfs4_ff_ds_version array.
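The fix boils down to freeing the embedded array before its container
(field name assumed from the structs named above):

  kfree(mirror_ds->ds_versions);    /* previously leaked */
  kfree(mirror_ds);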
Signed-off-by: Weston Andros Adamson <dros@primarydata.com>
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
|
|
This is useful to allow GL to provide defined results for overlapping
glBlitFramebuffer, which X11 in turn uses to accelerate uncomposited
window movement without first blitting to a temporary. x11perf
-copywinwin100 goes from 1850/sec to 4850/sec.
v2: Default to the same behavior as before when the flags aren't
passed. (suggested by Boris)
Signed-off-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20170725162733.28007-2-eric@anholt.net
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
Userspace shouldn't be able to spam dmesg by passing bad arguments.
This has particularly become an issue since we started using a bad
argument to set_tiling to detect whether set_tiling was supported.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: 83753117f1de ("drm/vc4: Add get/set tiling ioctls.")
Link: https://patchwork.freedesktop.org/patch/msgid/20170725162733.28007-1-eric@anholt.net
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
The drm_*_reference() and drm_*_unreference() functions are just
compatibility aliases for drm_*_get() and drm_*_put() and should not be
used by new code. So convert all users of the compatibility functions to
the new APIs.
Signed-off-by: Cihangir Akturk <cakturk@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/1501761585-11757-26-git-send-email-cakturk@gmail.com
Reviewed-by: Eric Anholt <eric@anholt.net>
|
|
drm_bridge_remove() is for unregistering a bridge driver, not for
detaching a bridge from its consumer.
Fixes: 656fa22f9cea ("drm/vc4: Switch DSI to the panel-bridge layer, and support bridges.")
Signed-off-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20170802203242.12815-3-eric@anholt.net
Acked-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
The clocks are enabled/disabled at encoder enable/disable time, not at
component load. Fixes a WARN_ON at boot if V3D fails to probe.
Fixes: 4078f5757144 ("drm/vc4: Add DSI driver")
Signed-off-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20170802203242.12815-2-eric@anholt.net
Acked-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
It's also destroyed from the top level vc4_drv.c initialization, which
is where the cache was actually initialized from.
This used to just involve duplicate del_timer() and cancel_work_sync()
being called, but it started causing kmalloc issues once we
double-freed the new BO label array.
Fixes: 1908a876f909 ("drm/vc4: Add an ioctl for labeling GEM BOs for summary stats")
Signed-off-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20170802203242.12815-1-eric@anholt.net
Tested-by: Noralf Trønnes <noralf@tronnes.org>
Acked-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
|
|
DDI_E is not supported on CNL-U and CNL-Y.
When adding the initial support we noticed DDI_E wasn't supported
and removed it in v4 and v5 of that patch.
However, for some reason I missed or put back these 2 chunks.
Time to clean them up to avoid later confusion.
Fixes: 8bcd3dd41766 ("drm/i915/cnl: Add power wells for CNL")
Cc: Clint Taylor <clinton.a.taylor@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170808193237.17410-1-rodrigo.vivi@intel.com
|
|
Use the generic block layer affinity mapping helper. Also,
limit nr_hw_queues to the rdma device's number of irq vectors,
as we don't really need more.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Like pci and virtio, we add an rdma helper for affinity
spreading. This achieves optimal mq affinity assignments
according to the underlying rdma device's affinity maps.
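A sketch of such a helper (name and signature assumed; simplified): map
each hw queue to the CPUs in the matching completion vector's affinity
mask, falling back to the default spreading when the device doesn't
expose one.

  int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
                             struct ib_device *dev, int first_vec)
  {
      const struct cpumask *mask;
      unsigned int queue, cpu;

      for (queue = 0; queue < set->nr_hw_queues; queue++) {
          mask = ib_get_vector_affinity(dev, first_vec + queue);
          if (!mask)
              goto fallback;

          for_each_cpu(cpu, mask)
              set->mq_map[cpu] = queue;
      }

      return 0;

  fallback:
      return blk_mq_map_queues(set);
  }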
Reviewed-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Simply refer to the generic affinity mask helper.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
This will allow ULPs to intelligently locate threads based
on completion vector cpu affinity mappings. In case the
driver does not expose a get_vector_affinity callout, return
NULL so the caller can fall back to its own logic.
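A sketch of the verbs-level helper (assumed shape):

  static inline const struct cpumask *
  ib_get_vector_affinity(struct ib_device *device, int comp_vector)
  {
      if (comp_vector < 0 || comp_vector >= device->num_comp_vectors ||
          !device->get_vector_affinity)
          return NULL;    /* caller falls back to its own spreading */

      return device->get_vector_affinity(device, comp_vector);
  }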
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
The generic API takes care of spreading affinity similarly to
what mlx5 open-coded (and even handles asymmetric
configurations better). Ask the generic API to spread affinity
for us, and feed it pre_vectors that do not participate
in affinity settings (which is an improvement over what we
had before).
The affinity assignments should match what mlx5 tried to
do earlier, but now we do not set affinity on the dedicated
async, cmd and pages vectors.
Also, remove mlx5e_get_cpu and introduce mlx5e_get_node
(used for allocation purposes) and mlx5_get_vector_affinity
(for indirection table construction), as they provide the needed
information. Luckily, we have generic helpers to get the cpumask
and node given an irq vector. mlx5_get_vector_affinity will
be used by mlx5_ib in a subsequent patch.
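Illustratively, the vector allocation then looks like this
(pre_vectors excludes the async/cmd/pages EQ vectors; num_comp is a
placeholder for the requested completion vector count):

  struct irq_affinity irqdesc = {
      .pre_vectors = MLX5_EQ_VEC_COMP_BASE,
  };
  int nvec;

  nvec = pci_alloc_irq_vectors_affinity(dev->pdev,
                                        MLX5_EQ_VEC_COMP_BASE + 1,
                                        MLX5_EQ_VEC_COMP_BASE + num_comp,
                                        PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
                                        &irqdesc);
  if (nvec < 0)
      return nvec;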
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
mlx5e currently assumes that irq affinity really spreads the first
irq vectors across the device's home node cpus. With the new generic
affinity mappings this is no longer the case, hence mlx5e should not
rely on this anymore.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Now that we have generic code to allocate an array
of irq vectors and even correctly spread their affinity,
correctly handle cpu hotplug events and more, we're much
better off using it.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
If extended LIDs are being used, a connection request contains
OPA GIDs. Extract the LIDs from the OPA GIDs and populate the
slid/dlid fields in the path records that are created when handling
a connection request.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
When handling an incoming connection request, ib_cm creates
either an IB or an OPA path record based on the gid field
in the request.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
Add OPA path record support to the Connection Manager.
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
The slid field in struct ib_wc is increased to 32 bits.
This enables core components to use larger LIDs if needed.
The user ABI is unchanged and returns 16-bit values when queried.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
The sm_lid field in struct ib_port_attr is increased to 32 bits. This
enables core components to use larger LIDs if needed.
The user ABI is unchanged and returns 16-bit values when queried.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
The lid field in struct ib_port_attr is increased to 32 bits. This enables core
components to use larger LIDs if needed.
The user ABI is unchanged and returns 16-bit values when queried.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
MAD RMPP contains a slid field which is 16 bits in
length; increase it to 32 bits.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
IPoIB contains a local_lid field which is 16 bits in
length; increase it to 32 bits.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
srpt contains lid and sm_lid fields which are 16 bits in
length; increase them to 32 bits.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|
|
OPA address handle attributes that have 32-bit LIDs have to
be converted to IB address handle attributes with the LID field
programmed in the GID before copying to user space.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
|