2018-03-28  spi: sh-msiof: Document R-Car M3-N support  (Geert Uytterhoeven)
Document support for the MSIOF module in the Renesas M3-N (r8a77965) SoC. No driver update is needed. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Mark Brown <broonie@kernel.org>
2018-03-28  ASoC: cpcap: replace codec to component  (Kuninori Morimoto)
Now we can replace Codec with Component. Let's do it. Note: xxx_codec_xxx() -> xxx_component_xxx(); .idle_bias_off = 0 -> .idle_bias_on = 1; .ignore_pmdown_time = 0 -> .use_pmdown_time = 1; (new) .endianness = 1; (new) .non_legacy_dai_naming = 1. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
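For reference, a minimal sketch of what the new component-driver flags typically look like after such a conversion; the field values follow the mapping in the commit message, but the driver name is illustrative and this is not the actual cpcap code:

    static const struct snd_soc_component_driver cpcap_component_driver = {
        /* inverted-sense replacements for the old codec-driver fields */
        .idle_bias_on           = 1,    /* was .idle_bias_off = 0 */
        .use_pmdown_time        = 1,    /* was .ignore_pmdown_time = 0 */
        /* new flags that preserve the old codec behaviour */
        .endianness             = 1,
        .non_legacy_dai_naming  = 1,
    };

Registration then typically goes through devm_snd_soc_register_component() rather than snd_soc_register_codec().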
2018-03-28  ASoC: Intel: bytcr_rt5651: don't use codec anymore  (Kuninori Morimoto)
commit aeec6cc08215 ("ASoC: Intel: bytcr_rt5651: Configure PLL1 before using it") uses codec->dev, but the codec has been replaced by a component. Let's use the component. Fixes: aeec6cc08215 ("ASoC: Intel: bytcr_rt5651: Configure PLL1 before using it") Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2018-03-28  ASoC: amd: don't use codec anymore  (Kuninori Morimoto)
commit c88d31153356 ("ASoC: amd: Enable da7219 master clock using common clock framework") uses rtd->codec, but the codec has been replaced by a component. Let's use the component. Fixes: c88d31153356 ("ASoC: amd: Enable da7219 master clock using common clock framework") Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2018-03-28  regulator: qcom: smd: Add pm8998 and pmi8998 regulators  (Bjorn Andersson)
Add the pm8998 and pmi8998 regulators as used in the MSM8998 platform. Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Mark Brown <broonie@kernel.org>
2018-03-27  net/mlx5e: Recover Send Queue (SQ) from error state  (Eran Ben Elisha)
An error TX completion (CQE) which arrived on a specific SQ indicates that this SQ got moved by the hardware to error state, which means all pending and incoming TX requests are dropped or will be dropped and no further "Good" CQEs will be generated for that SQ. Before this patch, TX completions (CQEs) were not monitored and were handled as regular CQEs. This caused the SQ to stay in an error state, making it useless for transmitting new packets. Mitigation plan: In case of an error completion, schedule a recovery work which would do the following: - Mark the TXQ as DRV_XOFF to disable new packets to arrive from the stack - NAPI to flush all pending SQ WQEs (via flush_in_error_en bit) to release SW and HW resources (SKB, DMA, etc) and have the SQ and CQ consumer/producer indices synced. - Modify the SQ state ERR -> RST -> RDY (restart the SQ). - Reactivate the SQ and reset SQ cc and pc If we identify two consecutive requests for SQ recovery in less than 500 msecs, drop the recovery request to avoid CPU overload, as this scenario most likely happened due to a severe repeated bug. In addition, add an SQ recover SW counter to monitor successful recoveries. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
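A rough sketch of the recovery flow described above; this is assumption-heavy: helper names such as mlx5e_wait_for_sq_flush()/mlx5e_modify_sq_state() and the recover_work/last_recover fields are illustrative, not necessarily the exact mlx5e symbols.

    static void mlx5e_sq_recover_work(struct work_struct *work)
    {
        struct mlx5e_txqsq *sq = container_of(work, struct mlx5e_txqsq, recover_work);

        /* rate-limit: drop the request if the previous recovery was < 500 msec ago */
        if (time_before(jiffies, sq->last_recover + msecs_to_jiffies(500)))
            return;

        netif_tx_stop_queue(sq->txq);        /* mark the TXQ as DRV_XOFF */
        mlx5e_wait_for_sq_flush(sq);         /* NAPI flushes pending WQEs, frees SKB/DMA */
        mlx5e_modify_sq_state(sq, MLX5_SQC_STATE_ERR, MLX5_SQC_STATE_RST);
        mlx5e_modify_sq_state(sq, MLX5_SQC_STATE_RST, MLX5_SQC_STATE_RDY);
        sq->cc = sq->pc = 0;                 /* resync consumer/producer indices */
        netif_tx_wake_queue(sq->txq);        /* reactivate the SQ */
        sq->last_recover = jiffies;
    }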
2018-03-27  Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm  (Linus Torvalds)
Pull ARM fixes from Russell King: "A small number of small fixes for ARM, mostly for some build issues. One fix for a regression caused by the cpu hotplug conversion from a few kernel versions ago" * 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm: ARM: 8750/1: deflate_xip_data.sh: minor fixes ARM: 8748/1: mm: Define vdso_start, vdso_end as array ARM: 8747/1: make CONFIG_DEBUG_WX depend on MMU ARM: 8746/1: vfp: Go back to clearing vfp_current_hw_state[]
2018-03-27  net/mlx5e: Dump xmit error completions  (Eran Ben Elisha)
Monitor and dump xmit error completions. In addition, add err_cqe counter to track the number of error completion per send queue. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  mlx5: Move dump error CQE function out of mlx5_ib for code sharing  (Eran Ben Elisha)
Move mlx5_ib dump error CQE implementation to mlx5 CQ header file in order to use it in a downstream patch from mlx5e. In addition, use print_hex_dump instead of manual dumping of the buffer. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
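The shared helper presumably ends up looking something like the sketch below (the helper name and its exact location in the mlx5 CQ header are assumptions); print_hex_dump() is the standard kernel API for dumping a raw buffer:

    static inline void mlx5_dump_err_cqe(struct mlx5_core_dev *dev,
                                         struct mlx5_err_cqe *err_cqe)
    {
        print_hex_dump(KERN_WARNING, "", DUMP_PREFIX_OFFSET, 16, 1,
                       err_cqe, sizeof(*err_cqe), false);
    }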
2018-03-27  mlx5_{ib,core}: Add query SQ state helper function  (Eran Ben Elisha)
Move query SQ state function from mlx5_ib to mlx5_core in order to have it in shared code. It will be used in a downstream patch from mlx5e. Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Move all TX timeout logic to be under state lock  (Eran Ben Elisha)
Driver callback for handling TX timeout should access some internal resources (SQ, CQ) in order to decide if the tx timeout work should be scheduled. These resources might be unavailable if channels are closed in parallel (ifdown for example). The state lock is the mechanism to protect from such races. Move all TX timeout logic to be in the work under a state lock. In addition, move the work from the global WQ to the mlx5e WQ to make sure this work is flushed when the device is detached. Also, move the mlx5e_tx_timeout_work code to be next to the TX timeout NDO for better code locality. Fixes: 3947ca185999 ("net/mlx5e: Implement ndo_tx_timeout callback") Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
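A condensed sketch of the resulting pattern, with mlx5e internals simplified (the state-bit check inside the worker is an assumption about where the decision logic lands):

    static void mlx5e_tx_timeout(struct net_device *dev)
    {
        struct mlx5e_priv *priv = netdev_priv(dev);

        netdev_err(dev, "TX timeout detected\n");
        queue_work(priv->wq, &priv->tx_timeout_work);   /* mlx5e WQ, flushed on detach */
    }

    static void mlx5e_tx_timeout_work(struct work_struct *work)
    {
        struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
                                               tx_timeout_work);

        mutex_lock(&priv->state_lock);      /* serializes against ifdown / channel close */
        if (test_bit(MLX5E_STATE_OPENED, &priv->state)) {
            /* ... inspect SQs/CQs and schedule recovery if needed ... */
        }
        mutex_unlock(&priv->state_lock);
    }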
2018-03-27  net/mlx5e: Remove unused max inline related code  (Gal Pressman)
Commit 58d522912ac7 ("net/mlx5e: Support TX packet copy into WQE") introduced the max inline WQE as an ethtool tunable. One commit later, that functionality was made dependent on BlueFlame. Commit 6982ab609768 ("net/mlx5e: Xmit, no write combining") removed BlueFlame support, and with it the max inline WQE. This patch cleans up the leftovers from the removed feature. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Add ethtool priv-flag for Striding RQ  (Tariq Toukan)
Add a control private flag in ethtool to enable/disable Striding RQ feature. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Do not reset Receive Queue params on every type change  (Tariq Toukan)
Do not make an implicit call to mlx5e_init_rq_type_params() upon every change in RQ type. It should be called only on channel creation. Fixes: 2fc4bfb7250d ("net/mlx5e: Dynamic RQ type infrastructure") Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Remove rq_headroom field from params  (Tariq Toukan)
It can be derived from other params; calculate it via the dedicated function when needed. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Remove RQ MPWQE fields from params  (Tariq Toukan)
Introduce functions to calculate them when needed. They can be derived from other params. This will simplify transition between RQ configurations. In general, any parameter that is not explicitly set or controlled, but derived from other parameters, should not have a control-path field itself, but a getter function. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Use no-offset function in skb header copy  (Tariq Toukan)
In copying skb header to skb->data, replace the call to skb_copy_to_linear_data_offset() with a zero offset with the call to the no-offset function skb_copy_to_linear_data(). Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
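The change is essentially the one-liner below; both helpers are existing skb APIs and the two calls are equivalent when the offset is zero (the buffer and length names are illustrative):

    skb_copy_to_linear_data_offset(skb, 0, from, headlen);   /* before */
    skb_copy_to_linear_data(skb, from, headlen);             /* after  */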
2018-03-27  net/mlx5e: Separate dma base address and offset in dma_sync call  (Tariq Toukan)
Pass the base dma address and offset to dma_sync_single_range_for_cpu(), instead of doing the pre-calculation. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
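Roughly, the call site changes as below (variable names are illustrative, not the exact mlx5e ones); dma_sync_single_range_for_cpu() takes the base DMA address plus a separate offset, so the caller no longer pre-adds the offset:

    dma_sync_single_for_cpu(dev, dma_addr + frag_offset, len, DMA_FROM_DEVICE);        /* before */
    dma_sync_single_range_for_cpu(dev, dma_addr, frag_offset, len, DMA_FROM_DEVICE);   /* after  */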
2018-03-27  net/mlx5e: Remove unused define MLX5_MPWRQ_STRIDES_PER_PAGE  (Tariq Toukan)
Clean it up as it's not in use. Fixes: d9d9f156f380 ("net/mlx5e: Expand WQE stride when CQE compression is enabled") Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Disable Striding RQ when PCI is slower than link  (Tariq Toukan)
Turn the feature off on servers whose PCI bandwidth is bounded by a threshold (16G) and lower than the max link bandwidth. This improves the effectiveness of the CQE compression feature, which is defaulted to ON for the same case. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-03-27  net/mlx5e: Unify slow PCI heuristic  (Tariq Toukan)
Get the link/pci speed query and logic into a single function. Unify the heuristics and use a single PCI threshold (16G) for all. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
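A sketch of what such a unified helper might look like, following the description in these two commits; the query helpers stand in for the existing PCI/link speed queries and are assumptions, with both taken to return Mbps:

    static bool slow_pci_heuristic(struct mlx5_core_dev *mdev)
    {
        u32 link_speed = 0, pci_bw = 0;

        mlx5e_get_max_linkspeed(mdev, &link_speed);   /* illustrative query helpers */
        mlx5e_get_pci_bw(mdev, &pci_bw);

        /* single 16G threshold, and only when PCI is the bottleneck vs. the link */
        return link_speed && pci_bw &&
               pci_bw <= 16000 && pci_bw < link_speed;
    }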
2018-03-27  Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds)
Pull SCSI fixes from James Bottomley: "Two driver fixes (ibmvfc, iscsi_tcp) and a USB fix for devices that give the wrong return to Read Capacity and cause a huge log spew. The remaining five patches all try to fix commit 84676c1f21e8 ("genirq/affinity: assign vectors to all possible CPUs") which broke the non-mq I/O path" * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled scsi: sd: Remember that READ CAPACITY(16) succeeded scsi: ibmvfc: Avoid unnecessary port relogin scsi: virtio_scsi: unify scsi_host_template scsi: virtio_scsi: fix IO hang caused by automatic irq vector affinity scsi: core: introduce force_blk_mq scsi: megaraid_sas: fix selection of reply queue scsi: hpsa: fix selection of reply queue
2018-03-27  rxrpc: Trace call completion  (David Howells)
Add a tracepoint to track rxrpc calls moving into the completed state and to log the completion type and the recorded error value and abort code. Signed-off-by: David Howells <dhowells@redhat.com>
2018-03-27  rxrpc, afs: Use debug_ids rather than pointers in traces  (David Howells)
In rxrpc and afs, use the debug_ids that are monotonically allocated to various objects as they're allocated, rather than pointers, as kernel pointers are now hashed, making them less useful. Further, the debug ids aren't reused anywhere nearly as quickly. In addition, allow kernel services that use rxrpc, such as afs, to take numbers from the rxrpc counter, assign them to their own call struct and pass them in to rxrpc for both client and service calls so that the trace lines for each will have the same ID tag. Signed-off-by: David Howells <dhowells@redhat.com>
2018-03-27  rxrpc: Trace resend  (David Howells)
Add a tracepoint to trace packet resend events and to dump the Tx annotation buffer for added illumination. Signed-off-by: David Howells <dhowells@redhat.com>
2018-03-27  RDMA/hns: ensure for-loop actually iterates and free's buffers  (Colin Ian King)
The current for-loop zeros variable i and only loops once, hence not all the buffers are freed. Fix this by setting i correctly. Detected by CoverityScan, CID#1463415 ("Operands don't affect result") Fixes: a5073d6054f7 ("RDMA/hns: Add eq support of hip08") Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Yixian Liu <liuyixian@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
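For illustration of the bug class only (this is not the exact hns code; free_eq_buf() is a hypothetical helper): an initializer that zeroes the loop counter makes the unwind loop run once instead of walking back over everything already allocated.

    for (i -= i; i >= 0; i--)     /* buggy: i is zeroed, only buffer 0 is freed */
        free_eq_buf(hr_dev, i);

    for (i -= 1; i >= 0; i--)     /* fixed: free all previously allocated buffers */
        free_eq_buf(hr_dev, i);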
2018-03-27  ipc/smack: Tidy up from the change in type of the ipc security hooks  (Eric W. Biederman)
Rename the variables shp, sma, and msq to isp, as that is how the code already refers to those variables. Collapse smack_of_shm, smack_of_sem, and smack_of_msq into smack_of_ipc, as the three functions had become completely identical. Collapse smack_shm_alloc_security, smack_sem_alloc_security and smack_msg_queue_alloc_security into smack_ipc_alloc_security as the three functions had become identical. Collapse smack_shm_free_security, smack_sem_free_security and smack_msg_queue_free_security into smack_ipc_free_security as the three functions had become identical. Requested-by: Casey Schaufler <casey@schaufler-ca.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-03-27  ipc: Directly call the security hook in ipc_ops.associate  (Eric W. Biederman)
After the last round of cleanups the shm, sem, and msg associate operations just became trivial wrappers around the appropriate security method. Simplify things further by just calling the security method directly. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-03-27  ipc/sem: Fix semctl(..., GETPID, ...) between pid namespaces  (Eric W. Biederman)
Today the last process to update a semaphore is remembered and reported in the pid namespace of that process. If there are processes in any other pid namespace querying that process id with GETPID the result will be unusable nonsense as it does not make any sense in your own pid namespace. Due to ipc_update_pid I don't think you will be able to get System V ipc semaphores into a troublesome cache line ping-pong. Using struct pids from separate processes is not a problem because they do not share a cache line. Using struct pid from different threads of the same process is unlikely to be a problem as the reference count update can be avoided. Further, Linux futexes are a much better tool for the job of mutual exclusion between processes than System V semaphores. So I expect programs that are performance limited by their interprocess mutual exclusion primitive will be using futexes. So while it is possible that enhancing the storage of the last process of a System V semaphore from an integer to a struct pid will cause a performance regression because of the effect of frequently updating the pid reference count, I don't expect that to happen in practice. This change updates semctl(..., GETPID, ...) to return the process id of the last process to update a semaphore in the pid namespace of the calling process. Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user") Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
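The GETPID side of the fix boils down to translating the stored struct pid into the caller's pid namespace at query time; a minimal sketch, with the struct layout simplified (pid_vnr() is the standard helper that reports a pid as seen by the current task):

    /* report the last updater of a semaphore in the caller's pid namespace;
     * curr stands for the per-semaphore entry, e.g. &sma->sems[semnum] */
    static pid_t sem_last_pid_vnr(struct sem *curr)
    {
        return pid_vnr(curr->sempid);   /* sempid is now a struct pid * */
    }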
2018-03-27  ipc/msg: Fix msgctl(..., IPC_STAT, ...) between pid namespaces  (Eric W. Biederman)
Today msg_lspid and msg_lrpid are remembered in the pid namespace of the creator and the processes that last sent or received a sysvipc message. If you have processes in multiple pid namespaces that is just wrong. The process ids reported will not make the least bit of sense. This fix is slightly more susceptible to a performance problem than the related fix for System V shared memory. By definition the pids are updated by msgsnd and msgrcv, the fast path of System V message queues. The only concern over the previous implementation is the incrementing and decrementing of the pid reference count, as that is the only difference, and multiple updates of the task_tgid by threads in the same process have been shown in af_unix sockets to create a cache line ping-pong between cpus of the same processor. In this case I don't expect cache lines holding pid reference counts to ping pong between cpus. As senders and receivers update different pids there is a natural separation there. Further, if multiple threads of the same process either send or receive messages the pid will be updated to the same value and ipc_update_pid will avoid the reference count update. Which means in the common case I expect msg_lspid and msg_lrpid to remain constant, and reference counts not to be updated when messages are sent. In rare cases it may be possible to trigger the issue which was observed for af_unix sockets, but it will require multiple processes with multiple threads to be either sending or receiving messages. It just does not feel likely that anyone would do that in practice. This change updates msgctl(..., IPC_STAT, ...) to return msg_lspid and msg_lrpid in the pid namespace of the process calling stat. This change also updates cat /proc/sysvipc/msg to print msg_lspid and msg_lrpid in the pid namespace of the process that opened the proc file. Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user") Reviewed-by: Nagarathnam Muthusamy <nagarathnam.muthusamy@oracle.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-03-27  ipc/shm: Fix shmctl(..., IPC_STAT, ...) between pid namespaces.  (Eric W. Biederman)
Today shm_cpid and shm_lpid are remembered in the pid namespace of the creator and the processes that last touched a sysvipc shared memory segment. If you have processes in multiple pid namespaces that is just wrong, and I don't know how this has been overlooked for so long. As only creation, shared memory attach, and shared memory detach update the pids I do not expect there to be a repeat of the issues when struct pid was attached to each af_unix skb, which in some notable cases cut the performance in half. The problem was threads of the same process updating the same struct pid from different cpus, causing the cache line to be highly contended and bounce between cpus. As creation, attach, and detach are expected to be rare operations for sysvipc shared memory segments I do not expect that kind of cache line ping pong to cause problems. In addition, because the pid is at a fixed location in the structure instead of being dynamic on a skb, the reference count of the pid does not need to be updated on each operation if the pid is the same. This ability to simply skip the pid reference count changes if the pid is unchanging further reduces the likelihood of a cache line holding a pid reference count ping-ponging between cpus. Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user") Reviewed-by: Nagarathnam Muthusamy <nagarathnam.muthusamy@oracle.com> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
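All three ipc fixes above lean on an update helper that only touches reference counts when the pid actually changes, which is what keeps the fast paths from ping-ponging a cache line; a minimal sketch, assuming this shape:

    static inline void ipc_update_pid(struct pid **pos, struct pid *pid)
    {
        struct pid *old = *pos;

        if (pid != old) {            /* skip refcount traffic when unchanged */
            *pos = get_pid(pid);
            put_pid(old);
        }
    }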
2018-03-27  bpf: follow idr code convention  (Shaohua Li)
Generally we do a preload before doing idr allocation. This also helps improve the allocation success rate under memory pressure. Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
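The convention being referred to is the standard idr_preload() pattern: preallocate outside the lock with GFP_KERNEL, then allocate under the lock with GFP_ATOMIC. A sketch using the bpf map idr (the lock and idr names follow kernel/bpf/syscall.c but are shown here from memory):

    int id;

    idr_preload(GFP_KERNEL);                 /* preallocate while we may sleep */
    spin_lock_bh(&map_idr_lock);
    id = idr_alloc_cyclic(&map_idr, map, 1, INT_MAX, GFP_ATOMIC);
    spin_unlock_bh(&map_idr_lock);
    idr_preload_end();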
2018-03-27  RDMA/ucma: Check that device exists prior to accessing it  (Leon Romanovsky)
Ensure that device exists prior to accessing its properties. Reported-by: <syzbot+71655d44855ac3e76366@syzkaller.appspotmail.com> Fixes: 75216638572f ("RDMA/cma: Export rdma cm interface to userspace") Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-03-27  RDMA/ucma: Check that device is connected prior to access it  (Leon Romanovsky)
Add missing check that device is connected prior to access it. [ 55.358652] BUG: KASAN: null-ptr-deref in rdma_init_qp_attr+0x4a/0x2c0 [ 55.359389] Read of size 8 at addr 00000000000000b0 by task qp/618 [ 55.360255] [ 55.360432] CPU: 1 PID: 618 Comm: qp Not tainted 4.16.0-rc1-00071-gcaf61b1b8b88 #91 [ 55.361693] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014 [ 55.363264] Call Trace: [ 55.363833] dump_stack+0x5c/0x77 [ 55.364215] kasan_report+0x163/0x380 [ 55.364610] ? rdma_init_qp_attr+0x4a/0x2c0 [ 55.365238] rdma_init_qp_attr+0x4a/0x2c0 [ 55.366410] ucma_init_qp_attr+0x111/0x200 [ 55.366846] ? ucma_notify+0xf0/0xf0 [ 55.367405] ? _get_random_bytes+0xea/0x1b0 [ 55.367846] ? urandom_read+0x2f0/0x2f0 [ 55.368436] ? kmem_cache_alloc_trace+0xd2/0x1e0 [ 55.369104] ? refcount_inc_not_zero+0x9/0x60 [ 55.369583] ? refcount_inc+0x5/0x30 [ 55.370155] ? rdma_create_id+0x215/0x240 [ 55.370937] ? _copy_to_user+0x4f/0x60 [ 55.371620] ? mem_cgroup_commit_charge+0x1f5/0x290 [ 55.372127] ? _copy_from_user+0x5e/0x90 [ 55.372720] ucma_write+0x174/0x1f0 [ 55.373090] ? ucma_close_id+0x40/0x40 [ 55.373805] ? __lru_cache_add+0xa8/0xd0 [ 55.374403] __vfs_write+0xc4/0x350 [ 55.374774] ? kernel_read+0xa0/0xa0 [ 55.375173] ? fsnotify+0x899/0x8f0 [ 55.375544] ? fsnotify_unmount_inodes+0x170/0x170 [ 55.376689] ? __fsnotify_update_child_dentry_flags+0x30/0x30 [ 55.377522] ? handle_mm_fault+0x174/0x320 [ 55.378169] vfs_write+0xf7/0x280 [ 55.378864] SyS_write+0xa1/0x120 [ 55.379270] ? SyS_read+0x120/0x120 [ 55.379643] ? mm_fault_error+0x180/0x180 [ 55.380071] ? task_work_run+0x7d/0xd0 [ 55.380910] ? __task_pid_nr_ns+0x120/0x140 [ 55.381366] ? SyS_read+0x120/0x120 [ 55.381739] do_syscall_64+0xeb/0x250 [ 55.382143] entry_SYSCALL_64_after_hwframe+0x21/0x86 [ 55.382841] RIP: 0033:0x7fc2ef803e99 [ 55.383227] RSP: 002b:00007fffcc5f3be8 EFLAGS: 00000217 ORIG_RAX: 0000000000000001 [ 55.384173] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fc2ef803e99 [ 55.386145] RDX: 0000000000000057 RSI: 0000000020000080 RDI: 0000000000000003 [ 55.388418] RBP: 00007fffcc5f3c00 R08: 0000000000000000 R09: 0000000000000000 [ 55.390542] R10: 0000000000000000 R11: 0000000000000217 R12: 0000000000400480 [ 55.392916] R13: 00007fffcc5f3cf0 R14: 0000000000000000 R15: 0000000000000000 [ 55.521088] Code: e5 4d 1e ff 48 89 df 44 0f b6 b3 b8 01 00 00 e8 65 50 1e ff 4c 8b 2b 49 8d bd b0 00 00 00 e8 56 50 1e ff 41 0f b6 c6 48 c1 e0 04 <49> 03 85 b0 00 00 00 48 8d 78 08 48 89 04 24 e8 3a 4f 1e ff 48 [ 55.525980] RIP: rdma_init_qp_attr+0x52/0x2c0 RSP: ffff8801e2c2f9d8 [ 55.532648] CR2: 00000000000000b0 [ 55.534396] ---[ end trace 70cee64090251c0b ]--- Fixes: 75216638572f ("RDMA/cma: Export rdma cm interface to userspace") Fixes: d541e45500bd ("IB/core: Convert ah_attr from OPA to IB when copying to user") Reported-by: <syzbot+7b62c837c2516f8f38c8@syzkaller.appspotmail.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-03-27  RDMA/rdma_cm: Fix use after free race with process_one_req  (Jason Gunthorpe)
process_one_req() can race with rdma_addr_cancel(): CPU0 CPU1 ==== ==== process_one_work() debug_work_deactivate(work); process_one_req() rdma_addr_cancel() mutex_lock(&lock); set_timeout(&req->work,..); __queue_work() debug_work_activate(work); mutex_unlock(&lock); mutex_lock(&lock); [..] list_del(&req->list); mutex_unlock(&lock); [..] // ODEBUG explodes since the work is still queued. kfree(req); Causing ODEBUG to detect the use after free: ODEBUG: free active (active state 0) object type: work_struct hint: process_one_req+0x0/0x6c0 include/net/dst.h:165 WARNING: CPU: 0 PID: 79 at lib/debugobjects.c:291 debug_print_object+0x166/0x220 lib/debugobjects.c:288 kvm: emulating exchange as write Kernel panic - not syncing: panic_on_warn set ... CPU: 0 PID: 79 Comm: kworker/u4:3 Not tainted 4.16.0-rc6+ #361 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Workqueue: ib_addr process_one_req Call Trace: __dump_stack lib/dump_stack.c:17 [inline] dump_stack+0x194/0x24d lib/dump_stack.c:53 panic+0x1e4/0x41c kernel/panic.c:183 __warn+0x1dc/0x200 kernel/panic.c:547 report_bug+0x1f4/0x2b0 lib/bug.c:186 fixup_bug.part.11+0x37/0x80 arch/x86/kernel/traps.c:178 fixup_bug arch/x86/kernel/traps.c:247 [inline] do_error_trap+0x2d7/0x3e0 arch/x86/kernel/traps.c:296 do_invalid_op+0x1b/0x20 arch/x86/kernel/traps.c:315 invalid_op+0x1b/0x40 arch/x86/entry/entry_64.S:986 RIP: 0010:debug_print_object+0x166/0x220 lib/debugobjects.c:288 RSP: 0000:ffff8801d966f210 EFLAGS: 00010086 RAX: dffffc0000000008 RBX: 0000000000000003 RCX: ffffffff815acd6e RDX: 0000000000000000 RSI: 1ffff1003b2cddf2 RDI: 0000000000000000 RBP: ffff8801d966f250 R08: 0000000000000000 R09: 1ffff1003b2cddc8 R10: ffffed003b2cde71 R11: ffffffff86f39a98 R12: 0000000000000001 R13: ffffffff86f15540 R14: ffffffff86408700 R15: ffffffff8147c0a0 __debug_check_no_obj_freed lib/debugobjects.c:745 [inline] debug_check_no_obj_freed+0x662/0xf1f lib/debugobjects.c:774 kfree+0xc7/0x260 mm/slab.c:3799 process_one_req+0x2e7/0x6c0 drivers/infiniband/core/addr.c:592 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113 worker_thread+0x223/0x1990 kernel/workqueue.c:2247 kthread+0x33c/0x400 kernel/kthread.c:238 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406 Fixes: 5fff41e1f89d ("IB/core: Fix race condition in resolving IP to MAC") Reported-by: <syzbot+3b4acab09b6463472d0a@syzkaller.appspotmail.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-03-27  Merge branch 'sfc-filter-locking'  (David S. Miller)
Edward Cree says: ==================== sfc: rework locking around filter management The use of a spinlock to protect filter state combined with the need for a sleeping operation (MCDI) to apply that state to the NIC (on EF10) led to unfixable race conditions, around the handling of filter restoration after an MC reboot. So, this patch series removes the requirement to be able to modify the SW filter table from atomic context, by using a workqueue to request asynchronous filter operations (which are needed for ARFS). Then, the filter table locks are changed to mutexes, replacing the dance of spinlocks and 'busy' flags. Also, a mutex is added to protect the RSS context state, since otherwise a similar race is possible around restoring that after an MC reboot. While we're at it, fix a couple of other related bugs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  sfc: fix flow type handling for RSS filters  (Edward Cree)
The FLOW_RSS flag was causing us to insert UDP filters when TCP was wanted. Fixes: 42356d9a137b ("sfc: support RSS spreading of ethtool ntuple filters") Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  sfc: protect list of RSS contexts under a mutex  (Edward Cree)
Otherwise races are possible between ethtool ops and efx_ef10_rx_restore_rss_contexts(). Also, don't try to perform the restore on every reset, only after an MC reboot, otherwise we'll leak RSS contexts on the NIC. Fixes: 42356d9a137b ("sfc: support RSS spreading of ethtool ntuple filters") Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  sfc: return a better error if filter insertion collides with MC reboot  (Edward Cree)
If some other operation gets the MCDI lock ahead of us and performs an MC reboot, then our attempt to insert the filter will fail with EINVAL, because the destination VI (spec->dmaq_id, MC_CMD_FILTER_OP_IN_RX_QUEUE) does not exist. But the caller's request (which might e.g. be an ethtool ntuple request from userland) isn't invalid, it just got unlucky; so return EAGAIN. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  sfc: use a semaphore to lock farch filters too  (Edward Cree)
With this change, the spinlock efx->filter_lock is no longer used and is thus removed. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  sfc: give ef10 its own rwsem in the filter table instead of filter_lock  (Edward Cree)
efx->filter_lock remains in place for use on farch, but EF10 now ignores it. EFX_EF10_FILTER_FLAG_BUSY is no longer needed, hence it is removed. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  sfc: replace asynchronous filter operations  (Edward Cree)
Instead of having an efx->type->filter_rfs_insert() method, just use workitems with a worker function that calls efx->type->filter_insert(). The only user of this is efx_filter_rfs(), which now queues a call to efx_filter_rfs_work(). Similarly, efx_filter_rfs_expire() is now a worker function called on a new channel->filter_work work_struct, so the method efx->type->filter_rfs_expire_one() is no longer called in atomic context. We also add a new mutex efx->rps_mutex to protect the RPS state (efx->rps_expire_channel, efx->rps_expire_index, and channel->rps_flow_id) so that the taking of efx->filter_lock can be moved to efx->type->filter_rfs_expire_one(). Thus, all filter table functions are now called in a sleepable context, allowing them to use sleeping locks in a future patch. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
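A sketch of the ARFS path after the rework, following the commit text; the request struct layout and the exact filter_insert() arguments are simplified assumptions:

    static void efx_filter_rfs_work(struct work_struct *data)
    {
        struct efx_async_filter_insertion *req =
            container_of(data, struct efx_async_filter_insertion, work);
        struct efx_nic *efx = netdev_priv(req->net_dev);

        /* runs in process context, so the sleeping filter_insert() is fine here */
        efx->type->filter_insert(efx, &req->spec, true);
        kfree(req);
    }

    /* ...while efx_filter_rfs() now just fills in req and defers the insert: */
    INIT_WORK(&req->work, efx_filter_rfs_work);
    schedule_work(&req->work);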
2018-03-27  Merge branch 'pernet-all-async'  (David S. Miller)
Kirill Tkhai says: ==================== Make pernet_operations always read locked All the pernet_operations are converted, and the last one is in this patchset (nfsd_net_ops, acked by J. Bruce Fields). So, it's time to kill the pernet_operations::async field, and make setup_net() and cleanup_net() always require the rwsem only read locked. All further pernet_operations have to be developed to fit this rule. Some of the previous patches added a comment to struct pernet_operations about that. Also, this patchset renames net_sem to pernet_ops_rwsem to make the area the rwsem protects more clearly visible, and adds more comments. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  net: Add more comments  (Kirill Tkhai)
This adds comments to different places to improve readability. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  net: Rename net_sem to pernet_ops_rwsem  (Kirill Tkhai)
net_sem is a vague name that does not say what area it protects, so it is better to make that explicit. Rename it to pernet_ops_rwsem for better readability and intelligibility. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  net: Drop pernet_operations::async  (Kirill Tkhai)
Synchronous pernet_operations are not allowed anymore. All are asynchronous. So, drop the structure member. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  net: Reflect all pernet_operations are converted  (Kirill Tkhai)
All pernet_operations are reviewed and converted, hooray! Reflect this in core code: setup_net() and cleanup_net() will take down_read() always. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  net: Convert nfsd_net_ops  (Kirill Tkhai)
These pernet_operations look similar to rpcsec_gss_net_ops; they just create and destroy other caches. So, they can also be async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-27  net: mvpp2: Use relaxed I/O in data path  (Yan Markman)
Use relaxed I/O on the hot path. This achieves significant performance improvements. On a 10G link, this makes a basic iperf TCP test go from an average of 4.5 Gbits/sec to about 9.40 Gbits/sec. Signed-off-by: Yan Markman <ymarkman@marvell.com> [Maxime: Commit message, cosmetic changes] Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
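The change amounts to switching hot-path register accesses to the _relaxed accessors, which drop the memory barriers implied by readl()/writel() when no ordering against DMA is needed for that particular access (register names omitted; only the accessor choice is the point):

    val = readl_relaxed(base + reg);        /* was: readl(base + reg)       */
    writel_relaxed(val, base + reg);        /* was: writel(val, base + reg) */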
2018-03-27  qede: Fix barrier usage after tx doorbell write.  (Manish Chopra)
Since commit c5ad119fb6c09b0297446be05bd66602fa564758 ("net: sched: pfifo_fast use skb_array") the driver is exposed to an issue where it is hitting NULL skbs while handling TX completions. The driver uses mmiowb() to flush the writes to the doorbell bar, which is a write-combined bar; however, on x86 mmiowb() does not flush the write-combined buffer. This patch fixes the problem by replacing mmiowb() with wmb() after the write-combined doorbell write so that writes are flushed and synchronized from more than one processor. V1->V2: ------- This patch was marked as "superseded" in patchwork (not really sure for what reason). Resending it as v2. Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: Manish Chopra <manish.chopra@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
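The fix as described is a one-line barrier swap after ringing the doorbell (field names are illustrative, not the exact qede structures):

    writel(txq->tx_db.raw, txq->doorbell_addr);
    wmb();    /* was mmiowb(); per the commit text, wmb() also drains the WC buffer on x86 */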