2018-05-09  RDMA/i40iw: Avoid reference leaks when processing the AEQ  (Andrew Boyer)
In this switch there is a reference held on the QP. 'continue' will grab the next event without releasing the reference, causing a leak. Change it to 'break' to drop the reference before grabbing the next event. Fixes: 4e9042e647ff ("i40iw: add hw and utils files") Signed-off-by: Andrew Boyer <andrew.boyer@dell.com> Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
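A minimal sketch of the pattern being fixed, assuming loop and helper names that only approximate the real i40iw AEQ handler: the QP reference is released at the bottom of the loop body, so a 'continue' inside the switch skips the release, while 'break' falls through to it.

    /* Illustrative sketch only; helper names are hypothetical. */
    while ((info = i40iw_next_aeq_entry(aeq)) != NULL) {
            struct i40iw_qp *iwqp = i40iw_get_qp_ref(dev, info->qp_cq_id); /* takes a reference */

            switch (info->ae_id) {
            case I40IW_AE_LLP_FIN_RECEIVED:
                    if (iwqp->term_flags)
                            break;  /* was 'continue', which skipped the deref below */
                    i40iw_handle_fin(iwqp);
                    break;
            default:
                    i40iw_handle_default_ae(iwqp, info);
                    break;
            }

            i40iw_rem_ref(&iwqp->ibqp);     /* only reached when the switch exits via 'break' */
    }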
2018-05-09  RDMA/i40iw: Avoid panic when objects are being created and destroyed  (Andrew Boyer)
A panic occurs when there is a newly-registered element on the QP/CQ MR list waiting to be attached, but a different MR is deregistered. The current code only checks for whether the list is empty, not whether the element being deregistered is actually on the list. Fix the panic by adding a boolean to track if the object is on the list. Fixes: d37498417947 ("i40iw: add files for iwarp interface") Signed-off-by: Andrew Boyer <andrew.boyer@dell.com> Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
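A hedged sketch of the described fix, with illustrative structure and helper names rather than the exact i40iw ones: list membership is tracked explicitly, so deregistration only unlinks an element that is actually queued.

    /* Illustrative only: track whether this element is really on the QP/CQ list. */
    struct pending_mr {
            struct list_head list;
            bool on_list;                   /* the added boolean */
    };

    static void queue_pending_mr(struct list_head *head, struct pending_mr *e)
    {
            list_add_tail(&e->list, head);
            e->on_list = true;
    }

    static void dereg_pending_mr(struct pending_mr *e)
    {
            /* Old logic: if (!list_empty(head)) list_del(head->next); -- may unlink a different MR. */
            if (e->on_list) {
                    list_del(&e->list);
                    e->on_list = false;
            }
    }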
2018-05-09  RDMA/hns: Fix the bug with NULL pointer  (oulijun)
When the last of the eight QPs does not exist in the hns_roce_v1_mr_free_work_fn function, printing the qpn of hr_qp may dereference a NULL pointer and produce a call trace. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Set NULL for __internal_mr  (oulijun)
This patch sets the __internal_mr field of mr_free_pd to NULL. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Enable the inner_pa_vld field of mpt  (oulijun)
When the inner_pa_vld field of the mpt is enabled, pa0 and pa1 are valid and the hardware uses them directly instead of the base address of the pbl. As a result, latency is reduced. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Set desc_dma_addr to zero when freeing cmq desc  (oulijun)
To avoid illegal use of the ring's desc_dma_addr, it needs to be set to zero when the cmq descriptors are freed. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Fix the bug with rq sge  (oulijun)
When multiple RQ SGEs are received and some of the SGEs have zero length, the invalid lkey should be tagged for the last non-zero-length SGE. This patch fixes it. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Do not support qp transition from reset to reset for hip06  (oulijun)
Because the hip06 hardware does not support a QP transition from reset to reset, an error must be returned when such a transition is requested. This patch fixes it. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Add a return when configuring the global param fails  (oulijun)
When the configure-global-param function fails, it should return directly so that the initialization flow stops. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Update the endianness conversion  (oulijun)
Because the sys_image_guid field of struct ib_device_attr is __be64, cpu_to_be64 must be used for the conversion. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Load the RoCE driver automatically  (oulijun)
To let the kernel load the hns-roce-hw-v2 driver automatically when the device is plugged into the PCI bus, a MODULE_DEVICE_TABLE entry is needed to expose the pci_table of hns-roce-hw-v2 to userspace. Signed-off-by: Lijun Ou <oulijun@huawei.com> Reported-by: Zhou Wang <wangzhou1@hisilicon.com> Tested-by: Xiaojun Tan <tanxiaojun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
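Exposing a driver's PCI ID table for automatic module loading generally looks like the sketch below; the device ID used here is a placeholder for illustration, not the actual HiSilicon one.

    /* Sketch: placeholder IDs, not the real hns-roce-hw-v2 table. */
    static const struct pci_device_id hns_roce_hw_v2_pci_tbl[] = {
            { PCI_VDEVICE(HUAWEI, 0xA222), 0 },
            { 0, }
    };
    MODULE_DEVICE_TABLE(pci, hns_roce_hw_v2_pci_tbl);

With the table exported, the module's alias list includes the PCI IDs, so udev/modprobe can load the driver when a matching device appears.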
2018-05-09  RDMA/hns: Bugfix for rq record db for kernel  (oulijun)
When the RQ record doorbell is used for kernel QPs, the rdb_en field of hr_qp needs to be set to 1 and the DMA address of the RQ record doorbell must be configured in the QP context. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  RDMA/hns: Add rq inline flags judgement  (oulijun)
The rqie field of the QP context needs to be set according to the configured RQ inline flags. In addition, the RQ inline flags must be checked to decide whether an inline rqwqe should be posted. Signed-off-by: Lijun Ou <oulijun@huawei.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  brd: Mark as non-rotational  (SeongJae Park)
This commit sets QUEUE_FLAG_NONROT and clears QUEUE_FLAG_ADD_RANDOM to mark the ramdisks as non-rotational devices. Signed-off-by: SeongJae Park <sj38.park@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
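The change amounts to flipping two queue flags on the brd request queue, roughly as below (a sketch assuming the blk_queue_flag_* helpers available in this kernel generation and the brd_queue field name):

    /* ramdisks have no seek penalty and should not feed the entropy pool */
    blk_queue_flag_set(QUEUE_FLAG_NONROT, brd->brd_queue);
    blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, brd->brd_queue);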
2018-05-09  nvmet,rxe: defer ip datagram sending to tasklet  (Alexandru Moise)
This addresses 3 separate problems:

1. When using NVMe over Fabrics we may end up sending IP packets in interrupt context; we should defer this work to a tasklet.

[ 50.939957] WARNING: CPU: 3 PID: 0 at kernel/softirq.c:161 __local_bh_enable_ip+0x1f/0xa0
[ 50.942602] CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Tainted: G W 4.17.0-rc3-ARCH+ #104
[ 50.945466] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014
[ 50.948163] RIP: 0010:__local_bh_enable_ip+0x1f/0xa0
[ 50.949631] RSP: 0018:ffff88009c183900 EFLAGS: 00010006
[ 50.951029] RAX: 0000000080010403 RBX: 0000000000000200 RCX: 0000000000000001
[ 50.952636] RDX: 0000000000000000 RSI: 0000000000000200 RDI: ffffffff817e04ec
[ 50.954278] RBP: ffff88009c183910 R08: 0000000000000001 R09: 0000000000000614
[ 50.956000] R10: ffffea00021d5500 R11: 0000000000000001 R12: ffffffff817e04ec
[ 50.957779] R13: 0000000000000000 R14: ffff88009566f400 R15: ffff8800956c7000
[ 50.959402] FS: 0000000000000000(0000) GS:ffff88009c180000(0000) knlGS:0000000000000000
[ 50.961552] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 50.963798] CR2: 000055c4ec0ccac0 CR3: 0000000002209001 CR4: 00000000000606e0
[ 50.966121] Call Trace:
[ 50.966845]  <IRQ>
[ 50.967497]  __dev_queue_xmit+0x62d/0x690
[ 50.968722]  dev_queue_xmit+0x10/0x20
[ 50.969894]  neigh_resolve_output+0x173/0x190
[ 50.971244]  ip_finish_output2+0x2b8/0x370
[ 50.972527]  ip_finish_output+0x1d2/0x220
[ 50.973785]  ? ip_finish_output+0x1d2/0x220
[ 50.975010]  ip_output+0xd4/0x100
[ 50.975903]  ip_local_out+0x3b/0x50
[ 50.976823]  rxe_send+0x74/0x120
[ 50.977702]  rxe_requester+0xe3b/0x10b0
[ 50.978881]  ? ip_local_deliver_finish+0xd1/0xe0
[ 50.980260]  rxe_do_task+0x85/0x100
[ 50.981386]  rxe_run_task+0x2f/0x40
[ 50.982470]  rxe_post_send+0x51a/0x550
[ 50.983591]  nvmet_rdma_queue_response+0x10a/0x170
[ 50.985024]  __nvmet_req_complete+0x95/0xa0
[ 50.986287]  nvmet_req_complete+0x15/0x60
[ 50.987469]  nvmet_bio_done+0x2d/0x40
[ 50.988564]  bio_endio+0x12c/0x140
[ 50.989654]  blk_update_request+0x185/0x2a0
[ 50.990947]  blk_mq_end_request+0x1e/0x80
[ 50.991997]  nvme_complete_rq+0x1cc/0x1e0
[ 50.993171]  nvme_pci_complete_rq+0x117/0x120
[ 50.994355]  __blk_mq_complete_request+0x15e/0x180
[ 50.995988]  blk_mq_complete_request+0x6f/0xa0
[ 50.997304]  nvme_process_cq+0xe0/0x1b0
[ 50.998494]  nvme_irq+0x28/0x50
[ 50.999572]  __handle_irq_event_percpu+0xa2/0x1c0
[ 51.000986]  handle_irq_event_percpu+0x32/0x80
[ 51.002356]  handle_irq_event+0x3c/0x60
[ 51.003463]  handle_edge_irq+0x1c9/0x200
[ 51.004473]  handle_irq+0x23/0x30
[ 51.005363]  do_IRQ+0x46/0xd0
[ 51.006182]  common_interrupt+0xf/0xf
[ 51.007129]  </IRQ>

2. Work must always be offloaded to the tasklet for rxe_post_send_kernel() when using NVMEoF, in order to solve the lock ordering between the neigh->ha_lock seqlock and the nvme queue lock:

[ 77.833783]        Possible interrupt unsafe locking scenario:
[ 77.833783]
[ 77.835831]              CPU0                    CPU1
[ 77.837129]              ----                    ----
[ 77.838313]   lock(&(&n->ha_lock)->seqcount);
[ 77.839550]                                local_irq_disable();
[ 77.841377]                                lock(&(&nvmeq->q_lock)->rlock);
[ 77.843222]                                lock(&(&n->ha_lock)->seqcount);
[ 77.845178]   <Interrupt>
[ 77.846298]     lock(&(&nvmeq->q_lock)->rlock);
[ 77.847986]
[ 77.847986]  *** DEADLOCK ***

3. Same goes for the lock ordering between sch->q.lock and the nvme queue lock:

[ 47.634271]        Possible interrupt unsafe locking scenario:
[ 47.634271]
[ 47.636452]              CPU0                    CPU1
[ 47.637861]              ----                    ----
[ 47.639285]   lock(&(&sch->q.lock)->rlock);
[ 47.640654]                                local_irq_disable();
[ 47.642451]                                lock(&(&nvmeq->q_lock)->rlock);
[ 47.644521]                                lock(&(&sch->q.lock)->rlock);
[ 47.646480]   <Interrupt>
[ 47.647263]     lock(&(&nvmeq->q_lock)->rlock);
[ 47.648492]
[ 47.648492]  *** DEADLOCK ***

Using NVMEoF after this patch seems to finally be stable; without it, rxe eventually deadlocks the whole system and causes RCU stalls. Signed-off-by: Alexandru Moise <00moses.alexander00@gmail.com> Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  i40iw: Use correct address in dst_neigh_lookup for IPv6  (Mustafa Ismail)
Use of an incorrect structure address for the IPv6 neighbor lookup causes connections to IPv6 addresses to fail. Fix this by using the correct address in the call to dst_neigh_lookup. Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  i40iw: Fix memory leak in error path of create QP  (Mustafa Ismail)
If i40iw_allocate_dma_mem fails when creating a QP, the memory allocated for the QP structure using kzalloc is not freed, because iwqp->allocated_buffer is used to free the memory and it is not set up until later. Fix this by setting iwqp->allocated_buffer before allocating the dma memory. Fixes: d37498417947 ("i40iw: add files for iwarp interface") Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
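A hedged sketch of the ordering fix described above; it loosely follows the i40iw QP-create path, and the surrounding details are illustrative:

    mem = kzalloc(memsize, GFP_KERNEL);
    if (!mem)
            return ERR_PTR(-ENOMEM);

    iwqp = (struct i40iw_qp *)mem;
    iwqp->allocated_buffer = mem;   /* recorded before anything can fail */

    status = i40iw_allocate_dma_mem(dev->hw, &iwqp->q2_ctx_mem, size, 256);
    if (status)
            goto error;             /* error path can now free via allocated_buffer */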
2018-05-09  RDMA/mlx5: Use proper spec flow label type  (Daria Velikovsky)
Flow label is defined as u32 in the ipv6 flow spec, but is used internally in the flow specs parsing as u8. That caused part of the flow_label value to be lost. Fixes: 2d1e697e9b716 ('IB/mlx5: Add support to match inner packet fields') Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Daria Velikovsky <daria@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
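Purely illustrative: the IPv6 flow label is 20 bits wide, so carrying it in a u8 silently drops the upper bits, while a u32 preserves the full value (variable and field names below are hypothetical):

    u8  truncated = spec->flow_label;   /* keeps only the low 8 of 20 bits */
    u32 full      = spec->flow_label;   /* preserves the whole flow label */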
2018-05-09  RDMA/mlx5: Don't assume that a medium BlueFlame register exists  (Yishai Hadas)
A user can leave the system without medium BlueFlame registers; however, the code assumed that at least one such register exists. This patch fixes that assumption. Fixes: c1be5232d21d ("IB/mlx5: Fix micro UAR allocator") Reported-by: Rohit Zambre <rzambre@uci.edu> Signed-off-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  IB/hfi1: Use after free race condition in send context error path  (Michael J. Ruhl)
A pio send egress error can occur when the PSM library attempts to send a bad packet. That issue is still being investigated. The pio error interrupt handler then attempts to progress the recovery of the errored pio send context. Code inspection reveals that the handling lacks the necessary locking if that recovery interleaves with a PSM close of the "context" object that contains the pio send context. The lack of locking can cause the recovery to access the already freed pio send context object and incorrectly deduce that the pio send context is actually a kernel pio send context, as shown by the NULL deref stack below:

[<ffffffff8143d78c>] _dev_info+0x6c/0x90
[<ffffffffc0613230>] sc_restart+0x70/0x1f0 [hfi1]
[<ffffffff816ab124>] ? __schedule+0x424/0x9b0
[<ffffffffc06133c5>] sc_halted+0x15/0x20 [hfi1]
[<ffffffff810aa3ba>] process_one_work+0x17a/0x440
[<ffffffff810ab086>] worker_thread+0x126/0x3c0
[<ffffffff810aaf60>] ? manage_workers.isra.24+0x2a0/0x2a0
[<ffffffff810b252f>] kthread+0xcf/0xe0
[<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[<ffffffff816b8798>] ret_from_fork+0x58/0x90
[<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40

This is the best case scenario; other scenarios can corrupt the already freed memory. Fix by adding the necessary locking in the pio send context error handler. Cc: <stable@vger.kernel.org> # 4.9.x Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  block: consolidate struct request timestamp fields  (Omar Sandoval)
Currently, struct request has four timestamp fields:
- A start time, set at get_request time, in jiffies, used for iostats
- An I/O start time, set at start_request time, in ktime nanoseconds, used for blk-stats (i.e., wbt, kyber, hybrid polling)
- Another start time and another I/O start time, used for cfq and bfq
These can all be consolidated into one start time and one I/O start time, both in ktime nanoseconds, shaving off up to 16 bytes from struct request depending on the kernel config. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
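Conceptually, the consolidated layout described above looks roughly like this (a sketch based on the commit description, not an exact copy of the header):

    struct request {
            /* ... */
            u64 start_time_ns;      /* set when the request is allocated; used for iostats */
            u64 io_start_time_ns;   /* set when the driver starts the I/O; used by wbt, kyber, etc. */
            /* ... */
    };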
2018-05-09  block: move blk_stat_add() to __blk_mq_end_request()  (Omar Sandoval)
We want this next to blk_account_io_done() for the next change so that we can call ktime_get() only once for both. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-09  block: use ktime_get_ns() instead of sched_clock() for cfq and bfq  (Omar Sandoval)
cfq and bfq have some internal fields that use sched_clock() which can trivially use ktime_get_ns() instead. Their timestamp fields in struct request can also use ktime_get_ns(), which resolves the 8 year old comment added by commit 28f4197e5d47 ("block: disable preemption before using sched_clock()"). Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
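The substitution itself is mechanical; a sketch:

    /* before: unsigned long long now = sched_clock(); */
    u64 now = ktime_get_ns();   /* monotonic nanoseconds; safe regardless of preemption */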
2018-05-09  block: get rid of struct blk_issue_stat  (Omar Sandoval)
struct blk_issue_stat squashes three things into one u64:
- The time the driver started working on a request
- The original size of the request (for the io.low controller)
- Flags for writeback throttling
It turns out that on x86_64, we have a 4 byte hole in struct request which we can fill with the non-timestamp fields from blk_issue_stat, simplifying things quite a bit. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-09  block: replace bio->bi_issue_stat with bio-specific type  (Omar Sandoval)
struct blk_issue_stat is going away, and bio->bi_issue_stat doesn't even use the blk-stats interface, so we can provide a separate implementation specific for bios. The helpers work the same way as the blk-stats helpers. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-09  block: pass struct request instead of struct blk_issue_stat to wbt  (Omar Sandoval)
issue_stat is going to go away, so first make writeback throttling take the containing request, update the internal wbt helpers accordingly, and change rwb->sync_cookie to be the request pointer instead of the issue_stat pointer. No functional change. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-09  block: move some wbt helpers to blk-wbt.c  (Omar Sandoval)
A few helpers are only used from blk-wbt.c, so move them there, and put wbt_track() behind the CONFIG_BLK_WBT ifdef. This is in preparation for changing how the wbt flags are tracked. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-05-09  arm64: capabilities: Add NVIDIA Denver CPU to bp_harden list  (David Gilhooley)
The NVIDIA Denver CPU also needs a PSCI call to harden the branch predictor. Signed-off-by: David Gilhooley <dgilhooley@nvidia.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-05-09  arm64: Add MIDR encoding for NVIDIA CPUs  (David Gilhooley)
This patch adds the MIDR encodings for NVIDIA as well as the Denver and Carmel CPUs used in Tegra SoCs. Signed-off-by: David Gilhooley <dgilhooley@nvidia.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
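For context, arm64 MIDR values are composed from an implementer ID and a part number via the MIDR_CPU_MODEL() helper. The entries below show the usual shape of such definitions; treat the specific numeric values as illustrative rather than verified against the tree:

    #define ARM_CPU_IMP_NVIDIA          0x4E    /* implementer 'N' */
    #define NVIDIA_CPU_PART_DENVER      0x003
    #define NVIDIA_CPU_PART_CARMEL      0x004

    #define MIDR_NVIDIA_DENVER  MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER)
    #define MIDR_NVIDIA_CARMEL  MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)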
2018-05-09  MAINTAINERS: Remove bouncing @mellanox.com addresses  (Leon Romanovsky)
Delete non-existent @mellanox.com addresses from MAINTAINERS file. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  IB: remove redundant INFINIBAND kconfig dependencies  (Greg Thelen)
INFINIBAND_ADDR_TRANS depends on INFINIBAND, so there's no need for options which depend on INFINIBAND_ADDR_TRANS to also depend on INFINIBAND. Remove the unnecessary INFINIBAND dependencies. Signed-off-by: Greg Thelen <gthelen@google.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-05-09  HID: i2c-hid: Add RESEND_REPORT_DESCR quirk for Toshiba Click Mini L9W-B  (Hans de Goede)
The 0457:10fb touchscreen found on the Toshiba Click Mini L9W-B needs to have a report-descriptors command sent to it on resume in order to start generating events again. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2018-05-09  PCI / PM: Always check PME wakeup capability for runtime wakeup support  (Kai Heng Feng)
USB controller ASM1042 stops working after commit de3ef1eb1cd0 (PM / core: Drop run_wake flag from struct dev_pm_info). The device in question is not power managed by platform firmware; furthermore, it only supports PME# from D3cold:

Capabilities: [78] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=55mA PME(D0-,D1-,D2-,D3hot-,D3cold+)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-

Before commit de3ef1eb1cd0, the device never gets runtime suspended. After that commit, the device gets runtime suspended to D3hot, which can not generate any PME#. usb_hcd_pci_probe() unconditionally calls device_wakeup_enable(), hence device_can_wakeup() in pci_dev_run_wake() always returns true. So pci_dev_run_wake() needs to check PME wakeup capability as its first condition. In addition, change the wakeup flag passed to pci_target_state() from false to true, because we want to find the deepest state different from D3cold that the device can still generate PME# from. In this case, it's D0 for the device in question. Fixes: de3ef1eb1cd0 (PM / core: Drop run_wake flag from struct dev_pm_info) Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Cc: 4.13+ <stable@vger.kernel.org> # 4.13+ Acked-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
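A hedged sketch of the check described above, inside pci_dev_run_wake(): verify that the device can actually signal PME# from the state it would be put into, asking pci_target_state() for the deepest wakeup-capable state. This is only an approximation of the real diff:

    /* Sketch: bail out early if PME# cannot be generated from the target state. */
    if (!pci_pme_capable(dev, pci_target_state(dev, true)))
            return false;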
2018-05-09  cpufreq: schedutil: Avoid using invalid next_freq  (Rafael J. Wysocki)
If the next_freq field of struct sugov_policy is set to UINT_MAX, it shouldn't be used for updating the CPU frequency (this is a special "invalid" value), but after commit b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely) it may be passed as the new frequency to sugov_update_commit() in sugov_update_single(). Fix that by adding an extra check for the special UINT_MAX value of next_freq to sugov_update_single(). Fixes: b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely) Reported-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: 4.12+ <stable@vger.kernel.org> # 4.12+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
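The extra guard amounts to one more condition along these lines (a sketch of the described check, not the exact diff):

    /* Don't reuse sg_policy->next_freq while it still holds the "invalid" marker. */
    if (busy && next_f < sg_policy->next_freq &&
        sg_policy->next_freq != UINT_MAX)
            next_f = sg_policy->next_freq;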
2018-05-09  cpufreq: schedutil: remove stale comment  (Juri Lelli)
After commit 794a56ebd9a57 (sched/cpufreq: Change the worker kthread to SCHED_DEADLINE) schedutil kthreads are "ignored" from a clock frequency selection point of view, so the potential corner case for RT tasks is no longer possible. Remove the stale comment mentioning it. Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-05-09  PM: docs: intel_pstate: fix Active Mode w/o HWP paragraph  (Juri Lelli)
P-state selection algorithm (powersave or performance) is selected by echoing the desired choice to scaling_governor sysfs attribute and not to scaling_cur_freq (as currently stated). Fix it. Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Reviewed-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-05-09  PM: docs: sleep-states: Fix a typo ("includig")  (Jonathan Neuschäfer)
Fix a typo in admin-guide/pm/sleep-states.rst. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-05-09  spi: remove the older/duplicated bcm53xx driver  (Rafał Miłecki)
This driver was added by commit 0fc6a323e1917 ("spi: bcm53xx: driver for SPI controller on Broadcom bcma SoC") back in 2014. It was needed to provide minimal support for the SPI controller on BCM5301X (AKA Northstar) devices. An alternative driver was added by Kamal in commit fa236a7ef2404 ("spi: bcm-qspi: Add Broadcom MSPI driver") 2 years later. It supports the same hardware, but for some reason a new driver was developed for it. At this point the new driver supports more modes, setting a speed, setting bits per word, and uses IRQs instead of polling. The DTS file for BCM5301X was also updated, in commit 1c8f406507238 ("ARM: dts: BCM5301X: convert to iProc QSPI"), over a year ago. That explained, I see no reason to keep the old driver alive. Signed-off-by: Rafał Miłecki <rafal@milecki.pl> Signed-off-by: Mark Brown <broonie@kernel.org>
2018-05-09  netfilter: nf_tables: bogus EBUSY in chain deletions  (Pablo Neira Ayuso)
When a rule that jumps to a chain and that chain itself are removed in the same batch, this bogusly hits EBUSY. Add activate and deactivate operations to expressions that can be called from the preparation and the commit/abort phases. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2018-05-09  netfilter: nft_compat: fix handling of large matchinfo size  (Florian Westphal)
Currently matchinfo gets stored in the expression, but some xt matches are very large. To handle those we either need to switch the nft core to kvmalloc and increase the size limit, or allocate the info blob of large matches separately. This does the latter, which limits the scope of the changes to nft_compat. I picked a threshold of 192; this allows most matches to work as before and handles only a few via a separate allocation (cgroup, u32, sctp, rt). Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
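A hedged sketch of the approach: matchinfo larger than the threshold goes into its own allocation, while small matchinfo keeps living in the expression's private area. The threshold constant comes from the description above; the helper and field names are illustrative, not verified against the tree:

    #define NFT_MATCH_LARGE_THRESH  192

    if (XT_ALIGN(match->matchsize) > NFT_MATCH_LARGE_THRESH) {
            /* large matchinfo: separate allocation, pointer kept in the expr priv area */
            info = kmalloc(XT_ALIGN(match->matchsize), GFP_KERNEL);
            if (!info)
                    return -ENOMEM;
            priv->info = info;
    } else {
            /* small matchinfo: stored inline in the expression as before */
            info = nft_expr_priv(expr);
    }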
2018-05-09  netfilter: nft_compat: prepare for indirect info storage  (Florian Westphal)
Next patch will make it possible for *info to be stored in a separate allocation instead of the expr private area. This removes the 'expr priv area is info blob' assumption from the match init/destroy/eval functions. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2018-05-09  drm/vc4: Fix scaling of uni-planar formats  (Boris Brezillon)
When using uni-planar formats (like RGB), the scaling parameters are stored in plane 0, not plane 1. Fixes: fc04023fafec ("drm/vc4: Add support for YUV planes.") Cc: stable@vger.kernel.org Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com> Reviewed-by: Eric Anholt <eric@anholt.net> Link: https://patchwork.freedesktop.org/patch/msgid/20180507121303.5610-1-boris.brezillon@bootlin.com
2018-05-09  swiotlb: update comments to refer to physical instead of virtual addresses  (Yisheng Xie)
swiotlb uses the physical address of the bounce buffer when doing map and unmap; therefore, the related comments should be updated. Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-09  swiotlb: remove the CONFIG_DMA_DIRECT_OPS ifdefs  (Christoph Hellwig)
swiotlb now selects the DMA_DIRECT_OPS config symbol, so this will always be true. Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-09  swiotlb: move the SWIOTLB config symbol to lib/Kconfig  (Christoph Hellwig)
This way we have one central definition of it, and users can select it as needed. The new option is not user visible, which is the behavior it had in most architectures, with a few notable exceptions:
- On x86_64 and mips/loongson3 it used to be user selectable, but defaulted to y. It now is unconditional, which seems like the right thing for 64-bit architectures without guaranteed availability of IOMMUs.
- On powerpc the symbol is user selectable and defaults to n, but many boards select it. This change assumes no working setup required a manual selection, but if that turned out to be wrong we'll have to add another select statement or two for the respective boards.
Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-09  mips,unicore32: swiotlb doesn't need sg->dma_length  (Christoph Hellwig)
Only mips and unicore32 select CONFIG_NEED_SG_DMA_LENGTH when building swiotlb. swiotlb itself never merges segments and doesn't access the dma_length field directly, so drop the dependency. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
2018-05-09  arm: don't build swiotlb by default  (Christoph Hellwig)
swiotlb is only used as a library of helpers for xen-swiotlb if Xen support is enabled on arm, so don't build it by default. Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-09  PCI: remove CONFIG_PCI_BUS_ADDR_T_64BIT  (Christoph Hellwig)
This symbol is now always identical to CONFIG_ARCH_DMA_ADDR_T_64BIT, so remove it. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Bjorn Helgaas <bhelgaas@google.com>
2018-05-09  arch: define the ARCH_DMA_ADDR_T_64BIT config symbol in lib/Kconfig  (Christoph Hellwig)
Define this symbol if the architecture either uses 64-bit pointers or PHYS_ADDR_T_64BIT is set. This covers 95% of the old arch magic. We only need an additional select for Xen on ARM (why anyway?), and we now always set ARCH_DMA_ADDR_T_64BIT on mips boards with 64-bit physical addressing instead of only doing it when highmem is set. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
2018-05-09  arch: remove the ARCH_PHYS_ADDR_T_64BIT config symbol  (Christoph Hellwig)
Instead select the PHYS_ADDR_T_64BIT for 32-bit architectures that need a 64-bit phys_addr_t type directly. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>