path: root/drivers/dma
Age         Commit message                                                      Author
2019-02-01  PCI: Move Rohm Vendor ID to generic list  (Andy Shevchenko)

Move the Rohm Vendor ID to pci_ids.h instead of defining it in several drivers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>

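For illustration, the change boils down to a single shared definition; a sketch
(the 0x10db value is quoted from memory and should be verified against pci_ids.h):

    /* include/linux/pci_ids.h */
    #define PCI_VENDOR_ID_ROHM		0x10db

Drivers then use PCI_VENDOR_ID_ROHM instead of carrying their own local #define.
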
2019-01-20  dmaengine: usb-dmac: Make DMAC system sleep callbacks explicit  (Phuong Nguyen)

This commit fixes the issue that USB-DMAC hangs silently after system resume on
R-Car Gen3, hence renesas_usbhs will not work correctly when using USB-DMAC for
bulk transfer, e.g. ethernet or serial gadgets.

The issue can be reproduced by these steps:
1. modprobe g_serial
2. Suspend and resume the system.
3. Connect a USB cable to the host side.
4. Transfer data from Host to Target.
5. cat /dev/ttyGS0 (Target side)
6. echo "test" > /dev/ttyACM0 (Host side)

The 'cat' will not show anything; however, the system otherwise keeps working
normally.

Currently, the USB-DMAC driver does not have system sleep callbacks, so it relies
on the PM core forcing runtime suspend/resume to suspend and reinitialize USB-DMAC
during system resume. After commit 17218e0092f8 ("PM / genpd: Stop/start devices
without pm_runtime_force_suspend/resume()"), the PM core no longer forces runtime
suspend/resume, so this issue appears.

To solve this, make system suspend/resume explicit by using
pm_runtime_force_{suspend,resume}() as the system sleep callbacks.
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() is used to make sure USB-DMAC is suspended after
and initialized before renesas_usbhs.

Signed-off-by: Phuong Nguyen <phuong.nguyen.xw@renesas.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
[shimoda: revise the commit log and add Cc tag]
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

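A minimal sketch of the resulting dev_pm_ops wiring, assuming the structure and
the runtime callbacks keep their existing driver-local names:

    static const struct dev_pm_ops usb_dmac_pm = {
            SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                          pm_runtime_force_resume)
            SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend,
                               usb_dmac_runtime_resume, NULL)
    };
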
2019-01-20dmaengine: imx-sdma: pass ->dev to dma_alloc_coherent() APIAndy Duan
Pass ->dev to dma_alloc_coherent() API. We need this because dma_alloc_coherent() makes use of dev parameter and receiving NULL will result in a crash. Signed-off-by: Andy Duan <fugang.duan@nxp.com> Signed-off-by: Daniel Baluta <daniel.baluta@nxp.com> Reviewed-by: Robin Gong <yibin.gong@nxp.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
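A sketch of the kind of call involved (buffer and field names here are
illustrative, not the driver's exact ones):

    /* Before: a NULL device crashes inside the DMA API */
    buf = dma_alloc_coherent(NULL, size, &phys, GFP_NOWAIT);

    /* After: pass the SDMA device so the DMA API can use its DMA settings */
    buf = dma_alloc_coherent(sdma->dev, size, &phys, GFP_NOWAIT);
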
2019-01-20  dmaengine: imx-dma: change return of 'imxdma_sg_next' to void  (Vinod Koul)

The return value of the function 'imxdma_sg_next' is not checked anywhere, so make
its return type void.

Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-20  dmaengine: imx-dma: change variable 'now' type to size_t  (Vinod Koul)

'now' is used to hold a size, so it is better to change the variable type to
size_t.

Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-20  dmaengine: imx-dma: fix warning comparison of distinct pointer types  (Anders Roxell)

The warning was introduced by commit 930507c18304 ("arm64: add basic Kconfig
symbols for i.MX8"), since that enabled the driver for arm64. The warning had not
been seen before because size_t was 'unsigned int' when built on arm32.

    ../drivers/dma/imx-dma.c: In function ‘imxdma_sg_next’:
    ../include/linux/kernel.h:846:29: warning: comparison of distinct pointer types lacks a cast
       (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
    ../include/linux/kernel.h:860:4: note: in expansion of macro ‘__typecheck’
       (__typecheck(x, y) && __no_side_effects(x, y))
    ../include/linux/kernel.h:870:24: note: in expansion of macro ‘__safe_cmp’
       __builtin_choose_expr(__safe_cmp(x, y), \
    ../include/linux/kernel.h:879:19: note: in expansion of macro ‘__careful_cmp’
       #define min(x, y) __careful_cmp(x, y, <)
    ../drivers/dma/imx-dma.c:288:8: note: in expansion of macro ‘min’
       now = min(d->len, sg_dma_len(sg));

Rework to use min_t() and pass in size_t as the type, so the minimum of the two
values is computed using the specified type.

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

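The fix boils down to a one-line change at the call site quoted above:

    now = min_t(size_t, d->len, sg_dma_len(sg));
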
2019-01-20  dmaengine: xilinx_dma: remove set but not used variable 'tail_segment'  (YueHaibing)

Fixes the gcc '-Wunused-but-set-variable' warning:

    drivers/dma/xilinx/xilinx_dma.c: In function 'xilinx_vdma_start_transfer':
    drivers/dma/xilinx/xilinx_dma.c:1104:33: warning: variable 'tail_segment' set but not used [-Wunused-but-set-variable]

The variable has not been used since commit b8349172b400 ("dmaengine: xilinx_dma:
Drop SG support for VDMA IP").

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-20  dmaengine: axi-dmac: Use struct_size() in kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-20  dmaengine: timb_dma: Use struct_size() in kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-20  dmaengine: tegra210-adma: Use struct_size() in devm_kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-20  Merge branch 'topic/qcom' into for-linus  (Vinod Koul)

2019-01-20  dmaengine: qcom_hidma: assign channel cookie correctly  (Shunyong Yang)

When dma_cookie_complete() is called in hidma_process_completed(),
dma_cookie_status() will return DMA_COMPLETE in hidma_tx_status(). Then,
hidma_txn_is_success() will be called and uses the channel cookie
mchan->last_success to do an additional DMA status check.

Current code assigns mchan->last_success after dma_cookie_complete(). This causes
a race condition where dma_cookie_status() returns DMA_COMPLETE before
mchan->last_success is assigned correctly. The race will cause hidma_tx_status()
to return DMA_ERROR even though the transaction actually succeeded. Moreover, in
the async_tx case, it will cause a timeout panic in async_tx_quiesce().

    Kernel panic - not syncing: async_tx_quiesce: DMA error waiting for transaction
    ...
    Call trace:
    [<ffff000008089994>] dump_backtrace+0x0/0x1f4
    [<ffff000008089bac>] show_stack+0x24/0x2c
    [<ffff00000891e198>] dump_stack+0x84/0xa8
    [<ffff0000080da544>] panic+0x12c/0x29c
    [<ffff0000045d0334>] async_tx_quiesce+0xa4/0xc8 [async_tx]
    [<ffff0000045d03c8>] async_trigger_callback+0x70/0x1c0 [async_tx]
    [<ffff0000048b7d74>] raid_run_ops+0x86c/0x1540 [raid456]
    [<ffff0000048bd084>] handle_stripe+0x5e8/0x1c7c [raid456]
    [<ffff0000048be9ec>] handle_active_stripes.isra.45+0x2d4/0x550 [raid456]
    [<ffff0000048beff4>] raid5d+0x38c/0x5d0 [raid456]
    [<ffff000008736538>] md_thread+0x108/0x168
    [<ffff0000080fb1cc>] kthread+0x10c/0x138
    [<ffff000008084d34>] ret_from_fork+0x10/0x18

Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

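In outline, the fix is to publish the cookie before completing it, so a concurrent
status query can never see DMA_COMPLETE with a stale last_success. A simplified
sketch of the completion path (variable names follow the commit text and may
differ from the driver):

    if (llstat == DMA_COMPLETE) {
            mchan->last_success = last_cookie;  /* record success first */
            result.result = DMA_TRANS_NOERROR;
    } else {
            result.result = DMA_TRANS_ABORTED;
    }

    dma_cookie_complete(&mdesc->desc);          /* only then mark it complete */
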
2019-01-20  dmaengine: qcom_hidma: initialize tx flags in hidma_prep_dma_*  (Shunyong Yang)

async_tx_test_ack() uses the flags in struct dma_async_tx_descriptor to check the
ACK status. As hidma reuses descriptors from a free list when
hidma_prep_dma_*(memcpy/memset) is called, the flag will stay ACKed if the
descriptor has been used before. This will cause a BUG_ON in async_tx_quiesce().

    kernel BUG at crypto/async_tx/async_tx.c:282!
    Internal error: Oops - BUG: 0 [#1] SMP
    ...
    task: ffff8017dd3ec000 task.stack: ffff8017dd3e8000
    PC is at async_tx_quiesce+0x54/0x78 [async_tx]
    LR is at async_trigger_callback+0x98/0x110 [async_tx]

This patch initializes the flags in dma_async_tx_descriptor from the flags passed
by the caller when hidma_prep_dma_*(memcpy/memset) is called.

Cc: Joey Zheng <yu.zheng@hxt-semitech.com>
Reviewed-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

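Conceptually the fix is a single assignment when a recycled descriptor is handed
back from hidma_prep_dma_*() (the field name is an assumption based on the commit
text):

    /* Overwrite any stale ACK flag left over from the descriptor's previous use */
    mdesc->desc.flags = flags;
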
2019-01-08  dmaengine: dw-axi-dmac: Fix trivia typo  (Andy Shevchenko)

Field name ststus_hi should be spelled as status_hi.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-08  dmaengine: imx-sdma: refine to load context only once  (Robin Gong)

The context needs to be loaded only once, before the channel starts running, but
currently sdma_config_channel() and the dma_prep_* callbacks duplicate
sdma_load_context(). Refine this so the context is loaded only once before the
channel runs and reloaded after the channel has been terminated.

Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-08  cross-tree: phase out dma_zalloc_coherent()  (Luis Chamberlain)

We already need to zero out memory for dma_alloc_coherent(), and as such using
dma_zalloc_coherent() is superfluous. Phase it out.

This change was generated with the following Coccinelle SmPL patch:

    @ replace_dma_zalloc_coherent @
    expression dev, size, data, handle, flags;
    @@

    -dma_zalloc_coherent(dev, size, handle, flags)
    +dma_alloc_coherent(dev, size, handle, flags)

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
[hch: re-ran the script on the latest tree]
Signed-off-by: Christoph Hellwig <hch@lst.de>

2019-01-07  dmaengine: fsl-edma: use struct_size() in kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Tested-by: Angelo Dureghello <angelo@sysam.it>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: tegra-apb: Use struct_size() in devm_kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: qcom: bam_dma: use struct_size() in kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: st_fdma: use struct_size() in kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dma-jz4780: Use struct_size() in devm_kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = devm_kzalloc(dev, sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = devm_kzalloc(dev, struct_size(instance, entry, count), GFP_KERNEL);

This issue was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: bcm2835: Use struct_size() in kzalloc()  (Gustavo A. R. Silva)

One of the more common cases of allocation size calculations is finding the size
of a structure that has a zero-sized array at the end, along with memory for some
number of elements for that array. For example:

    struct foo {
        int stuff;
        void *entry[];
    };

    instance = kzalloc(sizeof(struct foo) + sizeof(void *) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the
new struct_size() helper:

    instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dw: convert to SPDX identifiers  (Andy Shevchenko)

This patch updates the licensing to use SPDX-License-Identifier tags instead of
verbose license text.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dw: Don't pollute CTL_LO on iDMA 32-bit  (Andy Shevchenko)

Intel iDMA 32-bit doesn't have a concept of bus masters, and thus there is no need
to set up any kind of masters in the CTL_LO register. Moreover, the burst size for
memory-to-memory transfers is not what it says it is; we need a corrected list of
possible sizes. Note that the size of 8 items, each of up to 4 bytes, is chosen
because of the maximum of 1/2 FIFO, which is 64 bytes on Intel Merrifield.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dw: Reset DRAIN bit when resume the channel  (Andy Shevchenko)

For Intel iDMA 32-bit the channel can be drained on suspend. We need to reset the
bit on resume to restore the status quo.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dw: Split DW and iDMA 32-bit operations  (Andy Shevchenko)

This is a fairly big refactoring that should have been done in the first place,
when Intel iDMA 32-bit support appeared. It splits out the operations that differ
between the Synopsys DesignWare and Intel iDMA 32-bit controllers. No functional
change intended.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dw: Remove unused internal property  (Andy Shevchenko)

All known devices which use DT for configuration support memory-to-memory
transfers, so enable it by default. The remaining two cases, i.e. Intel Quark and
PPC460ex, instantiate the DMA driver and use its channels exclusively for
hardware, which means there is no channel available for any other purpose anyway.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: dw: Remove misleading is_private property  (Andy Shevchenko)

Commit a9ddb575d6d6 ("dmaengine: dw_dmac: Enhance device tree support") introduced
the is_private property without a clear understanding of what it means.

First of all, the documentation defines the DMA_PRIVATE capability as follows.

Documentation/crypto/async-tx-api.txt:

    The DMA_PRIVATE capability flag is used to tag dma devices that should not be
    used by the general-purpose allocator. It can be set at initialization time
    if it is known that a channel will always be private. Alternatively, it is
    set when dma_request_channel() finds an unused "public" channel.

    A couple caveats to note when implementing a driver and consumer:
    1/ Once a channel has been privately allocated it will no longer be
       considered by the general-purpose allocator even after a call to
       dma_release_channel().
    2/ Since capabilities are specified at the device level a dma_device with
       multiple channels will either have all channels public, or all channels
       private.

Documentation/driver-api/dmaengine/provider.rst:

    - DMA_PRIVATE
      The devices only supports slave transfers, and as such isn't available
      for async transfers.

The capability was introduced by commit 59b5ec21446b ("dmaengine: introduce
dma_request_channel and private channels") and some of that code has not changed
since. Taking the above into consideration, and the fact that on all known
platforms the Synopsys DesignWare DMA engine is attached to serve slave transfers,
the DMA_PRIVATE capability must be enabled for this device unconditionally.
Otherwise, as rightfully noted in drivers/dma/at_xdmac.c:

    /*
     * Without DMA_PRIVATE the driver is not able to allocate more than
     * one channel, second allocation fails in private_candidate.
     */

because of the caveats mentioned in the documentation excerpts above.

So, remove the conditional around DMA_PRIVATE along with its leftovers. If someone
wonders, DMA_PRIVATE need not be used if and only if all channels of the DMA
controller are supposed to serve memory-to-memory-like operations. For example,
EP93xx has two controllers, one of which can only perform memory-to-memory
transfers.

Note, this change doesn't affect dmatest's ability to test such controllers.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (maintainer:SERIAL DRIVERS)
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

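With the conditional gone, the capability is simply always set at probe time; a
minimal sketch (the dw->dma naming is assumed):

    /* The controller only ever serves slave transfers on known platforms,
     * so keep its channels away from the general-purpose allocator.
     */
    dma_cap_set(DMA_PRIVATE, dw->dma.cap_mask);
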
2019-01-07  dmaengine: dw: Add missed multi-block support for iDMA 32-bit  (Andy Shevchenko)

Intel integrated DMA 32-bit supports multi-block transfers. Add the missing
setting to the platform data.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: xilinx_dma: Drop SG support for VDMA IP  (Andrea Merello)

xilinx_vdma_start_transfer() is used only for the VDMA IP, yet it contains code
conditional on the has_sg variable. has_sg is set only when the HW supports SG
mode, which is never true for the VDMA IP.

This patch drops the never-taken branches.

Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: xilinx_dma: autodetect whether the HW supports scatter-gather  (Andrea Merello)

The AXIDMA and CDMA HW can be built as either the direct-access or the
scatter-gather version. These are SW-incompatible.

The driver can handle both versions: a DT property was used to tell the driver
whether to assume the HW is in scatter-gather mode. This patch makes the driver
autodetect this information; the DT property is not required anymore.

No changes for VDMA.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: xilinx_dma: program hardware supported buffer length  (Radhey Shyam Pandey)

The AXI-DMA IP supports a configurable (c_sg_length_width) buffer length register
width. Read the buffer length width from the xlnx,sg-length-width DT property and
ensure that the driver doesn't program a buffer length exceeding the supported
limit. For VDMA and CDMA there is no change.

Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
[rebase, reword]
Signed-off-by: Vinod Koul <vkoul@kernel.org>

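A sketch of how such a property can be read and turned into a length mask at probe
time (the fallback constant and the max_buffer_len field are assumptions for
illustration):

    u32 width;

    /* Fall back to the previous fixed limit if the property is absent */
    if (of_property_read_u32(node, "xlnx,sg-length-width", &width))
            width = XILINX_DMA_MAX_TRANS_LEN_BITS;

    xdev->max_buffer_len = GENMASK(width - 1, 0);
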
2019-01-07  dmaengine: xilinx_dma: in axidma slave_sg and dma_cyclic mode align split descriptors  (Andrea Merello)

Whenever a single or cyclic transaction is prepared, the driver may split it over
several SG descriptors in order to deal with the HW maximum transfer length. This
could end up with DMA operations starting from a misaligned address, which seems
fatal for the HW if DRE (Data Realignment Engine) is not enabled.

This patch adjusts the transfer size to make sure all operations start from an
aligned address.

Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: xilinx_dma: commonize DMA copy size calculation  (Andrea Merello)

This patch removes a bit of duplicated code by introducing a new function that
implements the calculation of the DMA copy size, and prepares for changes to the
copy size calculation that will happen in following patches.

Suggested-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: tegra: add tracepoints to driver  (Ben Dooks)

Add some tracepoints to the driver to allow debugging via the trace pipe.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: tegra: reduce channel name field size  (Ben Dooks)

The name field is used for "apbdma.%d", which is rarely going to be more than 10
bytes, so reduce the size from 30 to 12. The field is only used for interrupt
registration, so it is not critical to the operation of the driver either.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: tegra: fix incorrect case of DMA  (Ben Dooks)

The use of "Dma" is annoying: since it is an acronym, it should be all upper case.
Fix this throughout the driver.

Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: tegra: make byte counters unsigned int  (Ben Dooks)

The buffer byte request length and counter are declared as signed integers, but
the values should never be below zero, so make them unsigned integers instead.

Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: tegra: avoid overflow of byte tracking  (Ben Dooks)

The dma_desc->bytes_transferred counter tracks the number of bytes moved by the
DMA channel and is used to calculate the information passed back in the
tegra_dma_tx_status callback, which is usually fine.

When the DMA channel is configured as continuous, however, the bytes_transferred
counter will increase over time and eventually overflow to become negative, so
the residue count becomes invalid and the ALSA sound-dma code reports invalid
hardware pointer values to the application. This results in some users becoming
confused about the playout position and putting audio data in the wrong place.

To fix this issue, always keep the bytes_transferred field modulo the size of the
request. We only do this in the cyclic-transfer-done ISR, as anyone attempting to
move 2 GiB of DMA data in one transfer is unlikely.

Note, we don't fix the issue that we should /never/ transfer a negative number of
bytes, so we could make those fields unsigned.

Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

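The essence of the fix in the cyclic completion path, as a sketch (field names are
taken from the commit text and may not match the driver exactly):

    /* Keep the running count within one request so the residue reported by
     * tegra_dma_tx_status() can never wrap negative.
     */
    dma_desc->bytes_transferred += sgreq->req_len;
    dma_desc->bytes_transferred %= dma_desc->bytes_requested;
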
2019-01-07  dmaengine: stm32-mdma: Add PM Runtime support  (Pierre-Yves MORDRET)

Use the pm_runtime engine for clock management purposes.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: stm32-dmamux: Add PM Runtime support  (Pierre-Yves MORDRET)

Use the pm_runtime engine for clock management purposes.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: stm32-dma: Add PM Runtime support  (Pierre-Yves MORDRET)

Use the pm_runtime engine for clock management purposes.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

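The three stm32 changes above follow the same runtime-PM pattern; a generic sketch
(structure and callback names are illustrative, modelled on stm32-dma):

    static int stm32_dma_runtime_suspend(struct device *dev)
    {
            struct stm32_dma_device *dmadev = dev_get_drvdata(dev);

            clk_disable_unprepare(dmadev->clk);
            return 0;
    }

    static int stm32_dma_runtime_resume(struct device *dev)
    {
            struct stm32_dma_device *dmadev = dev_get_drvdata(dev);

            return clk_prepare_enable(dmadev->clk);
    }

    static const struct dev_pm_ops stm32_dma_pm_ops = {
            SET_RUNTIME_PM_OPS(stm32_dma_runtime_suspend,
                               stm32_dma_runtime_resume, NULL)
    };
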
2019-01-07  dmaengine: stm32-dma: check FIFO error interrupt enable  (Pierre-Yves MORDRET)

To avoid false FIFO error detection, check that the FIFO Error interrupt is
enabled before raising any error. This prevents spurious FIFO errors from being
reported where they shouldn't be.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: stm32-mdma: Add a check on read_u32_array  (Aditya Pakki)

In stm32_mdma_probe(), after reading the property "st,ahb-addr-masks" to obtain
its element count, the return value of the second array-read call is not checked
for failure. Check it and send the error upstream, closing the time-of-check to
time-of-use window on "count".

Signed-off-by: Aditya Pakki <pakki001@umn.edu>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

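A sketch of the checked second read (the helper and field names are assumptions
based on the driver's property handling):

    /* The second read fills the array; propagate failure instead of ignoring it */
    ret = device_property_read_u32_array(&pdev->dev, "st,ahb-addr-masks",
                                         dmadev->ahb_addr_masks, count);
    if (ret)
            return ret;
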
2019-01-07  dmaengine: qcom_hidma: Check for driver register failure  (Aditya Pakki)

While initializing the driver, the function platform_driver_register can fail and
return an error. Consistent with other invocations, this patch returns the error
upstream.

Signed-off-by: Aditya Pakki <pakki001@umn.edu>
Acked-by: Sinan Kaya <okaya@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

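The change amounts to propagating the registration result from the init path; a
sketch (the driver structure name is assumed):

    static int __init hidma_mgmt_init(void)
    {
            /* Instead of ignoring the result and returning 0 unconditionally */
            return platform_driver_register(&hidma_mgmt_driver);
    }
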
2019-01-07  dmaengine: mv_xor: Fix a missing check in mv_xor_channel_add  (Aditya Pakki)

dma_async_device_register() may fail and return an error, but the checks in
mv_xor_channel_add() do not cover this. The fix handles the error by freeing the
resources.

Signed-off-by: Aditya Pakki <pakki001@umn.edu>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

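In outline, the registration result is now checked and the channel is unwound on
failure; a sketch (the error label is illustrative):

    ret = dma_async_device_register(dma_dev);
    if (ret)
            goto err_free_irq;   /* release IRQ, DMA pool and other resources */
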
2019-01-07  dmaengine: fsl-qdma: add MODULE_LICENSE  (Arnd Bergmann)

The newly added driver lacks a MODULE_LICENSE tag, which now produces a warning:

    WARNING: modpost: missing MODULE_LICENSE() in drivers/dma/fsl-qdma.o

Add the license according to the SPDX specifier.

Fixes: 75628c149b0d ("dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

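Assuming the file carries a GPL-2.0 SPDX tag, the fix is the matching one-liner:

    MODULE_LICENSE("GPL v2");
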
2019-01-07  dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs  (Peng Ma)

The NXP Queue DMA controller (qDMA) on Layerscape SoCs supports channel
virtualization by allowing DMA jobs to be enqueued into different command queues.

Signed-off-by: Wen He <wen.he_1@nxp.com>
Signed-off-by: Jiaheng Fan <jiaheng.fan@nxp.com>
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

2019-01-07  dmaengine: fsldma: Adding macro FSL_DMA_IN/OUT implement for ARM platform  (Peng Ma)

This patch adds the FSL_DMA_IN/OUT macro implementation for the ARM platform.

Signed-off-by: Wen He <wen.he_1@nxp.com>
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>

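A sketch of what such accessors can look like on ARM, where big- vs little-endian
register access is selected at run time (the macro bodies and the
FSL_DMA_BIG_ENDIAN flag name are illustrative, not the exact driver code):

    #define FSL_DMA_IN(fsl_dma, addr, width)                            \
            (((fsl_dma)->feature & FSL_DMA_BIG_ENDIAN) ?                \
                    ioread##width##be(addr) : ioread##width(addr))

    #define FSL_DMA_OUT(fsl_dma, addr, val, width)                      \
            (((fsl_dma)->feature & FSL_DMA_BIG_ENDIAN) ?                \
                    iowrite##width##be(val, addr) : iowrite##width(val, addr))

A register read would then look like FSL_DMA_IN(chan, &regs->sr, 32), expanding to
ioread32be() or ioread32() depending on the controller's endianness.
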
2019-01-07  dmaengine: fsldma: Replace DMA_IN/OUT by FSL_DMA_IN/OUT  (Wen He)

This patch implements a standard set of macro call functions to be used by the
NXP DMA drivers.

Signed-off-by: Wen He <wen.he_1@nxp.com>
Signed-off-by: Peng Ma <peng.ma@nxp.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>