path: root/drivers/dma
2017-10-08  dmaengine: Add STM32 MDMA driver (Pierre-Yves MORDRET)
This patch adds the driver for the STM32 MDMA controller. Master Direct Memory Access (MDMA) is used to provide high-speed data transfer between memory and memory or between peripherals and memory. The MDMA controller provides a master AXI interface for main memory and peripheral register access (system access port) and a master AHB interface only for Cortex-M7 TCM memory access (TCM access port). MDMA works in conjunction with the standard DMA controllers (DMA1 or DMA2). It offers up to 64 channels, each dedicated to managing memory access requests from one of the DMA stream memory buffers or other peripherals (with integrated FIFO).
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-28  dmaengine: altera: fix spinlock usage (Sylvain Lesne)
Since this lock is acquired in both process and IRQ context, failing to disable IRQs when trying to acquire the lock in process context can lead to deadlocks.
Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
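For reference, the usual pattern for taking a lock shared with an IRQ handler from process context looks roughly like this (a minimal sketch; the device and field names are illustrative, not the driver's actual ones):

    unsigned long flags;

    /* Disables local IRQs while the lock is held, so the IRQ handler
     * on this CPU cannot preempt us and deadlock on the same lock. */
    spin_lock_irqsave(&mdev->lock, flags);
    /* ... touch state shared with the IRQ handler ... */
    spin_unlock_irqrestore(&mdev->lock, flags);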
2017-09-28  dmaengine: altera: fix response FIFO emptying (Sylvain Lesne)
Commit 6084fc2ec478 ("dmaengine: altera: Use macros instead of structs to describe the registers") introduced a minus sign before a register offset. This leads to soft-locks of the DMA controller, since reading the last status byte is required to pop the response from the FIFO. Failing to do so will lead to a full FIFO, which means that the DMA controller will stop processing descriptors.
Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
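A minimal sketch of the corrected access (the macro and field names here are assumptions for illustration, not the driver's exact ones):

    /* Reading the final status register pops one response off the
     * FIFO.  The bug was a stray minus sign, i.e. reading from
     * mdev->resp - RESP_STATUS_OFFSET instead of the address below,
     * so the FIFO was never drained. */
    u32 status = ioread32(mdev->resp + RESP_STATUS_OFFSET);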
2017-09-27  dmaengine: sa11x0: add DMA filters (Russell King)
Add DMA filters for the sa11x0 DMA channels. This will allow us to migrate away from directly using the DMA filter function in drivers.
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-27  dmaengine: Add STM32 DMAMUX driver (Pierre-Yves MORDRET)
This patch implements the STM32 DMAMUX driver. The DMAMUX request multiplexer allows routing of a DMA request line between the peripherals and the DMA controllers of the product. The routing function is ensured by a programmable multi-channel DMA request line multiplexer. Each channel selects a unique DMA request line, unconditionally or synchronously with events from its DMAMUX synchronization inputs. The DMAMUX may also be used as a DMA request generator from programmable events on its input trigger signals.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-25  dmaengine: qcom-bam: Process multiple pending descriptors (Sricharan R)
The bam dmaengine has a circular FIFO to which we add hw descriptors that describe the transaction. The FIFO has space for about 4096 hw descriptors. Currently we add one descriptor, wait for it to complete with an interrupt, and only then add the next pending descriptor. In this way the FIFO is underutilized, since only one descriptor is processed at a time, although there is space in the FIFO for the BAM to process more. Instead, keep adding descriptors to the FIFO till it is full; that allows the BAM to continue working on the next descriptor immediately after signalling the completion interrupt for the previous one. Also, when the client has not set DMA_PREP_INTERRUPT for a descriptor, do not configure the BAM to trigger an interrupt upon completion of that descriptor. This way we get an interrupt only for the descriptors for which DMA_PREP_INTERRUPT was requested, and there we signal completion of all the previously completed descriptors. So we still do callbacks for all requested descriptors, just with a reduced number of interrupts.

CURRENT:
             ------      -------      ---------------
            |DES 0 |    |DESC 1|     |DESC 2 + INT  |
             ------      -------      ---------------
               |            |               |
               |            |               |
 INTERRUPT:  (INT)        (INT)           (INT)
 CALLBACK:    (CB)         (CB)            (CB)

 MTD_SPEEDTEST READ PAGE:  3560 KiB/s
 MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
 IOZONE READ:              2456 KB/s
 IOZONE WRITE:             1230 KB/s

 bam dma interrupts (after tests): 96508

CHANGE:
             ------      -------      -------------
            |DES 0 |    |DESC 1|     |DESC 2 + INT |
             ------      -------      -------------
                                            |
                                            |
 INTERRUPT:                               (INT)
 CALLBACK:                          (CB for 0, 1, 2)

 MTD_SPEEDTEST READ PAGE:  3860 KiB/s
 MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
 IOZONE READ:              2677 KB/s
 IOZONE WRITE:             1308 KB/s

 bam dma interrupts (after tests): 58806

Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
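The interrupt-reduction half of the change boils down to a check like the following when building each hw descriptor (a sketch; treat the descriptor field and flag names as illustrative of the BAM driver, not verbatim):

    /* Only ask the hardware to interrupt for descriptors the client
     * explicitly flagged; completions of everything queued before
     * them are inferred and their callbacks run from the same IRQ. */
    if (flags & DMA_PREP_INTERRUPT)
            desc->flags |= DESC_FLAG_INT;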
2017-09-21  dmaengine: ti-dma-crossbar: Fix possible race condition with dma_inuse (Peter Ujfalusi)
When looking for an unused xbar_out lane we should also protect the set_bit() call with the same mutex, to protect against concurrent threads picking the same ID.
Fixes: ec9bfa1e1a796 ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
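The fixed pattern looks roughly like this (field names follow the commit text; the surrounding error handling is abbreviated):

    mutex_lock(&xbar->mutex);
    ret = find_first_zero_bit(xbar->dma_inuse, xbar->dma_requests);
    if (ret == xbar->dma_requests) {
            mutex_unlock(&xbar->mutex);
            return ERR_PTR(-ENOMEM);
    }
    /* Claim the lane before dropping the mutex, so a concurrent
     * caller cannot find the same bit still clear. */
    set_bit(ret, xbar->dma_inuse);
    mutex_unlock(&xbar->mutex);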
2017-09-21  dmaengine: sun6i: use of_device_get_match_data (Corentin Labbe)
Using of_device_get_match_data reduces the code size a bit. Furthermore, it prevents an improbable dereference when of_match_device() returns NULL.
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
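The change amounts to this pattern (sun6i_dma_config is the driver's per-SoC config type; treat the surrounding variable names as illustrative):

    /* Before: match = of_match_device(...); cfg = match->data;
     * would dereference NULL if of_match_device() ever failed. */
    const struct sun6i_dma_config *cfg;

    cfg = of_device_get_match_data(&pdev->dev);
    if (!cfg)
            return -ENODEV;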
2017-09-21  dmaengine: edma: Align the memcpy acnt array size with the transfer (Peter Ujfalusi)
Memory-to-memory transfers do not have any special alignment needs regarding the acnt array size, but if one of the areas is in a memory-mapped region (like PCIe memory), we need to make sure that the acnt array size is aligned with the memcpy parameters. Before the "dmaengine: edma: Optimize memcpy operation" change, the memcpy was set up in a different way: acnt == number of bytes in a word based on __ffs(src | dest | len), with bcnt and ccnt looping the necessary number of words to complete the transfer. Instead of reverting the commit we can fix it to make sure that the ACNT size is aligned to the transfer.
Fixes: df6694f80365a (dmaengine: edma: Optimize memcpy operation)
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
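The alignment rule described above can be sketched as follows (variable names illustrative):

    /* Widest power-of-two word size that divides the source address,
     * destination address and length alike. */
    acnt = 1 << __ffs(src | dest | len);
    bcnt = len / acnt;      /* words needed to complete the transfer */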
2017-09-21  dmaengine: imx-sdma: Correct src_addr_widths and directions (Nicolin Chen)
The driver already supports DMA_DEV_TO_DEV in sdma_config(), and DMA_SLAVE_BUSWIDTH_2_BYTES and DMA_SLAVE_BUSWIDTH_1_BYTE in sdma_prep_slave_sg(). So this patch adds them to the capability lists.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
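The additions are one-liners on the dma_device capability masks, roughly of this shape (a sketch; exact field placement in the driver may differ):

    /* Advertise what sdma_config()/sdma_prep_slave_sg() already accept. */
    sdma->dma_device.src_addr_widths |= BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                        BIT(DMA_SLAVE_BUSWIDTH_2_BYTES);
    sdma->dma_device.directions |= BIT(DMA_DEV_TO_DEV);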
2017-09-17  dmaengine: xilinx_dma: Move enum xdma_ip_type to driver file (Lars-Peter Clausen)
The enum xdma_ip_type is only used inside the Xilinx DMA driver and is not exported to any consumers (nor should it be). So move it from the global header to the driver file itself.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17  dmaengine: axi-dmac: Fix software cyclic mode (Lars-Peter Clausen)
When running in software cyclic mode the driver currently does not go back to the first segment once the last segment has been reached, effectively making the transfer non-cyclic. Fix this by going back to the first segment once the last segment has been reached for cyclic transfers. Special care needs to be taken to avoid a segment being submitted multiple times concurrently, which could happen for transfers with a number of segments that is smaller than the DMA controller's internal queue.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17  dmaengine: axi-dmac: Only use hardware cyclic mode for single segment transfers (Lars-Peter Clausen)
In hardware cyclic mode the submitted segment is repeated. This means hardware cyclic mode can only be used if the transfer has a single segment.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-07  Merge tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds)
Pull dmaengine updates from Vinod Koul: "This one features the usual updates to the drivers and one good part of removing DMA_SG from core as it has no users.

 Summary:
  - Remove DMA_SG support as we have no users for this feature
  - New driver for Altera / Intel mSGDMA IP core
  - Support for memset in dmatest and qcom_hidma driver
  - Updates for non-cyclic mode in k3dma, a bunch of updates in bam_dma and bcm sba-raid
  - Constify device ids across drivers"

* tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (52 commits)
  dmaengine: sun6i: support V3s SoC variant
  dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
  dmaengine: rcar-dmac: document R8A77970 bindings
  dmaengine: xilinx_dma: Fix error code format specifier
  dmaengine: altera: Use macros instead of structs to describe the registers
  dmaengine: ti-dma-crossbar: Fix dra7 reserve function
  dmaengine: pl330: constify amba_id
  dmaengine: pl08x: constify amba_id
  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
  dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
  dmaengine: bcm-sba-raid: Add debugfs support
  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
  dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
  dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
  dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
  dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
  dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
  dmaengine: bcm-sba-raid: Increase number of free sba_request
  dmaengine: bcm-sba-raid: Allow arbitrary number free sba_request
  dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
  ...
2017-09-06  Merge branch 'topic/dmatest' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/qcom' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/ppc4xx' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/of' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/k3dma' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/ioat' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/bcm' into for-linus (Vinod Koul)
2017-09-06  Merge branch 'topic/altera' into for-linus (Vinod Koul)
2017-09-05  dmaengine: sun6i: support V3s SoC variant (Icenowy Zheng)
Allwinner V3s has a DMA engine similar to the ones from A31, but with fewer channels and DRQs. Add support for it.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05  dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk (Icenowy Zheng)
Originally we enabled a special gate bit only when the compatible string indicated A23/33. But according to BSP sources and user manuals, more SoCs will need this gate bit. So make it a common quirk configured in the config struct.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05  dmaengine: xilinx_dma: Fix error code format specifier (Lars-Peter Clausen)
'err' is a signed int and error codes are typically negative numbers, so use '%d' instead of '%u' to format the error code in the error message.
Fixes: ba16db36b5dd ("dmaengine: vdma: Add clock support")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
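In other words (a sketch; the message text is illustrative, not the driver's exact string):

    dev_err(&pdev->dev, "failed to enable clock: %d\n", err);  /* %d, not %u */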
2017-08-29  dmaengine: altera: Use macros instead of structs to describe the registers (Stefan Roese)
This patch moves from a struct declaration for the DMA controller registers to macros with offsets from the base address. This is mainly done to remove the sparse warnings, since the function parameter of ioread32/iowrite32 is "void __iomem *" instead of a pointer to struct members. With this patch applied, no sparse warning is seen anymore. Please note that the struct for the descriptors is still kept in place, as the code largely accesses the struct members as internal variables before the complete struct is copied into the descriptor FIFO of the DMA controller. Additionally, this patch removes two "variable xxx set but not used" warnings seen when compiling with "W=1". The registers need to be read to flush the response FIFO, but nothing needs to be done with their values. So the code is correct here and the warning is a false positive.
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
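Schematically, the conversion looks like this (the offsets and names below are illustrative, not the IP core's real register map):

    #define MSGDMA_CSR_STATUS       0x00
    #define MSGDMA_CSR_CONTROL      0x04

    /* Pointer arithmetic on a "void __iomem *" base keeps sparse
     * happy, unlike taking the address of a struct member. */
    u32 status = ioread32(mdev->csr + MSGDMA_CSR_STATUS);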
2017-08-28  dmaengine: ti-dma-crossbar: Fix dra7 reserve function (Alexander Smirnov)
The DMA crossbar uses the 'xbar->dma_inuse' variable to manage allocated routes. Each bit represents the respective DMA channel. If the channel is free, the bit is set to '0'; if the channel is allocated, the bit should be set to '1'. In the reserve function, the bits for the requested DMA channels are cleared, so they are not really reserved, but freed and become ready for allocation.
Signed-off-by: Alexander Smirnov <asmirnov@ilbers.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
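The fix is essentially one bitop (the bitmap name follows the commit text; the loop variable is illustrative):

    /* A reserved channel must be marked in-use, not freed: */
    set_bit(i, xbar->dma_inuse);    /* the buggy code effectively did clear_bit() */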
2017-08-28  dmaengine: pl330: constify amba_id (Arvind Yadav)
amba_id tables are not supposed to change at runtime, and all functions work with const amba_id. So mark the non-const structs as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
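The change is mechanical; with illustrative ID values it looks like:

    static const struct amba_id pl330_ids[] = {   /* was non-const */
            { .id = 0x00041330, .mask = 0x000fffff },
            { 0, 0 },
    };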
2017-08-28  dmaengine: pl08x: constify amba_id (Arvind Yadav)
amba_id tables are not supposed to change at runtime, and all functions work with const amba_id. So mark the non-const structs as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED (Anup Patel)
The SBA_REQUEST_STATE_COMPLETED state was added to keep track of sba_requests which had completed but could not be freed because the underlying Async Tx descriptor was not ACKed by the DMA client. Instead of the above, we can free an sba_request with a non-ACKed Async Tx descriptor, and sba_alloc_request() will ensure that it always allocates an sba_request with an ACKed Async Tx descriptor. This alternate approach makes the SBA_REQUEST_STATE_COMPLETED state redundant, hence this patch removes it.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending (Anup Patel)
We should explicitly ACK the mailbox message because after sending a message we can learn the send status via the error attribute of brcm_message. This will also help SBA-RAID use the "txdone_ack" method whenever the mailbox controller supports it.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Add debugfs support (Anup Patel)
This patch adds debugfs support to report driver stats, which in turn will help debug hang or error situations.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED (Anup Patel)
The SBA_REQUEST_STATE_RECEIVED state is now redundant because received sba_requests are immediately freed or moved to the completed list in sba_process_received_request(). This patch removes the redundant SBA_REQUEST_STATE_RECEIVED state.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests() (Anup Patel)
Currently, sba_process_deferred_requests() handles both pending and completed sba_requests, which is unnecessary overhead for sba_issue_pending() because completed sba_request handling is not required there. This patch breaks sba_process_deferred_requests() into two parts: sba_process_received_request() and _sba_process_pending_requests(). sba_issue_pending() will only process pending sba_requests, by calling _sba_process_pending_requests(); this improves sba_issue_pending(). sba_receive_message() will only process received sba_requests, by calling sba_process_received_request() for each of them. sba_process_received_request() will also call _sba_process_pending_requests() after handling a received sba_request, because we might have pending sba_requests that were not submitted by the previous call to sba_issue_pending().
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Pre-ack async tx descriptor (Anup Patel)
We should pre-ack the async tx descriptor at the time of allocating the sba_request (just like other RAID drivers).
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Peek mbox when we have no free requests (Anup Patel)
When setting up a RAID array on several NVMe disks, we observed that sba_alloc_request() starts failing (due to no free requests left) and RAID array setup becomes very slow. To improve performance, we peek the mbox channel when we have no free requests. This improves RAID array setup performance because mbox requests that were completed but not yet processed by the mbox completion worker get processed immediately by the mbox channel peek.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Alloc resources before registering DMA device (Anup Patel)
We should allocate DMA channel resources before registering the DMA device in sba_probe(), because we can get a DMA request soon after registering the DMA device. If DMA channel resources are not allocated before the first DMA request, the SBA-RAID driver will crash.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration (Anup Patel)
The pending sba_request list can become very long in real-life usage (e.g. when setting up a RAID array), which can cause sba_issue_pending() to run for a long duration. This patch adds a common sba_process_deferred_requests() that processes only a few completed and pending requests at a time, so that it finishes in a short duration. We use this common sba_process_deferred_requests() in both sba_issue_pending() and sba_receive_message().
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Increase number of free sba_request (Anup Patel)
Currently, we have only 1024 free sba_requests created by sba_prealloc_channel_resources(). This is too low, and the prep_xxx() callbacks start failing more often at the time of RAID array setup over NVMe disks. This patch sets the number of free sba_requests created by sba_prealloc_channel_resources() to <number_of_mailbox_channels> x 8192. With this we have a sufficient number of free sba_requests, and prep_xxx() callback failures become very unlikely.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Allow arbitrary number of free sba_request (Anup Patel)
Currently, we cannot have an arbitrary number of free sba_requests, because sba_prealloc_channel_resources() allocates an array of sba_requests using devm_kcalloc(), and kcalloc() cannot provide memory beyond a certain size. This patch removes "reqs" (the sba_request array) from sba_device and makes "cmds" a flexible array member (instead of a pointer) in sba_request. This lets sba_prealloc_channel_resources() allocate an sba_request and its associated SBA commands in one allocation, which in turn allows an arbitrary number of free sba_requests.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
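A sketch of the layout change (struct and member names follow the commit text where possible; max_cmds_per_req and the elided members are hypothetical):

    struct sba_request {
            /* ... list head, flags, async tx descriptor ... */
            struct brcm_sba_command cmds[];  /* flexible array member */
    };

    /* One allocation per request: no huge kcalloc() array, so the
     * allocator no longer caps the number of free requests. */
    req = devm_kzalloc(sba->dev,
                       sizeof(*req) +
                       max_cmds_per_req * sizeof(req->cmds[0]),
                       GFP_KERNEL);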
2017-08-28  dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device (Anup Patel)
The reqs_free_count member of sba_device is not used anywhere, hence there is no point in tracking the number of free sba_requests.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Remove redundant resp_dma from sba_request (Anup Patel)
Both resp and resp_dma are redundant in sba_request, because resp is unused and resp_dma carries the same information present in tx.phys of sba_request. This patch removes both resp and resp_dma from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Remove redundant next_count from sba_request (Anup Patel)
The next_count in sba_request is redundant, because the same information is captured by next_pending_count. This patch removes next_count from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Common flags for sba_request state and fence (Anup Patel)
This patch merges the sba_request state and fence into common sba_request flags. The sba_request flags not only save memory but can also be extended in the future without adding new members. We also make each sba_request state a separate bit in the sba_request flags, to help debug situations where an sba_request is accidentally in two states.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Reduce locking context in sba_alloc_request() (Anup Patel)
We don't need to hold "sba->reqs_lock" for a long time in sba_alloc_request(), because lock protection is not required when initializing the members of "struct sba_request".
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: bcm-sba-raid: Minor improvements in comments (Anup Patel)
This patch makes the following improvements to comments:
1. Make section comments consistent across the driver by avoiding " SBA " in some of the comments
2. Add/update a few more section comments
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: qcom: bam_dma: add command descriptor flag (Abhishek Sahu)
If the DMA_PREP_CMD flag is passed to prep_slave_sg, then the peripheral driver has passed data in the BAM command descriptor format, and the BAM driver should set the CMD bit for each of the HW descriptors.
Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28  dmaengine: remove BUG_ON while registering devices (Vinod Koul)
The DMAengine core uses BUG_ON to check for mandatory operations and for ones required by the advertised capabilities. Since a failed check does not warrant crashing the kernel, remove the BUG_ON calls and instead return errors while logging them gracefully.
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
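The shape of the change, for one representative check (a sketch; the message text is illustrative):

    /* Before: BUG_ON(!device->dev); -- crashes the whole machine. */
    if (!device->dev) {
            pr_err("DMAdevice must have dev\n");
            return -EIO;
    }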
2017-08-25  dmaengine: rcar-dmac: initialize all data before registering IRQ handler (Kuninori Morimoto)
Anton Volkov noticed that engine->dev is NULL before of_dma_controller_register() in probe. Thus there might be a NULL pointer dereference in rcar_dmac_chan_start_xfer while accessing chan->chan.device->dev, which is equal to (&dmac->engine)->dev. For the same reason, the same and similar things will happen if we don't initialize all necessary data before calling the register-IRQ function. To make the code safer, this patch initializes all necessary data before the IRQ handler is registered.
Reported-by: Anton Volkov <avolkov@ispras.ru>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
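The safe ordering, schematically (the handler and field names are assumptions for illustration):

    /* 1. Fully initialize everything the handler may touch ... */
    dmac->engine.dev = &pdev->dev;
    /* ... channels, descriptor pools, engine state, etc. ... */

    /* 2. ... and only then expose it to interrupts. */
    ret = devm_request_irq(&pdev->dev, irq, rcar_dmac_isr_error, 0,
                           "rcar-dmac:error", dmac);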
2017-08-25  dmaengine: k3dma: remove useless WARN_ON_ONCE() (Antonio Borneo)
Commit 36387a2b1f62b5c087c5fe6f0f7b23b94f722ad7 ("k3dma: Fix memory handling in preparation for cyclic mode") added a few calls to WARN_ON_ONCE() to track the use of ds_run/ds_done. After the two fixes:
- dmaengine: k3dma: fix non-cyclic mode
- dmaengine: k3dma: fix re-free of the same descriptor
the behaviour of ds_run/ds_done is properly fixed. The remaining WARN_ON_ONCE() calls are never triggered and can be removed.
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>