Diffstat:
 Documentation/driver-api/dmaengine/client.rst | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)
diff --git a/Documentation/driver-api/dmaengine/client.rst b/Documentation/driver-api/dmaengine/client.rst
index 2104830a99ae..d491e385d61a 100644
--- a/Documentation/driver-api/dmaengine/client.rst
+++ b/Documentation/driver-api/dmaengine/client.rst
@@ -5,7 +5,7 @@ DMA Engine API Guide
Vinod Koul <vinod dot koul at intel.com>
.. note:: For DMA Engine usage in async_tx please see:
- ``Documentation/crypto/async-tx-api.txt``
+ ``Documentation/crypto/async-tx-api.rst``
Below is a guide to device driver writers on how to use the Slave-DMA API of the
@@ -80,13 +80,19 @@ The details of these operations are:
- slave_sg: DMA a list of scatter gather buffers from/to a peripheral
+ - peripheral_dma_vec: DMA an array of scatter gather buffers from/to a
+ peripheral. Similar to slave_sg, but uses an array of dma_vec
+ structures instead of a scatterlist.
+
- dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the
operation is explicitly stopped.
- interleaved_dma: This is common to Slave as well as M2M clients. For slave
address of devices' fifo could be already known to the driver.
Various types of operations could be expressed by setting
- appropriate values to the 'dma_interleaved_template' members.
+ appropriate values to the 'dma_interleaved_template' members. Cyclic
+ interleaved DMA transfers are also possible if supported by the channel by
+ setting the DMA_PREP_REPEAT transfer flag.
A non-NULL return of this transfer API represents a "descriptor" for
the given transaction.
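
As a rough, hypothetical sketch (not part of the patch), a cyclic interleaved
transfer could be requested as follows, assuming the channel advertises the
DMA_REPEAT capability; ``chan``, ``buf_dma``, ``dev_fifo`` and ``xfer_len``
are placeholders supplied by the client driver:

.. code-block:: c

   struct dma_interleaved_template *xt;
   struct dma_async_tx_descriptor *desc;

   if (!dma_has_cap(DMA_REPEAT, chan->device->cap_mask))
           return -EOPNOTSUPP;     /* channel cannot repeat transfers */

   xt = kzalloc(struct_size(xt, sgl, 1), GFP_KERNEL);
   if (!xt)
           return -ENOMEM;

   /* one frame per repetition: memory side walks, device FIFO stays fixed */
   xt->dir = DMA_MEM_TO_DEV;
   xt->src_start = buf_dma;
   xt->dst_start = dev_fifo;
   xt->src_inc = true;
   xt->dst_inc = false;
   xt->numf = 1;
   xt->frame_size = 1;
   xt->sgl[0].size = xfer_len;
   xt->sgl[0].icg = 0;

   desc = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_REPEAT);
   kfree(xt);   /* assumption: the template is not referenced after prep */
   if (!desc)
           /* error: channel or flags not supported */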
@@ -100,6 +106,11 @@ The details of these operations are:
unsigned int sg_len, enum dma_data_direction direction,
unsigned long flags);
+ struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
+ struct dma_chan *chan, const struct dma_vec *vecs,
+ size_t nents, enum dma_data_direction direction,
+ unsigned long flags);
+
struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_data_direction direction);
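
As a hypothetical usage sketch (not part of the patch), two buffers that have
already been DMA-mapped could be handed to the channel through the new
vector-based call; ``chan``, ``buf0_dma``/``buf1_dma`` and their lengths are
placeholders:

.. code-block:: c

   struct dma_vec vecs[] = {
           { .addr = buf0_dma, .len = buf0_len },
           { .addr = buf1_dma, .len = buf1_len },
   };
   struct dma_async_tx_descriptor *desc;

   desc = dmaengine_prep_peripheral_dma_vec(chan, vecs, ARRAY_SIZE(vecs),
                                            DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
   if (!desc)
           /* error: the channel cannot prepare this transfer */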
@@ -118,7 +129,9 @@ The details of these operations are:
.. code-block:: c
- nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len);
+ struct device *dma_dev = dmaengine_get_dma_device(chan);
+
+ nr_sg = dma_map_sg(dma_dev, sgl, sg_len);
if (nr_sg == 0)
/* error */
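
For completeness, a hedged sketch of the surrounding sequence (not part of the
patch): the device returned by dmaengine_get_dma_device() must also be used
for the matching unmap once the transfer has completed. ``sgl``, ``sg_len``
and the DMA_TO_DEVICE direction are illustrative:

.. code-block:: c

   struct device *dma_dev = dmaengine_get_dma_device(chan);
   int nr_sg;

   nr_sg = dma_map_sg(dma_dev, sgl, sg_len, DMA_TO_DEVICE);
   if (nr_sg == 0)
           return -EIO;    /* error */

   /* ... prepare the descriptor, submit it and wait for completion ... */

   dma_unmap_sg(dma_dev, sgl, sg_len, DMA_TO_DEVICE);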
@@ -171,7 +184,7 @@ The details of these operations are:
driver can ask for the pointer, maximum size and the currently used size of
the metadata and can directly update or read it.
- Becasue the DMA driver manages the memory area containing the metadata,
+ Because the DMA driver manages the memory area containing the metadata,
clients must make sure that they do not try to access or get the pointer
after their transfer completion callback has run for the descriptor.
If no completion callback has been defined for the transfer, then the