Allwinner V3s has a DMA engine similar to the one in the A31, but with
fewer channels and DRQs.
Add support for it.
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Originally we enabled a special gate bit when the compatible string indicated
A23/A33.
But according to BSP sources and user manuals, more SoCs will need this
gate bit, so make it a common quirk configured in the config struct.
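A minimal sketch of the resulting shape; the flag name, register offset and
value below are invented for illustration, not the driver's real identifiers:

  struct sun6i_dma_config {
          u32  nr_max_channels;
          u32  nr_max_requests;
          bool has_clk_gate;              /* hypothetical name for the quirk flag */
  };

  #define GATE_REG        0x20            /* placeholder register offset */
  #define GATE_ENABLE     0x04            /* placeholder value */

  static void foo_apply_gate_quirk(const struct sun6i_dma_config *cfg,
                                   void __iomem *base)
  {
          /* driven by the per-compatible config, not by matching A23/A33 */
          if (cfg->has_clk_gate)
                  writel(GATE_ENABLE, base + GATE_REG);
  }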
Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Renesas R-Car V3M (R8A77970) SoC also has the R-Car Gen2/Gen3-compatible DMA
controllers, so document the SoC-specific binding.
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
'err' is a signed int and error codes are typically negative numbers, so
use '%d' instead of '%u' to format the error code in the error message.
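For illustration only (not necessarily the driver's exact message):

  /* err holds a negative errno, so print it signed */
  dev_err(dev, "failed to enable clock: %d\n", err);      /* was "%u" */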
Fixes: ba16db36b5dd ("dmaengine: vdma: Add clock support")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch moves from a struct declaration for the DMA controller
registers to macros with offsets from the base address. This is mainly
done to remove the sparse warnings, since the function parameter of
ioread32()/iowrite32() is "void __iomem *" instead of a pointer to struct
members. With this patch applied, no sparse warnings are seen anymore.
Please note that the struct for the descriptors is still kept in place,
as the code largely accesses the struct members as internal variables
before the complete struct is copied into the descriptor FIFO of the
DMA controller.
Additionally, this patch removes two "variable xxx set but not used"
warnings seen when compiling with "W=1". The registers need to be read
to flush the response FIFO, but nothing needs to be done with the values.
So the code is correct here and the warnings are false positives.
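A sketch of the two patterns described above; the register names and offsets
are placeholders, not the driver's real register map:

  /* offsets relative to the ioremapped base address */
  #define FOO_CSR_STATUS          0x00
  #define FOO_RESP_STATUS         0x04

  static u32 foo_read_csr_status(void __iomem *base)
  {
          /* base + offset stays a plain void __iomem *, so sparse is happy */
          return ioread32(base + FOO_CSR_STATUS);
  }

  static void foo_flush_response(void __iomem *base)
  {
          /* read and discard: the access itself pops the response FIFO entry */
          (void)ioread32(base + FOO_RESP_STATUS);
  }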
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The DMA crossbar uses the 'xbar->dma_inuse' variable to manage allocated routes.
Each bit represents the respective DMA channel: if the channel is free, the bit
is set to '0'; if the channel is allocated, the bit should be set to '1'.
In the reserve function, however, the bits for the requested DMA channels are
cleared, so they are not really reserved, but freed and become ready for allocation.
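In other words (sketch only, assuming dma_inuse is a DECLARE_BITMAP()):

  /* reserving a route must mark the channel as busy ... */
  set_bit(dma_ch, xbar->dma_inuse);

  /* ... and only releasing the route should clear the bit again */
  clear_bit(dma_ch, xbar->dma_inuse);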
Signed-off-by: Alexander Smirnov <asmirnov@ilbers.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
amba_id tables are not supposed to change at runtime. All functions
working with amba_id take them as const. So mark the non-const structs as const.
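The resulting pattern, shown with a made-up table for illustration:

  static const struct amba_id foo_dma_ids[] = {
          { .id = 0x00041330, .mask = 0x000fffff },       /* placeholder ID/mask */
          { 0, 0 },
  };
  MODULE_DEVICE_TABLE(amba, foo_dma_ids);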
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
amba_id tables are not supposed to change at runtime. All functions
working with amba_id take them as const. So mark the non-const structs as const.
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The SBA_REQUEST_STATE_COMPLETED state was added to keep track
of sba_request which got completed but cannot be freed because
underlying Async Tx descriptor was not ACKed by DMA client.
Instead of the above, we can free an sba_request with a non-ACKed
Async Tx descriptor, and sba_alloc_request() will ensure that it always
allocates an sba_request with an ACKed Async Tx descriptor.
This alternate approach makes the SBA_REQUEST_STATE_COMPLETED state
redundant, hence this patch removes it.
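A hedged sketch of the allocation-side check; the list and field names are
assumptions, and locking is omitted for brevity:

  static struct sba_request *sba_alloc_request(struct sba_device *sba)
  {
          struct sba_request *req;

          list_for_each_entry(req, &sba->reqs_free_list, node) {
                  /* skip requests whose previous descriptor was not ACKed yet */
                  if (async_tx_test_ack(&req->tx)) {
                          list_move_tail(&req->node, &sba->reqs_alloc_list);
                          return req;
                  }
          }
          return NULL;
  }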
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
We should explicitly ACK the mailbox message because, after sending a
message, we can learn the send status via the error attribute of
brcm_message.
This will also help SBA-RAID use the "txdone_ack" method whenever the
mailbox controller supports it.
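A sketch of the explicit acknowledgement; it assumes the mailbox client was
registered with knows_txdone set and that sba->mchan holds the channel:

  static int sba_send_mbox_request(struct sba_device *sba,
                                   struct brcm_message *msg)
  {
          int rc;

          rc = mbox_send_message(sba->mchan, msg);
          if (rc < 0)
                  return rc;

          /* the send status is now in msg->error; ACK so "txdone_ack"
           * capable controllers can retire the message */
          mbox_client_txdone(sba->mchan, msg->error);

          return msg->error;
  }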
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch adds debugfs support to report driver stats, which
in turn will help debug hang or error situations.
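An illustrative sketch of such a stats file; the counter names and the way
the device pointer reaches the callback are assumptions:

  static int sba_debugfs_stats_show(struct seq_file *file, void *offset)
  {
          struct sba_device *sba = dev_get_drvdata(file->private);

          seq_printf(file, "pending requests   : %u\n", sba->stats_pending);
          seq_printf(file, "completed requests : %u\n", sba->stats_completed);
          return 0;
  }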
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The SBA_REQUEST_STATE_RECEIVED state is now redundant because
received sba_request are immediately freed or moved to the completed
list in sba_process_received_request().
This patch removes the redundant SBA_REQUEST_STATE_RECEIVED state.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Currently, sba_process_deferred_requests() handles both pending
and completed sba_request, which is unnecessary overhead for
sba_issue_pending() because completed sba_request handling is
not required there.
This patch breaks sba_process_deferred_requests() into two parts:
sba_process_received_request() and _sba_process_pending_requests().
sba_issue_pending() will now only process pending sba_request
by calling _sba_process_pending_requests(), which improves
sba_issue_pending().
sba_receive_message() will only process received sba_request
by calling sba_process_received_request() for each received
sba_request. sba_process_received_request() will also call
_sba_process_pending_requests() after handling a received sba_request
because we might have pending sba_request not submitted by the previous
call to sba_issue_pending().
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
We should pre-ack the async tx descriptor at the time of
allocating an sba_request (just like other RAID drivers).
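Sketch of the pre-ACK at allocation time (the channel field name is assumed):

  dma_async_tx_descriptor_init(&req->tx, &sba->dma_chan);
  async_tx_ack(&req->tx);         /* set DMA_CTRL_ACK up front, like other RAID drivers */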
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
When setting up a RAID array on several NVMe disks, we observed that
sba_alloc_request() starts failing (due to no free requests left)
and RAID array setup becomes very slow.
To improve performance, we now peek the mbox channel when we have
no free requests. This improves performance of RAID array setup
because mbox requests that were completed but not yet processed by the
mbox completion worker will be processed immediately by the mbox
channel peek.
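A sketch of the peek fallback; the channel field name is an assumption:

  if (list_empty(&sba->reqs_free_list))
          /* drain completions the mailbox worker has not processed yet */
          mbox_client_peek_data(sba->mchan);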
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
We should allocate DMA channel resources before registering the
DMA device in sba_probe() because we can get a DMA request soon
after registering the DMA device. If DMA channel resources are
not allocated before the first DMA request, the SBA-RAID driver will
crash.
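The resulting probe ordering, sketched with the helper names used elsewhere
in this series (the error label and field names are illustrative):

  /* allocate per-channel resources first ... */
  ret = sba_prealloc_channel_resources(sba);
  if (ret)
          return ret;

  /* ... and only then expose the DMA device to clients */
  ret = dma_async_device_register(&sba->dma_dev);
  if (ret)
          goto fail_free_resources;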
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The pending sba_request list can become very long in real-life usage
(e.g. setting up a RAID array), which can cause sba_issue_pending() to
run for a long duration.
This patch adds a common sba_process_deferred_requests() to process a
few completed and pending requests at a time so that it finishes in a
short duration. We use this common sba_process_deferred_requests() in
both sba_issue_pending() and sba_receive_message().
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Currently, we have only 1024 free sba_request created
by sba_prealloc_channel_resources(). This is too low,
and the prep_xxx() callbacks start failing more often
at the time of RAID array setup over NVMe disks.
This patch sets the number of free sba_request created by
sba_prealloc_channel_resources() to:
<number_of_mailbox_channels> x 8192
With the above, we will have a sufficient number of free
sba_request, and prep_xxx() callback failures become very
unlikely.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Currently, we cannot have an arbitrary number of free sba_request
because sba_prealloc_channel_resources() allocates an array of
sba_request using devm_kcalloc(), and kcalloc() cannot provide
memory beyond a certain size.
This patch removes "reqs" (the sba_request array) from sba_device
and makes "cmds" a variable-length array (instead of a pointer) in
sba_request. This lets sba_prealloc_channel_resources()
allocate an sba_request and its associated SBA commands in one
allocation, which in turn allows an arbitrary number of free sba_request.
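The shape of the change, sketched with a C99 flexible array member (the
member and field names are illustrative):

  struct sba_request {
          /* ... other members ... */
          struct brcm_sba_command cmds[];  /* flexible array, was a separate pointer */
  };

  /* one allocation per request covers the request and its SBA commands */
  req = devm_kzalloc(sba->dev,
                     sizeof(*req) + sba->max_cmds_per_req * sizeof(req->cmds[0]),
                     GFP_KERNEL);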
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The reqs_free_count member of sba_device is not used anywhere,
hence there is no point in tracking the number of free sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Both resp and resp_dma are redundant in sba_request because
resp is unused and resp_dma carries the same information as
tx.phys of sba_request. This patch removes both resp and
resp_dma from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
The next_count in sba_request is redundant because the same information
is captured by next_pending_count. This patch removes next_count
from sba_request.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch merges the sba_request state and fence into common
sba_request flags. The sba_request flags not only save
memory but can also be extended in the future without adding
new members.
We also make each sba_request state a separate bit in the
sba_request flags to help debug situations where an
sba_request is accidentally in two states.
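A sketch of the flag layout; the exact bit names and positions are
illustrative:

  #define SBA_REQUEST_STATE_FREE          BIT(0)
  #define SBA_REQUEST_STATE_ALLOCED       BIT(1)
  #define SBA_REQUEST_STATE_PENDING       BIT(2)
  #define SBA_REQUEST_STATE_ACTIVE        BIT(3)
  #define SBA_REQUEST_STATE_ABORTED       BIT(4)
  #define SBA_REQUEST_STATE_MASK          GENMASK(4, 0)
  #define SBA_REQUEST_FENCE               BIT(5)

  /* one bit per state makes "request in two states at once" easy to spot */
  WARN_ON(hweight32(req->flags & SBA_REQUEST_STATE_MASK) > 1);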
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
We don't need to hold "sba->reqs_lock" for a long time
in sba_alloc_request() because lock protection is not
required when initializing members of "struct sba_request".
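Sketch of the narrowed lock scope (list and flag names assumed):

  struct sba_request *req;
  unsigned long flags;

  spin_lock_irqsave(&sba->reqs_lock, flags);
  req = list_first_entry_or_null(&sba->reqs_free_list,
                                 struct sba_request, node);
  if (req)
          list_move_tail(&req->node, &sba->reqs_alloc_list);
  spin_unlock_irqrestore(&sba->reqs_lock, flags);

  if (!req)
          return NULL;

  /* plain member initialization needs no lock: the request is now ours */
  req->flags = SBA_REQUEST_STATE_ALLOCED;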
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch makes the following improvements to comments:
1. Make section comments consistent across the driver by
avoiding " SBA " in some of the comments
2. Add/update a few more section comments
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Anton Volkov noticed that engine->dev is NULL before
of_dma_controller_register() in probe.
Thus there might be a NULL pointer dereference in
rcar_dmac_chan_start_xfer() while accessing chan->chan.device->dev, which
is equal to (&dmac->engine)->dev.
For the same reason, similar problems can happen if we don't
initialize all necessary data before registering the IRQ handlers.
To be on the safe side, this patch initializes all necessary data
before the IRQ handlers are registered.
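The general pattern, sketched with placeholder names:

  /* everything the handler touches must be valid before the IRQ can fire */
  dmac->dev = &pdev->dev;
  platform_set_drvdata(pdev, dmac);

  ret = rcar_dmac_init_engine(dmac);      /* hypothetical init helper */
  if (ret)
          return ret;

  ret = devm_request_irq(&pdev->dev, irq, rcar_dmac_isr, 0,
                         dev_name(&pdev->dev), dmac);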
Reported-by: Anton Volkov <avolkov@ispras.ru>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Commit 36387a2b1f62b5c087c5fe6f0f7b23b94f722ad7 ("k3dma: Fix
memory handling in preparation for cyclic mode") added a few
calls to WARN_ON_ONCE() to track the use of ds_run/ds_done.
After the two fixes:
- dmaengine: k3dma: fix non-cyclic mode
- dmaengine: k3dma: fix re-free of the same descriptor
the behaviour of ds_run/ds_done is properly fixed.
The remaining WARN_ON_ONCE() calls are never triggered and can be
removed.
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Commit 36387a2b1f62b5c087c5fe6f0f7b23b94f722ad7 ("k3dma: Fix
memory handling in preparation for cyclic mode") added code
to free the descriptor in ds_done.
In cyclic mode, ds_done is never used and is always NULL,
so the added code is not executed.
In non-cyclic mode, ds_done is used as a flag: when not NULL,
it signals that the descriptor has been consumed. There is no need to
free it because it will be freed by vchan_complete().
The fix reverts the code changed by the commit above:
- remove the free of the descriptor;
- initialize ds_done to NULL for the next run.
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Commit 36387a2b1f62b5c087c5fe6f0f7b23b94f722ad7 ("k3dma: Fix
memory handling in preparation for cyclic mode") broke the
logic around ds_run/ds_done in the case of non-cyclic DMA.
This went unnoticed as the only user of k3dma was the i2s
audio driver but, with a patch set to enable DMA on SPI, the
issue surfaced.
The fix re-applies the initialization of ds_run/ds_done in
k3_dma_start_txd() that was removed by the commit above.
Also, one of the calls to k3_dma_start_txd() is triggered by
(ds_done != NULL), so remove the noisy and useless call to
WARN_ON_ONCE().
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
We observed a performance increase with DMA copy from memory
to MMIO by changing the interrupt coalescing value to 0.
The previously set value was projected on the C5xxx Xeon
platform and no longer holds true. Remove the hard-coded
value and provide a tunable in sysfs to allow the
user to tune this on a per-channel basis. By default the
value will be set to 0.
Example of setting the interrupt coalescing value from the
command line via the sysfs variable:
echo 5 > /sys/devices/pci0000:00/0000:00:04.0/dma/dma0chan0/quickdata/intr_coalesce
Reported-by: Nithin Sujir <nsujir@tintri.com>
Signed-off-by: Ujjal Singh <ujjal.singh@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Commit c678fa66341c ("dmaengine: remove DMA_SG as it is dead code in
kernel") removed DMA_SG from the dmaengine subsystem but missed the newly
added driver, so remove it from here as well.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
clk_prepare_enable() can fail here and we must check its return value.
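For illustration, the kind of check being added:

  ret = clk_prepare_enable(atchan->clk);  /* field name illustrative */
  if (ret)
          return ret;                     /* no longer ignore the failure */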
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Replace '%d' by '%zu' to fix the compilation warning:
"format ‘%d’ expects argument of type ‘int’, but argument has type ‘size_t’ [-Wformat=]"
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Make these const as they are only used during a copy operation.
Done using Coccinelle.
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This patch adds R-Car M3-W device tree bindings for usb-dmac driver.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Add an ABI document to describe all the sysfs variables
for dma.
Signed-off-by: Ujjal Singh <ujjal.singh@intel.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This driver builds with warnings, which can be fixed by making these
functions static.
CC [M] drivers/dma/bcm-sba-raid.o
drivers/dma/bcm-sba-raid.c:786:1: warning: no previous prototype for ‘sba_prep_dma_xor_req’ [-Wmissing-prototypes]
sba_prep_dma_xor_req(struct sba_device *sba,
^
drivers/dma/bcm-sba-raid.c:995:1: warning: no previous prototype for ‘sba_prep_dma_pq_req’ [-Wmissing-prototypes]
sba_prep_dma_pq_req(struct sba_device *sba, dma_addr_t off,
^
drivers/dma/bcm-sba-raid.c:1247:1: warning: no previous prototype for ‘sba_prep_dma_pq_single_req’ [-Wmissing-prototypes]
sba_prep_dma_pq_single_req(struct sba_device *sba, dma_addr_t off,
^
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Now that we have a custom printf format specifier, convert users of
full_name to use %pOF instead. This is preparation to remove storing
of the full path string for each node.
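For example (illustrative message):

  /* before: dev_dbg(dev, "chan %s\n", np->full_name); */
  dev_dbg(dev, "chan %pOF\n", np);        /* np is a struct device_node * */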
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
This driver adds support for the Altera / Intel modular Scatter-Gather
Direct Memory Access (mSGDMA) intellectual property (IP) to the Linux
DMAengine subsystem. Currently it supports the following op modes:
- DMA_MEMCPY
- DMA_SG
- DMA_SLAVE
This implementation has been tested on an Altera Cyclone FPGA connected
via PCIe, both on an ARM and an x86 platform.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
pci_device_id tables are not supposed to change at runtime. All functions
working with pci_device_id provided by <linux/pci.h> take them as
const. So mark the non-const structs as const.
File size before:
text data bss dec hex filename
12582 3056 16 15654 3d26 drivers/dma/ioat/init.o
File size After adding 'const':
text data bss dec hex filename
14773 865 16 15654 3d26 drivers/dma/ioat/init.o
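The resulting pattern, with a placeholder table for illustration:

  static const struct pci_device_id foo_pci_tbl[] = {
          { PCI_VDEVICE(INTEL, 0x0e20) },         /* placeholder device ID */
          { 0, }
  };
  MODULE_DEVICE_TABLE(pci, foo_pci_tbl);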
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
|
Merge tag 'standardize-docs' of git://git.lwn.net/linux
Pull documentation format standardization from Jonathan Corbet:
"This series converts a number of top-level documents to the RST format
without incorporating them into the Sphinx tree. The hope is to bring
some uniformity to kernel documentation and, perhaps more importantly,
have our existing docs serve as an example of the desired formatting
for those that will be added later.
Mauro has gone through and fixed up a lot of top-level documentation
files to make them conform to the RST format, but without moving or
renaming them in any way. This will help when we incorporate the ones
we want to keep into the Sphinx doctree, but the real purpose is to
bring a bit of uniformity to our documentation and let the top-level
docs serve as examples for those writing new ones"
* tag 'standardize-docs' of git://git.lwn.net/linux: (84 commits)
docs: kprobes.txt: Fix whitespacing
tee.txt: standardize document format
cgroup-v2.txt: standardize document format
dell_rbu.txt: standardize document format
zorro.txt: standardize document format
xz.txt: standardize document format
xillybus.txt: standardize document format
vfio.txt: standardize document format
vfio-mediated-device.txt: standardize document format
unaligned-memory-access.txt: standardize document format
this_cpu_ops.txt: standardize document format
svga.txt: standardize document format
static-keys.txt: standardize document format
smsc_ece1099.txt: standardize document format
SM501.txt: standardize document format
siphash.txt: standardize document format
sgi-ioc4.txt: standardize document format
SAK.txt: standardize document format
rpmsg.txt: standardize document format
robust-futexes.txt: standardize document format
...
|
|
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Pull random updates from Ted Ts'o:
"Add wait_for_random_bytes() and get_random_*_wait() functions so that
callers can more safely get random bytes if they can block until the
CRNG is initialized.
Also print a warning if get_random_*() is called before the CRNG is
initialized. By default, only one single-line warning will be printed
per boot. If CONFIG_WARN_ALL_UNSEEDED_RANDOM is defined, then a
warning will be printed for each function which tries to get random
bytes before the CRNG is initialized. This can get spammy for certain
architecture types, so it is not enabled by default"
* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
random: reorder READ_ONCE() in get_random_uXX
random: suppress spammy warnings about unseeded randomness
random: warn when kernel uses unseeded randomness
net/route: use get_random_int for random counter
net/neighbor: use get_random_u32 for 32-bit hash random
rhashtable: use get_random_u32 for hash_rnd
ceph: ensure RNG is seeded before using
iscsi: ensure RNG is seeded before use
cifs: use get_random_u32 for 32-bit lock random
random: add get_random_{bytes,u32,u64,int,long,once}_wait family
random: add wait_for_random_bytes() API
|
|
Merge branch 'work.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull ->s_options removal from Al Viro:
"Preparations for fsmount/fsopen stuff (coming next cycle). Everything
gets moved to explicit ->show_options(), killing ->s_options off +
some cosmetic bits around fs/namespace.c and friends. Basically, the
stuff needed to work with fsmount series with minimum of conflicts
with other work.
It's not strictly required for this merge window, but it would reduce
the PITA during the coming cycle, so it would be nice to have those
bits and pieces out of the way"
* 'work.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
isofs: Fix isofs_show_options()
VFS: Kill off s_options and helpers
orangefs: Implement show_options
9p: Implement show_options
isofs: Implement show_options
afs: Implement show_options
affs: Implement show_options
befs: Implement show_options
spufs: Implement show_options
bpf: Implement show_options
ramfs: Implement show_options
pstore: Implement show_options
omfs: Implement show_options
hugetlbfs: Implement show_options
VFS: Don't use save/replace_mount_options if not using generic_show_options
VFS: Provide empty name qstr
VFS: Make get_filesystem() return the affected filesystem
VFS: Clean up whitespace in fs/namespace.c and fs/super.c
Provide a function to create a NUL-terminated string from unterminated data
|
|
Merge branch 'work.__copy_to_user' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull more __copy_.._user elimination from Al Viro.
* 'work.__copy_to_user' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
drm_dp_aux_dev: switch to read_iter/write_iter
|