|
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Plain code move with no changes. Needed for refactoring. Also, looks
nicer if request and finish_request are next to each other.
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The obviously wrong comment was added in 2011 with commit df3ef2d3c92c0a
("mmc: protect the tmio_mmc driver against a theoretical race") but was
already obsoleted half a year later by commit b9269fdd4f61aa ("mmc:
tmio: fix recursive spinlock, don't schedule with interrupts disabled").
Fixes: b9269fdd4f61aa ("mmc: tmio: fix recursive spinlock, don't schedule with interrupts disabled")
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Split the handling of mrq into a separate function. We will need to call it
from another place soon.
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
This part confused me and I had to read it twice until I got it. Let's
follow the standard pattern of bailing out if something is wrong and keep
the body of the function for the case when everything is as expected.
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Omit an extra message for memory allocation failures.
This issue was detected by using the Coccinelle software.
Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Omit an extra message for a memory allocation failure.
This issue was detected by using the Coccinelle software.
Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Commit cdf8a6fb48882651049e468e6b16956fb83db86c ("mmc: block: Introduce
queue semantics") deleted the last user of mmc_req_is_special(). It was
a horrible hack to classify requests as "special" or "not special" to
begin with, so delete the helper.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
This also switches the multiple-command ioctl() call to issue
all ioctl()s through the block layer instead of going directly
to the device.
We extend the passed argument with an argument count and loop
over all passed commands in the ioctl() issue function called
from the block layer.
By doing this we are again loosening the grip on the big host
lock, since two calls to mmc_get_card()/mmc_put_card() are
removed.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Avri Altman <Avri.Altman@sandisk.com>
|
|
This wraps single ioctl() commands into block requests using
the custom block layer request types REQ_OP_DRV_IN and
REQ_OP_DRV_OUT.
By doing this we are loosening the grip on the big host lock,
since two calls to mmc_get_card()/mmc_put_card() are removed.
We are storing the ioctl() in/out argument as a pointer in
the per-request struct mmc_blk_request container. Since we
now let the block layer allocate this data, blk_get_request()
will allocate it for us and we can immediately dereference
it and use it to pass the argument into the block layer.
We refactor the if/else/if/else ladder in mmc_blk_issue_rq() as part of
the job, paying some extra attention to the case when a NULL req is
passed into this function and making that pipeline flush more explicit.
Tested on the ux500 with the userspace:
mmc extcsd read /dev/mmcblk3
resulting in a successful EXTCSD info dump back to the
console.
This commit fixes a starvation issue in the MMC/SD stack that can easily
be provoked by issuing the following commands in sequence:
> dd if=/dev/mmcblk3 of=/dev/null bs=1M &
> mmc extcsd read /dev/mmcblk3
Before this patch, the extcsd read command would hang
(starve) while waiting for the dd command to finish since
the block layer was holding the card/host lock.
After this patch, the extcsd ioctl() command is nicely interspersed with
the rest of the block commands, and we can issue a bunch of ioctl()s from
userspace while there is busy block IO going on, without any problems.
Conversely, userspace ioctl()s can no longer starve the block layer by
holding the card/host lock.
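For illustration, a minimal sketch of the resulting call pattern (the
helper req_to_mq_rq() follows the naming used above; the function name and
the ->drv_op_data/->drv_op_result fields are assumptions, not the exact
driver code):

static int mmc_blk_ioctl_via_blk_layer(struct request_queue *q, void *idata)
{
	struct request *req;
	struct mmc_queue_req *mq_rq;
	int err;

	/* Allocate a driver-private request; the block layer allocates the
	 * per-request state container along with it. */
	req = blk_get_request(q, REQ_OP_DRV_OUT, __GFP_RECLAIM);
	if (IS_ERR(req))
		return PTR_ERR(req);

	mq_rq = req_to_mq_rq(req);
	mq_rq->drv_op_data = idata;	/* hand the ioctl argument over */

	/* Dispatch through the block layer and wait, so the ioctl gets
	 * serialized with ordinary block I/O instead of holding the big
	 * host lock. */
	blk_execute_rq(q, NULL, req, 0);
	err = mq_rq->drv_op_result;

	blk_put_request(req);
	return err;
}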
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Avri Altman <Avri.Altman@sandisk.com>
|
|
The variable is_rpmb is clearly a bool and even assigned true
and false, yet declared as an int.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The mmc_queue_req is a per-request state container the MMC core uses
to carry bounce buffers, pointers to asynchronous requests and so on.
It is currently allocated as a static array of objects; as a request
comes in, an mmc_queue_req is assigned to it and used during the lifetime
of the request.
This is backwards compared to how other block layer drivers work: they
usually let the block core provide a per-request struct that gets
allocated right behind the struct request, and which can be obtained
using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function
name is misleading: it is used by both the old and the MQ block
layer.)
The per-request struct gets allocated to the size stored in the queue
variable .cmd_size, initialized using .init_rq_fn() and cleaned up using
.exit_rq_fn().
This change makes the MMC core rely on that mechanism to allocate the
per-request mmc_queue_req state container.
Doing this makes a lot of complicated queue handling go away. We only
need to keep the .qcnt counter that keeps track of how many requests are
currently being processed by the MMC layer. The MQ block layer will
replace this as well once we transition to it.
Doing this refactoring is necessary to move the ioctl() operations
into custom block layer requests tagged with REQ_OP_DRV_[IN|OUT]
instead of the custom code using the BigMMCHostLock that we have
today: those require that per-request data be obtainable easily from
a request after creating a custom request with e.g.:
struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
struct mmc_queue_req *mq_rq = req_to_mq_rq(rq);
And this is not possible with the current construction, as the request
is not immediately assigned the per-request state container, but
instead it gets assigned when the request finally enters the MMC
queue, which is way too late for custom requests.
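A minimal sketch of the allocation scheme described above, assuming the
legacy (non-MQ) request path; the mmc_init_request()/mmc_exit_request()
names and the bounce_buf handling are illustrative, not the exact upstream
code:

static int mmc_init_request(struct request_queue *q, struct request *req,
			    gfp_t gfp)
{
	struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(req);

	/* Per-request resources (e.g. a bounce buffer) get set up here. */
	mq_rq->bounce_buf = NULL;
	return 0;
}

static void mmc_exit_request(struct request_queue *q, struct request *req)
{
	struct mmc_queue_req *mq_rq = blk_mq_rq_to_pdu(req);

	/* ...and torn down here. */
	kfree(mq_rq->bounce_buf);
}

/* During queue setup, before blk_init_allocated_queue(): let the block
 * core allocate struct mmc_queue_req right behind each struct request. */
q->cmd_size = sizeof(struct mmc_queue_req);
q->init_rq_fn = mmc_init_request;
q->exit_rq_fn = mmc_exit_request;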
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
[Ulf: Folded in the fix to drop a call to blk_cleanup_queue()]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Heiner Kallweit <hkallweit1@gmail.com>
|
|
This option is activated by all multiplatform configs and whatnot, so we
almost always have it turned on, and the memory it saves is negligible,
even more so moving forward. The actual bounce buffer only gets allocated
when used; the only thing the ifdefs save is a little bit of code.
It is highly improper to have this as a compile-time Kconfig option; make
this a pure runtime thing and let the host decide whether we use bounce
buffers. We add a new property "disable_bounce" to the host struct.
Notice that mmc_queue_calc_bouncesz() already disables the
bounce buffers if host->max_segs != 1, so any arch that has a
maximum number of segments higher than 1 will have bounce
buffers disabled.
The option CONFIG_MMC_BLOCK_BOUNCE is default y so the
majority of platforms in the kernel already have it on, and
it then gets turned off at runtime since most of these have
a host->max_segs > 1. The few exceptions that have
host->max_segs == 1 and still turn off the bounce buffering
are those that disable it in their defconfig.
Those are the following:
arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated so we set
host->disable_bounce to true in the host driver
arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver already disables bounce buffers
- No special action needed for this platform
arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG,
which is 128, unless a DMA engine is used, and in that case
the number of segments is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms
arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS which is 128 and
thus disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already
disabled at runtime
- No special action needed for this platform
arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE which is 128
- That means this platform already disables bounce buffers at
runtime
- No special action needed for these platforms
arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- We set host->disable_bounce to be sure for the max_segs == 1 case
arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128 so bounce buffers are
already runtime disabled
- No action needed for this platform
It would be interesting to come up with a list of the platforms
that actually end up using bounce buffers. I have not been
able to infer such a list, but it occurs when
host->max_segs == 1 and the bounce buffering is not explicitly
disabled.
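A rough sketch of the runtime decision that replaces the Kconfig option;
the exact placement inside mmc_queue_calc_bouncesz() may differ:

static unsigned int mmc_queue_calc_bouncesz(struct mmc_host *host)
{
	unsigned int bouncesz = MMC_QUEUE_BOUNCESZ;

	/* Bounce buffers only make sense for single-segment hosts that
	 * have not opted out via the new flag. */
	if (host->max_segs != 1 || host->disable_bounce)
		return 0;

	if (bouncesz > host->max_req_size)
		bouncesz = host->max_req_size;
	if (bouncesz > host->max_seg_size)
		bouncesz = host->max_seg_size;

	return bouncesz;
}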
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Make renesas_sdhi_sys_dmac.c a top-level module file that makes use of
library code supplied by renesas_sdhi_core.c.
This is in order to facilitate adding other variants of SDHI;
in particular SDHI using different DMA controllers.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[Arnd: Fixed module build error]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Rename the source file for SDHI. A follow-up patch will make it a library
file used by a different top-level module file.
The name "renesas" is chosen as the SDHI driver is applicable to a wider
range of SoCs than SH-Mobile it seems to be a more appropriate name.
However, the SDHI driver source itself, is left as sh_mobile_sdhi to
avoid unnecessary churn.
the name "core" was chosen to reflect the desired role of this file,
to provide core functionality to the sdhi driver. A follow-up patch will
move the file into that role.
Internal symbols have also been renamed to reflect the filename change.
The .name member of struct platform_driver and parameter to
MODULE_ALIAS() have not been changed in order to avoid the complication
of potentially breaking SH SoCs which still use platform drivers.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Rename the source file for DMA for SDHI as a follow-up to attaching
DMA code to the SDHI driver rather than the tmio_core driver.
The name "renesas" is chosen as the SDHI driver is applicable to a wider
range of SoCs than SH-Mobile it seems to be a more appropriate name.
However, the SDHI driver source itself, is left as sh_mobile_sdhi to
avoid unnecessary churn.
The name sys_dmac was chosen to reflect the type of DMA used.
Internal symbols have also been renamed to reflect the filename change.
A follow-up patch will re-organise the SDHI driver removing
the need for renesas_sdhi_get_dma_ops().
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Rename tmio_mmc_pio.c to tmio_mmc_core.c to more accurately reflect its
function: to provide core code for the tmio-mmc and sh-mobile-sdhi drivers.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Refactor DMA support to allow it to be provided by a set of call-backs
that are provided by a host driver. The motivation is to allow multiple
DMA implementations to be provided and instantiated at run-time.
Instantiate the existing DMA implementation from the sh_mobile_sdhi driver
which appears to match the current use-case. This has the side effect
of moving the DMA code from the tmio_core to the sh_mobile_sdhi driver.
A follow-up patch will change the source file for the SDHI DMA
implementation accordingly. Another follow-up patch will re-organise the
SDHI driver removing the need for tmio_mmc_get_dma_ops().
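As an illustration, the call-back based DMA glue could take roughly the
following shape; the struct and member names follow the tmio_mmc
convention but are a sketch, not necessarily the exact upstream definition:

struct tmio_mmc_dma_ops {
	void (*start)(struct tmio_mmc_host *host, struct mmc_data *data);
	void (*enable)(struct tmio_mmc_host *host, bool enable);
	void (*request)(struct tmio_mmc_host *host,
			struct tmio_mmc_data *pdata);
	void (*release)(struct tmio_mmc_host *host);
	void (*abort)(struct tmio_mmc_host *host);
};

The host driver (here sh_mobile_sdhi) would then pass its implementation
to the core at probe time instead of the core hard-coding one.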
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Reshuffle the comment at the top of the source, dropping filenames and
moving up human-readable strings. This seems to be somewhat more useful
information to start the source file with. It is also less fragile, e.g.
with respect to file renames.
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
This reverts commit a6db2c86033b ("mmc: dw_mmc: Don't allow Runtime PM for
SDIO cards")'
As dw_mmc now is capable of preventing runtime PM suspend while SDIO IRQs
are enabled, let's drop the less fine-grained method, which is preventing
runtime PM suspend for all SDIO cards - no matter whether SDIO IRQs are
being enabled or not.
In this way we don't keep the host runtime PM resumed unless it's really
needed, thus avoiding wasting power.
Especially when SDIO IRQs are supported via a separate out-of-band IRQ line,
which isn't defined by the SDIO standard, the SDIO func driver typically
doesn't enable SDIO IRQs via sdio_claim_irq(). So, for these cases we can
now allow the dw_mmc device to be runtime PM suspended in-between requests.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
To be able to handle SDIO IRQs the dw_mmc device needs to be powered and
providing clock to the SDIO card. Therefore, we must not allow the device
to be runtime PM suspended while SDIO IRQs are enabled.
To fix this, let's increase the runtime PM usage count while the mmc core
enables SDIO IRQs. Later when the mmc core tells dw_mmc to disable SDIO
IRQs, we drop the usage count to again allow runtime PM suspend.
This now becomes the default behaviour for dw_mmc. In cases where SDIO IRQs
can be re-routed as GPIO wake-ups during runtime PM suspend, one could
potentially allow runtime PM suspend. However, that will have to be
addressed as a separate change on top of this one.
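A sketch of the idea (not the exact dw_mmc code): bump the runtime PM
usage count while SDIO IRQs are enabled and drop it again when they are
disabled:

static void dw_mci_enable_sdio_irq(struct mmc_host *mmc, int enb)
{
	struct dw_mci_slot *slot = mmc_priv(mmc);
	struct dw_mci *host = slot->host;

	if (enb)
		/* Keep the device powered while the card may raise IRQs. */
		pm_runtime_get_noresume(host->dev);
	else
		pm_runtime_put_noidle(host->dev);

	/* ...followed by the existing mask/unmask of the SDIO interrupt. */
}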
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
Convert to use the more lightweight method for processing SDIO IRQs, which
involves the following changes:
- Enable MMC_CAP2_SDIO_IRQ_NOTHREAD when SDIO IRQ is supported and use
sdio_signal_irq() instead of mmc_signal_sdio_irq().
- Mask the SDIO IRQ before signaling a new one to be processed.
- Implement the ->ack_sdio_irq() callback to unmask the SDIO IRQ.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
For hosts not supporting MMC_CAP2_SDIO_IRQ_NOTHREAD but MMC_CAP_SDIO_IRQ,
the SDIO IRQs are processed from a dedicated kernel thread. For these
cases, the host calls mmc_signal_sdio_irq() from its ISR to signal a new
SDIO IRQ.
Signaling an SDIO IRQ causes the host's ->enable_sdio_irq() callback to be
invoked to temporarily disable the IRQs, before the kernel thread is woken
up to process them. When processing of the IRQs is completed, they are
re-enabled by the kernel thread, again by invoking the host's
->enable_sdio_irq().
The observation from this is that the execution path is unnecessarily
complex, as the host driver already knows that it needs to temporarily
disable the IRQs before signaling a new one. Moreover, replacing the kernel
thread with a work/workqueue would not only greatly simplify the code, but
also make it more robust.
To address the above problems, let's continue to build upon the support for
MMC_CAP2_SDIO_IRQ_NOTHREAD, as it already allows SDIO IRQs to be processed
without using the clumsy kernel thread and without the ping-pong calls of
the host's ->enable_sdio_irq() callback for each processed IRQ.
Therefore, let's add a new API, sdio_signal_irq(), which enables hosts to
signal/process SDIO IRQs by using a work/workqueue, rather than using the
kernel thread.
Also add a new host callback, ->ack_sdio_irq(), which the work invokes when
the SDIO IRQs have been processed. This informs the host about when it
shall re-enable the SDIO IRQs. Potentially, we could re-use the existing
->enable_sdio_irq() callback instead of adding a new one; however, it has
turned out that it's more convenient for hosts to get this information via
a separate callback.
Hosts that want to use this new method to signal/process SDIO IRQs must
enable MMC_CAP2_SDIO_IRQ_NOTHREAD and implement the ->ack_sdio_irq()
callback.
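A minimal sketch of how a host driver is expected to use the new API,
assuming MMC_CAP2_SDIO_IRQ_NOTHREAD is set; the mask/unmask helpers and
the pending check are hypothetical placeholders for host-specific register
accesses:

static irqreturn_t my_host_irq(int irq, void *dev_id)
{
	struct mmc_host *mmc = dev_id;

	if (my_host_sdio_irq_pending(mmc)) {
		my_host_mask_sdio_irq(mmc);	/* mask before signaling */
		sdio_signal_irq(mmc);		/* schedule the SDIO work */
	}
	return IRQ_HANDLED;
}

/* Invoked from the SDIO work once the card IRQs have been processed. */
static void my_host_ack_sdio_irq(struct mmc_host *mmc)
{
	my_host_unmask_sdio_irq(mmc);		/* re-enable the SDIO IRQ */
}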
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
In cases when MMC_CAP2_SDIO_IRQ_NOTHREAD is set, there is a minor window
in which the mmc host could call sdio_run_irqs() while an SDIO func driver
has in fact decided to release the SDIO IRQ via a call to
sdio_release_irq(). In this scenario, processing of the SDIO IRQs is done
even though no IRQ is claimed, which is not what we want.
To prevent this from happening, close the window by validating that at
least one SDIO IRQ is claimed before deciding to process them.
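A sketch of the guard described above; host->sdio_irqs is the core's
counter of claimed SDIO IRQs and the body shown here is simplified:

static void sdio_run_irqs(struct mmc_host *host)
{
	mmc_claim_host(host);
	if (host->sdio_irqs) {
		/* Only process card interrupts when at least one is claimed. */
		process_sdio_pending_irqs(host);
	}
	mmc_release_host(host);
}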
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
|
|
In case a pwrseq-emmc has been bound to the host, a call to
mmc_power_up() triggers an eMMC HW reset via the pwrseq_emmc's
->post_power_on() callback. This isn't really what we want, as
mmc_power_up() is called each time the card is resumed.
As a matter of fact, the current approach may also violate the eMMC spec,
as the involved delays managed in pwrseq_emmc assume both VCC and VCCQ have
been turned on, which isn't the case for VCCQ, unless the regulator is
always on.
Fix this behaviour by aligning to the same procedure used when the mmc host
implements the ->hw_reset() callback and has the MMC_CAP_HW_RESET flag set.
In this way the eMMC HW reset is issued at card detection scan, to cope
with bogus bootloaders and in the error recovery path via the mmc specific
bus_ops->reset() callback.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
|
The ->reset() callback is needed to implement a better support for eMMC HW
reset. The following changes will take advantage of the new callback.
Suggested-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
|
Add the missing endianness conversions when printing the USB
device-descriptor idVendor and idProduct fields during probe.
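The pattern is roughly the following (the message text is illustrative,
not the driver's exact wording): the descriptor fields are little-endian
on the wire, so convert before printing:

dev_info(&udev->dev, "vendor 0x%04x, product 0x%04x\n",
	 le16_to_cpu(udev->descriptor.idVendor),
	 le16_to_cpu(udev->descriptor.idProduct));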
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
AVR32 is gone. Now it's time to clean up the driver by removing
leftovers that were used by AVR32-related code.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
At the end of either the read or write loop, len is always zero, hence
the non-zero check on len and the return of -EIO are redundant and can be
removed.
Detected by CoverityScan, CID#114293 ("Logically dead code")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
ret is signed, however it is printed as unsigned; fix this.
If printed as a negative number the result is easier to read.
No functional change.
Signed-off-by: Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The guid intel_dsm_guid does not need to be in global scope, so make it
static.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
The null check functions do not and must not modify the contents of the
UUID or GUID supplied.
Mark the argument explicitly to reflect that.
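Sketched prototypes after the change, with the argument marked const since
the checks only read the supplied UUID/GUID (see <linux/uuid.h>):

static inline bool guid_is_null(const guid_t *guid)
{
	return guid_equal(guid, &guid_null);
}

static inline bool uuid_is_null(const uuid_t *uuid)
{
	return uuid_equal(uuid, &uuid_null);
}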
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Broxton-T was a forgotten child and we didn't apply the quirks for
Skylake+ properly. Meanwhile, a quirk for reducing the DMA latency
seems specific to the early Broxton model, so we leave it as is.
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
With the port_window support in DMAengine and the sDMA driver we can
convert the driver to DMAengine.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The external request lines are used by tusb6010 on OMAP24xx platforms.
Update the map so the driver can use the dmaengine API to request the DMA
channel. At the same time, add a temporary map containing only the external
DMA request numbers for the DT-booted case on omap24xx, since the tusb6010
stack does not yet support DT boot.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Handle the DMA TX in a similar way as we do for the RX: in the DMA
completion callback.
Since we are no longer using the DMA completion interrupt for TX, we can
as well keep these interrupts disabled, but keep the handler for debug
purposes.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Instead of requesting the DMA channel in tusb_omap_dma_allocate(), do it
when the controller is created and at runtime work from the DMA channel
pool.
This change is needed for the DMAengine conversion of the driver, since
tusb_omap_dma_allocate() is called in interrupt context, which might lead
to locking issues within the DMAengine API when requesting a channel.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
For the DMA we have ch (channel), dmareq and sync_dev parameters in both
the tusb_omap_dma_ch and tusb_omap_dma structs.
By creating a common struct, the code can be simplified when selecting
between the shared or multichannel DMA parameters.
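An illustrative shape of the shared parameter struct: the same triplet can
then live once in tusb_omap_dma (shared DMA) and once per channel in
tusb_omap_dma_ch (multichannel DMA). The member types are a guess based on
the commit text:

struct tusb_dma_data {
	int ch;		/* sDMA channel number */
	s8 dmareq;	/* DMA request line */
	s8 sync_dev;	/* sync device for the transfer */
};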
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Having one musb_ep_select() call instead of the two calls in the if/else
is the same thing, but makes the code a bit simpler to follow.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
When using g_ncm for networking, this flag will make sure that the buffer
is 32-bit aligned so DMA can be used to offload the data movement.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
For tusb6010 the DMA functionality is only possible if the buffer is
32-bit aligned (SYNC access to the FIFO), since with ASYNC access the
TX/RX offset registers will eventually get corrupted.
MUSB_G_NO_SKB_RESERVE will set the quirk_avoids_skb_reserve flag in the
usb_gadget struct to provide a correctly aligned buffer.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
When the port_window support was verified, it was done on a setup where
only the MEM_TO_DEV direction was enabled. This went unnoticed and thus
only this direction worked.
Now that I have managed to get a setup to verify both directions, it
turned out that the setup was incorrect:
the omap_desc members are settings for the slave port, while the omap_sg
members apply to the memory side of the sDMA setup.
Fixes: 527a27591312 ("dmaengine: omap-dma: Fix the port_window support")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: dmaengine@vger.kernel.org
Cc: dan.j.williams@intel.com
Cc: vinod.koul@intel.com
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
If dma_request_slave_channel() fails to return a channel, the driver will
print an error and request to defer probe, regardless of the cause of the
failure.
Defer only if the DMA is not ready yet; otherwise print an error.
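One common way to achieve this, sketched under the assumption that the
driver is moved to dma_request_chan(), which (unlike
dma_request_slave_channel()) returns an ERR_PTR and thus lets us tell
"not ready yet" apart from a real error; the helper name is hypothetical:

static struct dma_chan *my_request_dma(struct device *dev, const char *name)
{
	struct dma_chan *chan = dma_request_chan(dev, name);

	/* Only complain for real errors; -EPROBE_DEFER is propagated
	 * silently so probing can be retried later. */
	if (IS_ERR(chan) && PTR_ERR(chan) != -EPROBE_DEFER)
		dev_err(dev, "Failed to request DMA channel %s: %ld\n",
			name, PTR_ERR(chan));

	return chan;
}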
Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
Reviewed-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-testing
Felipe writes:
usb: changes for v4.13 merge window
This time around we have a total of 57 non-merge commits. A list of the
most important changes follows:
- Improvements to dwc3 tracing interface
- Initial dual-role support for dwc3
- Improvements to how we handle DMA resources in dwc3
- A new f_uac1 implementation which is much more flexible
- Removal of AVR32 bits
- Improvements to f_mass_storage driver
|
|
Without this quirk, the touchpad is not responsive on this product, with
the following message repeated in the logs:
psmouse serio1: bad data from KBC - timeout
Add it to the notimeout list alongside other similar Fujitsu laptops.
Signed-off-by: Daniel Drake <drake@endlessm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk fixes from Stephen Boyd:
"One build fix for an Amlogic clk driver and a handful of Allwinner clk
driver fixes for some DT bindings and a randconfig build error that
all came in this merge window"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
clk: sunxi-ng: a64: Export PLL_PERIPH0 clock for the PRCM
clk: sunxi-ng: h3: Export PLL_PERIPH0 clock for the PRCM
dt-bindings: clock: sunxi-ccu: Add pll-periph to PRCM's needed clocks
clk: sunxi-ng: sun5i: Fix ahb_bist_clk definition
clk: sunxi-ng: enable SUNXI_CCU_MP for PRCM
clk: meson: gxbb: fix build error without RESET_CONTROLLER
clk: sunxi-ng: v3s: Fix usb otg device reset bit
clk: sunxi-ng: a31: Correct lcd1-ch1 clock register offset
|
|
Pull NTB fixes from Jon Mason:
"NTB bug fixes to address the modinfo in ntb_perf, a couple of bugs in
the NTB transport QP calculations, skx doorbells, and sleeping in
ntb_async_tx_submit"
* tag 'ntb-4.12-bugfixes' of git://github.com/jonmason/ntb:
ntb: no sleep in ntb_async_tx_submit
ntb: ntb_hw_intel: Skylake doorbells should be 32bits, not 64bits
ntb_transport: fix bug calculating num_qps_mw
ntb_transport: fix qp count bug
NTB: ntb_test: fix bug printing ntb_perf results
ntb: Correct modinfo usage statement for ntb_perf
|
|
This patch handles the stable UART bindings but also keeps compatibility
with the legacy non-stable bindings until all boards use them.
Reviewed-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Helmut Klein <hgkr.klein@gmail.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Add the documentation for the device tree binding of Amlogic Meson Serial UART.
Signed-off-by: Helmut Klein <hgkr.klein@gmail.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Commit e4fda3a04275 ("serial: don't register CIR serial ports") adds a
check for PORT_8250_CIR to serial8250_register_8250_port(). But the code
isn't needed, as the function never takes that branch when the port is a
CIR serial port.
This patch deletes the dead code.
Signed-off-by: Matthias Brugger <mbrugger@suse.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|