Age  Commit message  Author
2020-09-29  efi: efivars: limit availability to X86 builds  (Ard Biesheuvel)
CONFIG_EFI_VARS controls the code that exposes EFI variables via sysfs entries, which was deprecated before support for non-Intel architectures was added to EFI. So let's limit its availability to Intel architectures for the time being, and hopefully remove it entirely in the not too distant future. While at it, let's remove the module alias so that the module is no longer loaded automatically. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  efi: remove some false dependencies on CONFIG_EFI_VARS  (Ard Biesheuvel)
Remove some false dependencies on CONFIG_EFI_VARS, which only controls the creation of the sysfs entries, whereas the underlying functionality that these modules rely on is enabled unconditionally when CONFIG_EFI is set. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  efi: gsmi: fix false dependency on CONFIG_EFI_VARS  (Ard Biesheuvel)
The gsmi code does not actually rely on CONFIG_EFI_VARS, since it only uses the efivars abstraction that is included unconditionally when CONFIG_EFI is defined. CONFIG_EFI_VARS controls the inclusion of the code that exposes the sysfs entries, and which has been deprecated for some time. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  efi: efivars: un-export efivars_sysfs_init()  (Ard Biesheuvel)
efivars_sysfs_init() is only used locally in the source file that defines it, so make it static and unexport it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  efi: pstore: move workqueue handling out of efivars  (Ard Biesheuvel)
The worker thread that gets kicked off to sync the state of the EFI variable list is only used by the EFI pstore implementation, and is defined in its source file. So let's move its scheduling there as well. Since our efivar_init() scan will bail on duplicate entries, there is no need to disable the workqueue like we did before, so we can run it unconditionally. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  efi: pstore: disentangle from deprecated efivars module  (Ard Biesheuvel)
The EFI pstore implementation relies on the 'efivars' abstraction, which encapsulates the EFI variable store in a way that can be overridden by other backing stores, like the Google SMI one. On top of that, the EFI pstore implementation also relies on the efivars.ko module, which is a separate layer built on top of the 'efivars' abstraction that exposes the [deprecated] sysfs entries for each variable that exists in the backing store. Since the efivars.ko module is deprecated, and all users appear to have moved to the efivarfs file system instead, let's prepare for its removal, by removing EFI pstore's dependency on it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  efi: mokvar-table: fix some issues in new code  (Ard Biesheuvel)
Fix a couple of issues in the new mokvar-table handling code, as pointed out by Arvind and Boris: - don't bother checking the end of the physical region against the start address of the mokvar table, - ensure that we enter the loop with err = -EINVAL, - replace size_t with unsigned long to appease pedantic type equality checks. Reviewed-by: Arvind Sankar <nivedita@alum.mit.edu> Reviewed-by: Lenny Szubowicz <lszubowi@redhat.com> Tested-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-09-29  Merge tag 'dmaengine-fix-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine  (Linus Torvalds)
Pull dmaengine fix from Vinod Koul: "Fix dmatest for misconfigured channel" * tag 'dmaengine-fix-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: dmaengine: dmatest: Prevent to run on misconfigured channel
2020-09-29  Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost  (Linus Torvalds)
Pull virtio fixes from Michael Tsirkin: "A couple of last minute fixes" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: vhost-vdpa: fix backend feature ioctls vhost: Fix documentation
2020-09-29  ftrace: Move RCU is watching check after recursion check  (Steven Rostedt (VMware))
The first thing that the ftrace function callback helper functions should do is to check for recursion. Peter Zijlstra found that when "rcu_is_watching()" had its notrace removed, it caused perf function tracing to crash. This is because the call of rcu_is_watching() is tested before function recursion is checked, and if it is traced, it will cause an infinite recursion loop. rcu_is_watching() should still stay notrace, but to prevent this it should never have crashed in the first place. The recursion prevention must be the first thing done in callback functions. Link: https://lore.kernel.org/r/20200929112541.GM2628@hirez.programming.kicks-ass.net Cc: stable@vger.kernel.org Cc: Paul McKenney <paulmck@kernel.org> Fixes: c68c0fa293417 ("ftrace: Have ftrace_ops_get_func() handle RCU and PER_CPU flags too") Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reported-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
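For illustration, a minimal sketch of the ordering this commit insists on. The per-CPU counter below is a placeholder, not ftrace's actual recursion bookkeeping; only the "recursion check before anything traceable" rule is the point.

```c
#include <linux/ftrace.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>

/* Stand-in guard: ftrace's real per-context recursion protection is more
 * involved; this only illustrates the required ordering. */
static DEFINE_PER_CPU(int, cb_recursion);

static void my_trace_callback(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *op, struct pt_regs *regs)
{
	/* Recursion protection must come first: everything called below,
	 * including rcu_is_watching(), may itself be traced. */
	if (this_cpu_inc_return(cb_recursion) != 1)
		goto out;

	/* Only now is it safe to call helpers that might be traced. */
	if (!rcu_is_watching())
		goto out;

	/* ... actual tracing work goes here ... */
out:
	this_cpu_dec(cb_recursion);
}
```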
2020-09-29  tracing: Fix trace_find_next_entry() accounting of temp buffer size  (Steven Rostedt (VMware))
The temp buffer size variable for trace_find_next_entry() was incorrectly being updated when the size did not change. The temp buffer size should only be updated when it is reallocated. This is mostly an issue when used with ftrace_dump(). That's because ftrace_dump() can not allocate a new buffer, and instead uses a temporary buffer with a fixed size. But the variable that keeps track of that size is incorrectly updated with each call, and it could fall into the path that would try to reallocate the buffer and produce a warning.

 ------------[ cut here ]------------
 WARNING: CPU: 1 PID: 1601 at kernel/trace/trace.c:3548 trace_find_next_entry+0xd0/0xe0
 Modules linked in [..]
 CPU: 1 PID: 1601 Comm: bash Not tainted 5.9.0-rc5-test+ #521
 Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v03.03 07/14/2016
 RIP: 0010:trace_find_next_entry+0xd0/0xe0
 Code: 40 21 00 00 4c 89 e1 31 d2 4c 89 ee 48 89 df e8 c6 9e ff ff 89 ab 54 21 00 00 5b 5d 41 5c 41 5d c3 48 63 d5 eb bf 31 c0 eb f0 <0f> 0b 48 63 d5 eb b4 66 0f 1f 84 00 00 00 00 00 53 48 8d 8f 60 21
 RSP: 0018:ffff95a4f2e8bd70 EFLAGS: 00010046
 RAX: ffffffff96679fc0 RBX: ffffffff97910de0 RCX: ffffffff96679fc0
 RDX: ffff95a4f2e8bd98 RSI: ffff95a4ee321098 RDI: ffffffff97913000
 RBP: 0000000000000018 R08: 0000000000000000 R09: 0000000000000000
 R10: 0000000000000001 R11: 0000000000000046 R12: ffff95a4f2e8bd98
 R13: 0000000000000000 R14: ffff95a4ee321098 R15: 00000000009aa301
 FS: 00007f8565484740(0000) GS:ffff95a55aa40000(0000) knlGS:0000000000000000
 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 000055876bd43d90 CR3: 00000000b76e6003 CR4: 00000000001706e0
 Call Trace:
  trace_print_lat_context+0x58/0x2d0
  ? cpumask_next+0x16/0x20
  print_trace_line+0x1a4/0x4f0
  ftrace_dump.cold+0xad/0x12c
  __handle_sysrq.cold+0x51/0x126
  write_sysrq_trigger+0x3f/0x4a
  proc_reg_write+0x53/0x80
  vfs_write+0xca/0x210
  ksys_write+0x70/0xf0
  do_syscall_64+0x33/0x40
  entry_SYSCALL_64_after_hwframe+0x44/0xa9
 RIP: 0033:0x7f8565579487
 Code: 64 89 02 48 c7 c0 ff ff ff ff eb bb 0f 1f 80 00 00 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
 RSP: 002b:00007ffd40707948 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
 RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f8565579487
 RDX: 0000000000000002 RSI: 000055876bd74de0 RDI: 0000000000000001
 RBP: 000055876bd74de0 R08: 000000000000000a R09: 0000000000000001
 R10: 000055876bdec280 R11: 0000000000000246 R12: 0000000000000002
 R13: 00007f856564a500 R14: 0000000000000002 R15: 00007f856564a700
 irq event stamp: 109958
 ---[ end trace 7aab5b7e51484b00 ]---

Not only fix the updating of the temp buffer, but also do not free the temp buffer before a new buffer is allocated (there's no reason to not continue to use the current temp buffer if an allocation fails). Cc: stable@vger.kernel.org Fixes: 8e99cf91b99bb ("tracing: Do not allocate buffer in trace_find_next_entry() in atomic") Reported-by: Anna-Maria Behnsen <anna-maria@linutronix.de> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
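A hedged sketch of the intended accounting; the names are illustrative, not the exact kernel/trace/trace.c code:

```c
#include <linux/slab.h>

/* Keep the old buffer (and the old recorded size) when no reallocation
 * happens; update the recorded size only when a new buffer is handed out. */
static void *grow_temp_buffer(void *temp, unsigned int *temp_size,
			      unsigned int needed)
{
	void *new;

	if (needed <= *temp_size)
		return temp;		/* big enough: size must stay as-is */

	new = kmalloc(needed, GFP_KERNEL);
	if (!new)
		return temp;		/* keep using the current buffer */

	kfree(temp);			/* free only once the new one exists */
	*temp_size = needed;		/* update the size only on reallocation */
	return new;
}
```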
2020-09-29Merge series "spi: dw: Add full Baikal-T1 SPI Controllers support" from ↵Mark Brown
Serge Semin <Sergey.Semin@baikalelectronics.ru>: Originally I intended to merge a dedicated Baikal-T1 System Boot SPI Controller driver into the kernel and leave the DW APB SSI driver untouched. But after a long discussion (see the link at the bottom of the letter) Mark and Andy persuaded me to integrate what we developed there into the DW APB SSI core driver to be useful for another controllers, which may have got the same peculiarities/problems as ours: - No IRQ. - No DMA. - No GPIO CS, so a native CS is utilized. - small Tx/Rx FIFO depth. - Automatic CS assertion/de-assertion. - Slow system bus. All of them have been fixed in the framework of this patchset in some extent at least for the SPI memory operations. As I expected it wasn't that easy and the integration took that many patches as you can see from the subject. Though some of them are mere cleanups or weakly related with the subject fixes, but we just couldn't leave the code as is at some places since we were working with the DW APB SSI driver anyway. Here is what we did to fix the original DW APB SSI driver, to make it less messy. First two patches are just cleanups to simplify the DW APB SSI device initialization a bit. We suggest to discard the IRQ threshold macro as unused and use a ternary operator to initialize the set_cs callback instead of assigning-and-updating it. Then we've discovered that the n_bytes field of the driver private data is used by the DW APB SSI IRQ handler, which requires it to be initialized before the SMP memory barrier and to be visible from another CPUs. Speaking about the SMP memory barrier. Having one right after the shared resources initialization is enough and there is no point in using the spin-lock to protect the Tx/Rx buffer pointers. The protection functionality is redundant there by the driver design. (Though I have a doubt whether the SMP memory barrier is also required there because the normal IO-methods like readl/writel implies a full memory barrier. So any memory operations performed before them are supposed to be seen by devices and another CPUs. See the patch log for details of my concern.) Thirdly we've found out that there is some confusion in the IRQs masking/unmasking/clearing in the SPI-transfer procedure. Multiple interrupts are unmasked on the SPI-transfer initialization, but just TXEI is only masked back on completion. Similarly IRQ status isn't cleared on the controller reset, which actually makes the reset being not full and errors prone in the controller probe procedure. Another very important optimization is using the IO-relaxed accessors in the dw_read_io_reg()/dw_write_io_reg() methods. Since the Tx/Rx FIFO data registers are the most frequently accessible controller resource, using relaxed accessors there will significantly improve the data read/write performance. At least on Baikal-T1 SoC such modification opens up a way to have the DW APB SSI controller working with higher SPI bus speeds, than without it. Fifthly we've made an effort to cleanup the code using the SPI-device private data - chip_data. We suggest to remove the chip type from there since it isn't used and isn't implemented right anyway. Then instead of having a bus speed, clock divider, transfer mode preserved there, and recalculating the CR0 fields of the SPI-device-specific phase, polarity and frame format each time the SPI transfer is requested, we can save it in the chip_data instance. 
By doing so we'll make that structure finally used as it was supposed to by design (see the spi-fsl-dspi.c, spi-pl022.c, spi-pxa2xx.c drivers for examples). Sixthly instead of having the SPI-transfer specific CR0-update callback, we suggest to implement the DW APB SSI controller capabilities approach. By doing so we can now inject the vendor-specific peculiarities in different parts of the DW APB SSI core driver (which is required to implement both SPI-transfers and the SPI memory operations). This will also make the code less confusing like defining a callback in the core driver, setting it up in the glue layer, then calling it from the core driver again. Seeing the small capabilities implementation embedded in-situ is more readable than tracking the callbacks assignments. This will concern the CS-override, Keembay master setup, DW SSI-specific CR0 registers layout capabilities. Seventhly since there are going to be two types of the transfers implemented in the DW APB SSI core driver, we need a common method to set the controller configuration like, Tx/Rx-mode, bus speed, data frame size and number of data frames to read in case of the memory operations. So we just detached the corresponding code from the SPI-transfer-one method and made it to be a part of the new dw_spi_update_config() function, which is former update_cr0(). Note that the new method will be also useful for the glue drivers, which due to the hardware design need to create their own memory operations (for instance, for the dirmap-operations provided in the Baikal-T System Boot SPI controller driver). Eighthly it is the data IO procedure and IRQ-based SPI-transfer implementation refactoring. The former one will look much simpler if the buffers initial pointers and the buffers length data utilized instead of the Tx/Rx buffers start and end pointers. The later one currently lacks of valid execution at the final stage of the SPI-transfer. So if there is no data left to send, but there is still data which needs to be received, the Tx FIFO Empty IRQ will constantly happen until all of the requested inbound data is received. So we suggest to fix that by taking the Rx FIFO Empty IRQ into account. Ninthly it's potentially errors prone to enable the DW APB SSI interrupts before enabling the chip. It specifically concerns a case if for some reason the DW APB SSI IRQs handler is executed before the controller is enabled. That will cause a part of the outbound data loss. So we suggest to reverse the order. Tenthly in order to be able to pre-initialize the Tx FIFO with data and only the start the SPI memory operations we need to have any CS de-activated. We'll fulfil that requirement by explicitly clearing the CS on the SPI transfer completion and at the explicit controller reset. Then seeing all the currently available and potentially being created types of the SPI transfers need to perform the DW APB SSI controller status register check and the errors handler procedure, we've created a common method for all of them. Eleventhly if before we've mostly had a series of fixups, cleanups and refactorings, here we've finally come to the new functionality implementation. It concerns the poll-based transfer (as Baikal-T1 System Boot SPI controller lacks a dedicated IRQ lane connected) and the SPI memory operations implementation. If the former feature is pretty much straightforward (see the patch log for details), the later one is a bit tricky. 
It's based on the EEPROM-read (write-then-read) and the Tx-only modes of the DW APB SSI controller, which, by performing the automatic data read and write, let us implement a faster IO procedure than the Tx-Rx-mode-based approach. Having the memory operations implemented that way is the best thing we can currently do to provide error-less SPI transfers to SPI devices with a native CS attached.

Note the approach utilized here to develop the SPI memory operations can also be used to create "automatic CS toggle problem"-free(ish) SPI transfers (combine SPI-message transfers into two buffers, disable interrupts, push-pull the combined data). But we don't provide a solution in the framework of this patchset. It is a matter of a dedicated one, which we currently don't intend to spend our time on.

Finally, at the closure of this patchset you'll find patches which provide the Baikal-T1-specific DW APB SSI controllers support. The SoC has got three SPI controllers. Two of them are pretty much normal DW APB SSI interfaces: with IRQ, DMA, FIFOs of 64 words depth, 4x CSs. But the third one, being a part of the Baikal-T1 System Boot Controller, has got very limited resources: no IRQ, no DMA, only a single native chip-select and Tx/Rx FIFOs with just 8 words depth available. In order to provide transparent initial boot code execution, the System Boot SPI Controller is also utilized by a vendor-specific IP-block, which exposes an SPI flash memory direct mapping interface. Please see the corresponding patch for details.

Link: https://lore.kernel.org/linux-spi/20200508093621.31619-1-Sergey.Semin@baikalelectronics.ru/ [1]
[1] "LINUX KERNEL MEMORY BARRIERS", Documentation/memory-barriers.txt, Section "KERNEL I/O BARRIER EFFECTS"

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Cc: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru>
Cc: Ramil Zaripov <Ramil.Zaripov@baikalelectronics.ru>
Cc: Pavel Parkhomenko <Pavel.Parkhomenko@baikalelectronics.ru>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Lars Povlsen <lars.povlsen@microchip.com>
Cc: wuxu.wu <wuxu.wu@huawei.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: linux-spi@vger.kernel.org
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Serge Semin (30):
  spi: dw: Discard IRQ threshold macro
  spi: dw: Use ternary op to init set_cs callback
  spi: dw: Initialize n_bytes before the memory barrier
  Revert: spi: spi-dw: Add lock protect dw_spi rx/tx to prevent concurrent calls
  spi: dw: Clear IRQ status on DW SPI controller reset
  spi: dw: Disable all IRQs when controller is unused
  spi: dw: Use relaxed IO-methods to access FIFOs
  spi: dw: Discard DW SSI chip type storages
  spi: dw: Convert CS-override to DW SPI capabilities
  spi: dw: Add KeemBay Master capability
  spi: dw: Add DWC SSI capability
  spi: dw: Detach SPI device specific CR0 config method
  spi: dw: Update SPI bus speed in a config function
  spi: dw: Simplify the SPI bus speed config procedure
  spi: dw: Update Rx sample delay in the config function
  spi: dw: Add DW SPI controller config structure
  spi: dw: Refactor data IO procedure
  spi: dw: Refactor IRQ-based SPI transfer procedure
  spi: dw: Perform IRQ setup in a dedicated function
  spi: dw: Unmask IRQs after enabling the chip
  spi: dw: Discard chip enabling on DMA setup error
  spi: dw: De-assert chip-select on reset
  spi: dw: Explicitly de-assert CS on SPI transfer completion
  spi: dw: Move num-of retries parameter to the header file
  spi: dw: Add generic DW SSI status-check method
  spi: dw: Add memory operations support
  spi: dw: Introduce max mem-ops SPI bus frequency setting
  spi: dw: Add poll-based SPI transfers support
  dt-bindings: spi: dw: Add Baikal-T1 SPI Controllers
  spi: dw: Add Baikal-T1 SPI Controller glue driver

 .../bindings/spi/snps,dw-apb-ssi.yaml |  33 +-
 drivers/spi/Kconfig                   |  29 +
 drivers/spi/Makefile                  |   1 +
 drivers/spi/spi-dw-bt1.c              | 339 +++++++++
 drivers/spi/spi-dw-core.c             | 642 ++++++++++++++----
 drivers/spi/spi-dw-dma.c              |  16 +-
 drivers/spi/spi-dw-mmio.c             |  36 +-
 drivers/spi/spi-dw.h                  |  85 ++-
 8 files changed, 960 insertions(+), 221 deletions(-)
 create mode 100644 drivers/spi/spi-dw-bt1.c

--
2.27.0
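For reference, a sketch of the shared configuration descriptor the cover letter describes for dw_spi_update_config(); the field names are reconstructed from the description above and should be treated as an assumption rather than the literal patch contents:

```c
/* Assumed shape of the common config descriptor: both transfer_one() and
 * the SPI memory operations fill one of these and pass it to the shared
 * dw_spi_update_config() helper mentioned in the cover letter. */
struct dw_spi_cfg {
	u8 tmode;	/* Tx/Rx, Tx-only, Rx-only or EEPROM-read mode */
	u8 dfs;		/* data frame size */
	u32 ndf;	/* number of data frames (Rx-only/EEPROM-read modes) */
	u32 freq;	/* requested SPI bus frequency */
};
```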
2020-09-29  spi: spi-dw: Remove extraneous locking  (Serge Semin)
There is no point in having the commit 19b61392c5a8 ("spi: spi-dw: Add lock protect dw_spi rx/tx to prevent concurrent calls") applied. The commit author assumed that the problem with the rx data mismatch was due to the lack of data protection, while most likely it was caused by the lack of a memory barrier. So having the commit bfda044533b2 ("spi: dw: use "smp_mb()" to avoid sending spi data error") applied would be enough to fix the problem. Indeed the spin unlock operation makes sure each memory operation issued before the release completes before the unlock itself completes. In other words it works as an implicit one-way memory barrier. So having both smp_mb() and spin_unlock_irqrestore() here is just redundant; one of them would be enough. It's better to leave the smp_mb(), since the Tx/Rx buffer consistency is provided by the data transfer algorithm implementation: first we initialize the buffer pointers, then make sure the assignments are visible to the other CPUs by calling smp_mb(), and only after that enable the interrupt, whose handler uses the buffers. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-5-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
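A minimal sketch of that ordering, loosely following the DW SPI driver's field and helper names (struct dw_spi, spi_umask_intr(), SPI_INT_TXEI from spi-dw.h); treat it as an illustration, not the driver's exact code:

```c
/* Assumes the driver's private header (spi-dw.h). */
static void dw_spi_start_irq_xfer(struct dw_spi *dws,
				  struct spi_transfer *transfer)
{
	dws->tx = (void *)transfer->tx_buf;
	dws->rx = transfer->rx_buf;
	dws->len = transfer->len;

	/* Publish the buffer pointers before the IRQ handler can run. */
	smp_mb();

	/* Only now unmask the Tx-empty IRQ; its handler uses dws->tx/rx. */
	spi_umask_intr(dws, SPI_INT_TXEI);
}
```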
2020-09-29  spi: dw: Add KeemBay Master capability  (Serge Semin)
In a further commit we'll have to get rid of the update_cr0() callback and define a DW SSI capability instead. Since Keem Bay master/slave functionality is controlled by the CTRL0 register bitfield, we need to first move the master mode selection into the internal corresponding update_cr0 method, which would be activated by means of the dedicated DW_SPI_CAP_KEEMBAY_MST capability setup. Note this will also be useful if the driver is ever altered to support the DW SPI slave interface. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-11-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw: Convert CS-override to DW SPI capabilities  (Serge Semin)
There are several vendor-specific versions of the DW SPI controllers, each of which may have some peculiarities with respect to the original IP-core. Seeing it has already caused adding flags and a callback into the DW SPI private data, let's introduce a generic capabilities interface to tune the generic DW SPI controller driver up in accordance with the particular controller specifics. It's done by converting a simple Alpine-specific CS-override capability into the DW SPI controller capability activated by setting the DW_SPI_CAP_CS_OVERRIDE flag. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-10-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
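A short sketch of how a glue layer expresses the quirk under the capabilities approach; the flag, struct and init-callback names follow the series as described above and are assumptions, not verified patch contents:

```c
/* In the glue driver (spi-dw-mmio.c), the Alpine quirk becomes a flag in
 * the core driver's capabilities field rather than a callback override. */
static int dw_spi_alpine_init(struct platform_device *pdev,
			      struct dw_spi_mmio *dwsmmio)
{
	dwsmmio->dws.caps = DW_SPI_CAP_CS_OVERRIDE;

	return 0;
}
```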
2020-09-29  spi: dw: Discard DW SSI chip type storages  (Serge Semin)
Keeping SPI peripheral devices type is pointless since first it hasn't been functionally utilized by any of the client drivers/code and second it won't work for Microwire type at the very least. Moreover there is no point in setting up the type by means of the chip-data in the modern kernel. The peripheral devices with specific interface type need to be detected in order to activate the corresponding frame format. It most likely will require some peripheral device specific DT property or whatever to find out the interface protocol. So let's remove the serial interface type fields from the DW APB SSI controller and the SPI peripheral device private data. Note we'll preserve the explicit SSI_MOTO_SPI interface type setting up to signify the only currently supported interface protocol. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-9-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw: Use relaxed IO-methods to access FIFOs  (Serge Semin)
In accordance with [1] the relaxed methods are guaranteed to be ordered with respect to other accesses from the same CPU thread to the same peripheral. This is what we need during the data read/write from/to the controller FIFOs being executed within a single IRQ handler or a kernel task. Such optimization shall significantly speed the data reader and writer up. For instance, using the relaxed IO-accessors on Baikal-T1 lets the driver support SPI memory operations at a bus frequency three times higher than if the normal IO-accessors were used. [1] "LINUX KERNEL MEMORY BARRIERS", Documentation/memory-barriers.txt, Section "KERNEL I/O BARRIER EFFECTS" Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-8-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
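A sketch of the accessor change, assuming a 32-bit data register at offset DW_SPI_DR as in the DW APB SSI register map (the helper names here are illustrative):

```c
#include <linux/io.h>

/* Assumes struct dw_spi and the DW_SPI_DR offset from spi-dw.h. */
static inline u32 dw_spi_read_dr(struct dw_spi *dws)
{
	/* Still ordered against other accesses to this device from the same
	 * CPU, but without the costly full barrier that readl() implies. */
	return readl_relaxed(dws->regs + DW_SPI_DR);
}

static inline void dw_spi_write_dr(struct dw_spi *dws, u32 val)
{
	writel_relaxed(val, dws->regs + DW_SPI_DR);
}
```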
2020-09-29  spi: dw: Disable all IRQs when controller is unused  (Serge Semin)
It's a good practice to disable all IRQs if a device is fully unused. In our case it is supposed to be done before requesting the IRQ and after the last byte of an SPI transfer is received. In the former case it's required to prevent the IRQ handler invocation before the driver data is fully initialized (which may happen if the IRQ status has been left uncleared before the device is probed). So we just moved the spi_hw_init() method invocation to the earlier stage before requesting the IRQ. In the latter case there is just no point in having any of the IRQs enabled between SPI transfers and when there is no SPI message currently being processed. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-7-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw: Clear IRQ status on DW SPI controller reset  (Serge Semin)
It turns out the IRQ status isn't cleared after switching the controller off and getting it back on, which may cause false error interrupts to be raised if the controller has been unsuccessfully used by, for instance, a bootloader before the driver is loaded. Let's explicitly clear the interrupt status in the dedicated controller reset method. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-6-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
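A sketch of a reset helper with the added status clear; the register offsets and inline helpers follow the driver's conventions (spi-dw.h) and the exact sequence should be treated as illustrative:

```c
/* Assumes spi_enable_chip()/spi_mask_intr()/dw_readl() and DW_SPI_ICR
 * from spi-dw.h. */
static void spi_reset_chip(struct dw_spi *dws)
{
	spi_enable_chip(dws, 0);	/* switch the controller off */
	spi_mask_intr(dws, 0xff);	/* mask all interrupt sources */
	dw_readl(dws, DW_SPI_ICR);	/* reading ICR clears stale IRQ status */
	spi_enable_chip(dws, 1);	/* re-enable in a clean state */
}
```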
2020-09-29  spi: dw: Initialize n_bytes before the memory barrier  (Serge Semin)
Since the n_bytes field of the DW SPI private data is also utilized by the IRQ handler, we need to make sure its initialization is done before the memory barrier. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-4-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw: Discard IRQ threshold macro  (Serge Semin)
The macro has been unused since half of the FIFO length was defined to be the marker of the IRQ. Let's remove its definition. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-2-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Add one-by-one SG list entries transfer  (Serge Semin)
In case if at least one of the requested DMA engine channels doesn't support the hardware accelerated SG list entries traverse, the DMA driver will most likely work that around by performing the IRQ-based SG list entries resubmission. That might and will cause a problem if the DMA Tx channel is recharged and re-executed before the Rx DMA channel. Due to non-deterministic IRQ-handler execution latency the DMA Tx channel will start pushing data to the SPI bus before the Rx DMA channel is even reinitialized with the next inbound SG list entry. By doing so the DMA Tx channel will implicitly start filling the DW APB SSI Rx FIFO up, which while the DMA Rx channel being recharged and re-executed will eventually be overflown. In order to solve the problem we have to feed the DMA engine with SG list entries one-by-one. It shall keep the DW APB SSI Tx and Rx FIFOs synchronized and prevent the Rx FIFO overflow. Since in general the SPI tx_sg and rx_sg lists may have different number of entries of different lengths (though total length should match) we virtually split the SG-lists to the set of DMA transfers, which length is a minimum of the ordered SG-entries lengths. The solution described above is only executed if a full-duplex SPI transfer is requested and the DMA engine hasn't provided channels with hardware accelerated SG list traverse capability to handle both SG lists at once. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200920112322.24585-12-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
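A rough sketch of the one-by-one scheme described above, walking both SG lists in lock-step and transferring the minimum of the two current entry lengths per step. The chunk-submission helper is hypothetical and the surrounding context is simplified:

```c
#include <linux/scatterlist.h>
#include <linux/spi/spi.h>

/* Hypothetical helper: submit one Rx+Tx DMA chunk of 'len' bytes at the
 * given bus addresses and wait for both channels to complete. */
static int dw_spi_dma_do_chunk(struct dw_spi *dws, dma_addr_t tx_addr,
			       dma_addr_t rx_addr, u32 len);

static int dw_spi_dma_transfer_one(struct dw_spi *dws,
				   struct spi_transfer *xfer)
{
	struct scatterlist *tx_sg = xfer->tx_sg.sgl;
	struct scatterlist *rx_sg = xfer->rx_sg.sgl;
	dma_addr_t tx_addr = 0, rx_addr = 0;
	u32 tx_len = 0, rx_len = 0, len;
	int ret;

	while (tx_sg && rx_sg) {
		if (!tx_len) {
			tx_addr = sg_dma_address(tx_sg);
			tx_len = sg_dma_len(tx_sg);
		}
		if (!rx_len) {
			rx_addr = sg_dma_address(rx_sg);
			rx_len = sg_dma_len(rx_sg);
		}

		/* Never push more than the shorter of the two current
		 * entries, so the Rx channel is always re-armed in time. */
		len = min(tx_len, rx_len);

		ret = dw_spi_dma_do_chunk(dws, tx_addr, rx_addr, len);
		if (ret)
			return ret;

		tx_addr += len;
		rx_addr += len;
		tx_len -= len;
		rx_len -= len;
		if (!tx_len)
			tx_sg = sg_next(tx_sg);
		if (!rx_len)
			rx_sg = sg_next(rx_sg);
	}

	return 0;
}
```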
2020-09-29  spi: dw-dma: Pass exact data to the DMA submit and wait methods  (Serge Semin)
In order to use the DMA submission and waiting methods in both generic DMA-based SPI transfer and one-by-one DMA SG entries transmission functions, we need to modify the dw_spi_dma_wait() and dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. So instead of getting the SPI transfer object as the second argument they must accept the exact data structure instances they imply to use. Those are the current transfer length and the SPI bus frequency in case of dw_spi_dma_wait(), and SG list together with number of list entries in case of the DMA Tx/Rx submission methods. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-11-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Move DMAC register cleanup to DMA transfer method  (Serge Semin)
DW APB SSI DMA driver doesn't use the native SPI core wait API since commit bdbdf0f06337 ("spi: dw: Locally wait for the DMA transfers completion"). Due to that the driver can now clear the DMAC register in a single place synchronously with the DMA transactions completion or failure. After that all the possible code paths are still covered: 1) DMA completion callbacks are executed in case if the corresponding DMA transactions are finished. When they are, one of them will eventually wake the SPI messages pump kernel thread and dw_spi_dma_transfer_all() method will clean the DMAC register as implied by this patch. 2) dma_stop is called when the SPI core detects an error either returned from the transfer_one() callback or set in the SPI message status field. Both types of errors will be noticed by the dw_spi_dma_transfer_all() method. 3) dma_exit is called when either SPI controller driver or the corresponding device is removed. In any case the SPI core will first flush the SPI messages pump kernel thread, so any pending or in-fly SPI transfers will be finished before that. Due to all of that let's simplify the DW APB SSI DMA driver a bit and move the DMAC register cleanup to a single place in the dw_spi_dma_transfer_all() method. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-10-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Detach DMA transfer into a dedicated method  (Serge Semin)
In order to add an alternative method of DMA-based SPI transfer first we need to detach the currently available one from the common code. Here we move the normal DMA-based SPI transfer execution functionality into a dedicated method. It will be utilized if either the DMA engine supports an unlimited number of SG entries or a Tx-only SPI transfer is requested. But currently just use it for any SPI transfer. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-9-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Remove DMA Tx-desc passing around  (Serge Semin)
It's pointless to pass the Rx and Tx transfers DMA Tx-descriptors, since they are used in the Tx/Rx submit method only. Instead just return the submission status from these methods. This alteration will make the code less complex. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-8-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Check DMA Tx-desc submission status  (Serge Semin)
We suggest to add the dmaengine_submit() return value test for errors. It has been unnecessary while the driver was expected to be utilized in pair with the DW DMAC. But since the driver can now be used with any DMA engine, it might be useful to track errors on DMA submissions so as not to miss them and end up with unpredictable driver behaviour. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-7-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
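A sketch of the added check using the generic dmaengine helpers; the surrounding context is simplified and not taken verbatim from the driver:

```c
#include <linux/dmaengine.h>

/* The descriptor is assumed to have been prepared already (for example
 * with dmaengine_prep_slave_sg()); here we only submit it and verify the
 * returned cookie. */
static int dw_spi_dma_submit(struct dma_chan *chan,
			     struct dma_async_tx_descriptor *desc)
{
	dma_cookie_t cookie;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie)) {
		dmaengine_terminate_sync(chan);
		return -EIO;
	}

	dma_async_issue_pending(chan);

	return 0;
}
```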
2020-09-29  spi: dw-dma: Move DMA transfers submission to the channels prep methods  (Serge Semin)
Indeed we can freely move the dmaengine_submit() method invocation and the Tx and Rx busy flag setting into the DMA Tx/Rx prepare methods. Since the Tx/Rx preparation method is now mainly used for the DMA transfers submission, here we suggest to rename it to have the _submit_{r,t}x suffix instead. By having this alteration applied first we implement another code preparation before adding the one-by-one DMA SG entries transmission, second we now have the dma_async_tx_descriptor descriptor used locally only in the new DMA transfers submission methods (this will be cleaned up a bit later), third we make the generic transfer method more readable, where now the functionality of submission, execution and wait procedures is transparently split up instead of having a preparation, intermixed submission/execution and wait procedures. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-6-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Check rx_buf availability in the xfer method  (Serge Semin)
Checking rx_buf for being NULL and returning NULL from the Rx-channel preparation method doesn't let us distinguish that situation from errors happening during the Rx SG-list preparation. So it's better to make sure that rx_buf is not NULL and that full-duplex communication is requested prior to calling the Rx preparation method. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-5-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Configure the DMA channels in dma_setup  (Serge Semin)
Mainly this is a preparation patch before adding one-by-one DMA SG entries transmission. But logically the Tx and Rx DMA channels setup should be performed in the dma_setup() callback anyway. So we'll move the DMA slave channels src/dst burst lengths, address and address width configuration from the Tx/Rx channels preparation methods to the dedicated functions and then make sure it's called at the DMA setup stage. Note we now make sure the return value of the dmaengine_slave_config() method doesn't indicate an error. It has been unnecessary when the DW DMAC is utilized as the DMA engine, since its device_config() callback always returns zero (though that might change in future). But since the DW APB SSI driver now supports any DMA back-end, we must make sure the DMA device configuration has been successful before proceeding with further setups. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-4-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
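A sketch of a Tx slave-channel configuration done at dma_setup() time with the return value checked; the address-width and burst values are illustrative, and dws->dma_addr/dws->txchan are assumed from the driver's private data:

```c
#include <linux/dmaengine.h>

static int dw_spi_dma_config_tx(struct dw_spi *dws)
{
	struct dma_slave_config txconf = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr = dws->dma_addr,		/* Tx FIFO bus address */
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES,
		.dst_maxburst = 16,
		.device_fc = false,
	};

	/* A non-DW DMA back-end may legitimately fail here, so propagate
	 * the status instead of assuming success. */
	return dmaengine_slave_config(dws->txchan, &txconf);
}
```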
2020-09-29  spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified  (Serge Semin)
Since commit 46164fde6b78 ("spi: dw: Fix Rx-only DMA transfers") if the DMA interface is enabled, then a Tx-buffer must be available in each SPI transfer. It's required since in order to activate the incoming data reception either DMA or the CPU must be pushing data out to the SPI bus. But the DW APB SSI DMA driver code is still left in a state as if the Tx-buffer were optional, which is no longer true. Let's fix it so an error is returned if no Tx-buffer is detected and DMA Tx is always enabled. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-3-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29  spi: dw-dma: Set DMA Level registers on init  (Serge Semin)
Indeed the registers content doesn't get cleared when the SPI controller is disabled and enabled. Max burst lengths aren't changed since the Rx and Tx DMA channels are requested on init stage and are kept acquired until the device is removed. Obviously SPI controller FIFO depth can't be changed. Due to all of that we can safely move the DMA Transmit and Receive data level registers initialization to the SPI controller DMA init stage (when the SPI controller is being probed) instead of doing it for each SPI transfer when dma_setup is called. This shall speed the DMA-based SPI transfer initialization up a bit, particularly if the APB bus is relatively slow. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-2-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
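A sketch of programming the DMA trigger levels once at init time; the register offsets follow the DW APB SSI map (spi-dw.h) and the burst-derived values are illustrative:

```c
/* Assumes dw_writel() and the DW_SPI_DMATDLR/DW_SPI_DMARDLR offsets. */
static void dw_spi_dma_init_levels(struct dw_spi *dws, u32 burst)
{
	/* Trigger Tx DMA once the Tx FIFO has room for a full burst... */
	dw_writel(dws, DW_SPI_DMATDLR, dws->fifo_len - burst);
	/* ...and Rx DMA once a burst worth of data has been received. */
	dw_writel(dws, DW_SPI_DMARDLR, burst - 1);
}
```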
2020-09-29  iommu/qcom: add missing put_device() call in qcom_iommu_of_xlate()  (Yu Kuai)
If of_find_device_by_node() succeeds, qcom_iommu_of_xlate() doesn't have a corresponding put_device(). Thus add put_device() to fix the exception handling for this function implementation. Fixes: 0ae349a0f33f ("iommu/qcom: Add qcom_iommu") Acked-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Yu Kuai <yukuai3@huawei.com> Link: https://lore.kernel.org/r/20200929014037.2436663-1-yukuai3@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
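A simplified sketch of the reference-counting pattern being fixed (not the literal driver code; the function and error path are illustrative):

```c
#include <linux/of_platform.h>
#include <linux/platform_device.h>

/* of_find_device_by_node() takes a reference on the returned device, so
 * every path that stops using the pointer must drop that reference. */
static int example_of_xlate(struct device *dev, struct of_phandle_args *args)
{
	struct platform_device *iommu_pdev;
	int ret = 0;

	iommu_pdev = of_find_device_by_node(args->np);
	if (!iommu_pdev)
		return -EINVAL;

	if (!platform_get_drvdata(iommu_pdev)) {
		ret = -EPROBE_DEFER;	/* this path used to leak the ref */
		goto out_put;
	}

	/* ... consume the device's driver data ... */

out_put:
	put_device(&iommu_pdev->dev);
	return ret;
}
```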
2020-09-29  Merge tag 'phy-fixes-2-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy into usb-linus  (Greg Kroah-Hartman)
Vinod writes: phy: Second round of fixes for 5.9 *) Fix of leak in TI phy driver * tag 'phy-fixes-2-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy: phy: ti: am654: Fix a leak in serdes_am654_probe()
2020-09-29  arm64: Add support for PR_SPEC_DISABLE_NOEXEC prctl() option  (Will Deacon)
The PR_SPEC_DISABLE_NOEXEC option to the PR_SPEC_STORE_BYPASS prctl() allows the SSB mitigation to be enabled only until the next execve(), at which point the state will revert back to PR_SPEC_ENABLE and the mitigation will be disabled. Add support for PR_SPEC_DISABLE_NOEXEC on arm64. Reported-by: Anthony Steinhauser <asteinhauser@google.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Pull in task_stack_page() to Spectre-v4 mitigation code  (Will Deacon)
The kbuild robot reports that we're relying on an implicit inclusion to get a definition of task_stack_page() in the Spectre-v4 mitigation code, which is not always in place for some configurations: | arch/arm64/kernel/proton-pack.c:329:2: error: implicit declaration of function 'task_stack_page' [-Werror,-Wimplicit-function-declaration] | task_pt_regs(task)->pstate |= val; | ^ | arch/arm64/include/asm/processor.h:268:36: note: expanded from macro 'task_pt_regs' | ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1) | ^ | arch/arm64/kernel/proton-pack.c:329:2: note: did you mean 'task_spread_page'? Add the missing include to fix the build error. Fixes: a44acf477220 ("arm64: Move SSBD prctl() handler alongside other spectre mitigation code") Reported-by: Anthony Steinhauser <asteinhauser@google.com> Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/202009260013.Ul7AD29w%lkp@intel.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  KVM: arm64: Allow patching EL2 vectors even when KASLR is not enabled  (Will Deacon)
Patching the EL2 exception vectors is integral to the Spectre-v2 workaround, where it can be necessary to execute CPU-specific sequences to nobble the branch predictor before running the hypervisor text proper. Remove the dependency on CONFIG_RANDOMIZE_BASE and allow the EL2 vectors to be patched even when KASLR is not enabled. Fixes: 7a132017e7a5 ("KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE") Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/202009221053.Jv1XsQUZ%lkp@intel.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Get rid of arm64_ssbd_state  (Marc Zyngier)
Out with the old ghost, in with the new... Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  KVM: arm64: Convert ARCH_WORKAROUND_2 to arm64_get_spectre_v4_state()  (Marc Zyngier)
Convert the KVM WA2 code to using the Spectre infrastructure, making the code much more readable. It also allows us to take SSBS into account for the mitigation. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  KVM: arm64: Get rid of kvm_arm_have_ssbd()  (Marc Zyngier)
kvm_arm_have_ssbd() is now completely unused, get rid of it. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  KVM: arm64: Simplify handling of ARCH_WORKAROUND_2  (Marc Zyngier)
Owing to the fact that the host kernel is always mitigated, we can drastically simplify the WA2 handling by keeping the mitigation state ON when entering the guest. This means the guest is either unaffected or not mitigated. This results in a nice simplification of the mitigation space, and the removal of a lot of code that was never really used anyway. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Rewrite Spectre-v4 mitigation code  (Will Deacon)
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Move SSBD prctl() handler alongside other spectre mitigation code  (Will Deacon)
As part of the spectre consolidation effort to shift all of the ghosts into their own proton pack, move all of the horrible SSBD prctl() code out of its own 'ssbd.c' file. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4  (Will Deacon)
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Treat SSBS as a non-strict system feature  (Will Deacon)
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Group start_thread() functions together  (Will Deacon)
The is_ttbrX_addr() functions have somehow ended up in the middle of the start_thread() functions, so move them out of the way to keep the code readable. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  KVM: arm64: Set CSV2 for guests on hardware unaffected by Spectre-v2  (Marc Zyngier)
If the system is not affected by Spectre-v2, then advertise to the KVM guest that it is not affected, without the need for a safelist in the guest. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Rewrite Spectre-v2 mitigation code  (Will Deacon)
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Introduce separate file for spectre mitigations and reporting  (Will Deacon)
The spectre mitigation code is spread over a few different files, which makes it both hard to follow, but also hard to remove it should we want to do that in future. Introduce a new file for housing the spectre mitigations, and populate it with the spectre-v1 reporting code to start with. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29  arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2  (Will Deacon)
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <will@kernel.org>