2020-09-29spi: dw: Initialize n_bytes before the memory barrierSerge Semin
Since the n_bytes field of the DW SPI private data is also utilized by the IRQ handler, we need to make sure its initialization is done before the memory barrier. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-4-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
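A minimal sketch of the intended ordering, assuming smp_mb() is the barrier in question and using the DW SPI private data fields (the surrounding assignments are illustrative):

	/* Publish everything the IRQ handler reads before it can fire */
	dws->n_bytes = DIV_ROUND_UP(transfer->bits_per_word, BITS_PER_BYTE);
	dws->tx = (void *)transfer->tx_buf;
	dws->rx = transfer->rx_buf;

	/* Make the fields above visible before the IRQ handler may run */
	smp_mb();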
2020-09-29spi: dw: Discard IRQ threshold macroSerge Semin
The macro has been unused since half of the FIFO length was defined to be the IRQ threshold marker. Let's remove its definition. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112914.26501-2-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Add one-by-one SG list entries transferSerge Semin
If at least one of the requested DMA engine channels doesn't support hardware-accelerated traversal of SG list entries, the DMA driver will most likely work that around by performing IRQ-based SG list entry resubmission. That might and will cause a problem if the DMA Tx channel is recharged and re-executed before the Rx DMA channel. Due to the non-deterministic IRQ-handler execution latency, the DMA Tx channel will start pushing data to the SPI bus before the Rx DMA channel is even reinitialized with the next inbound SG list entry. By doing so the DMA Tx channel will implicitly start filling the DW APB SSI Rx FIFO up, which, while the DMA Rx channel is being recharged and re-executed, will eventually overflow. In order to solve the problem we have to feed the DMA engine with SG list entries one-by-one. That keeps the DW APB SSI Tx and Rx FIFOs synchronized and prevents the Rx FIFO overflow. Since in general the SPI tx_sg and rx_sg lists may have a different number of entries of different lengths (though the total lengths should match), we virtually split the SG lists into a set of DMA transfers whose lengths are the minimum of the lengths of the current pair of SG entries. The solution described above is only executed if a full-duplex SPI transfer is requested and the DMA engine hasn't provided channels with the hardware-accelerated SG list traversal capability to handle both SG lists at once. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200920112322.24585-12-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
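A rough, hypothetical sketch of the lock-step SG-list walk described above (the actual descriptor submission and the per-entry offset bookkeeping are omitted):

	struct scatterlist *tx_sg = xfer->tx_sg.sgl, *rx_sg = xfer->rx_sg.sgl;
	unsigned int tx_len = sg_dma_len(tx_sg), rx_len = sg_dma_len(rx_sg);

	while (tx_sg && rx_sg) {
		/* One DMA transfer covers the shorter of the two current entries */
		unsigned int len = min(tx_len, rx_len);

		/* submit a Tx and an Rx descriptor of 'len' bytes here and wait
		 * for both to complete before recharging the channels */

		tx_len -= len;
		if (!tx_len) {
			tx_sg = sg_next(tx_sg);
			tx_len = tx_sg ? sg_dma_len(tx_sg) : 0;
		}
		rx_len -= len;
		if (!rx_len) {
			rx_sg = sg_next(rx_sg);
			rx_len = rx_sg ? sg_dma_len(rx_sg) : 0;
		}
	}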
2020-09-29spi: dw-dma: Pass exact data to the DMA submit and wait methodsSerge Semin
In order to use the DMA submission and waiting methods in both the generic DMA-based SPI transfer and the one-by-one DMA SG entry transmission functions, we need to modify the dw_spi_dma_wait() and dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. So instead of getting the SPI transfer object as the second argument they must accept the exact data they actually use: the current transfer length and the SPI bus frequency in the case of dw_spi_dma_wait(), and the SG list together with the number of list entries in the case of the DMA Tx/Rx submission methods. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-11-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Move DMAC register cleanup to DMA transfer methodSerge Semin
The DW APB SSI DMA driver doesn't use the native SPI core wait API since commit bdbdf0f06337 ("spi: dw: Locally wait for the DMA transfers completion"). Due to that the driver can now clear the DMAC register in a single place synchronously with the DMA transaction completion or failure. After that all the possible code paths are still covered:
1) DMA completion callbacks are executed when the corresponding DMA transactions are finished. When they are, one of them will eventually wake the SPI messages pump kernel thread and the dw_spi_dma_transfer_all() method will clean the DMAC register as implied by this patch.
2) dma_stop is called when the SPI core detects an error either returned from the transfer_one() callback or set in the SPI message status field. Both types of errors will be noticed by the dw_spi_dma_transfer_all() method.
3) dma_exit is called when either the SPI controller driver or the corresponding device is removed. In any case the SPI core will first flush the SPI messages pump kernel thread, so any pending or in-flight SPI transfers will be finished before that.
Due to all of that let's simplify the DW APB SSI DMA driver a bit and move the DMAC register cleanup to a single place in the dw_spi_dma_transfer_all() method. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-10-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Detach DMA transfer into a dedicated methodSerge Semin
In order to add an alternative method of DMA-based SPI transfer, first we need to detach the currently available one from the common code. Here we move the normal DMA-based SPI transfer execution functionality into a dedicated method. It will be utilized if either the DMA engine supports an unlimited number of SG entries or a Tx-only SPI transfer is requested. But currently just use it for any SPI transfer. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-9-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Remove DMA Tx-desc passing aroundSerge Semin
It's pointless to pass the Rx and Tx transfers' DMA Tx-descriptors around, since they are used in the Tx/Rx submit methods only. Instead just return the submission status from these methods. This alteration will make the code less complex. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-8-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Check DMA Tx-desc submission statusSerge Semin
We suggest adding a test of the dmaengine_submit() return value for errors. It was unnecessary while the driver was only expected to be utilized in tandem with the DW DMAC. But since the driver can now be used with any DMA engine, it might be useful to track errors on DMA submissions so we don't miss them and end up with unpredictable driver behaviour. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-7-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
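For instance, a hedged sketch of the check using the standard dmaengine helpers (the surrounding variable and channel names are assumptions):

	dma_cookie_t cookie;
	int ret;

	cookie = dmaengine_submit(txdesc);
	ret = dma_submit_error(cookie);
	if (ret) {
		dmaengine_terminate_sync(dws->txchan);
		return ret;
	}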
2020-09-29spi: dw-dma: Move DMA transfers submission to the channels prep methodsSerge Semin
Indeed we can freely move the dmaengine_submit() method invocation and the Tx and Rx busy flag setting into the DMA Tx/Rx prepare methods. Since the Tx/Rx preparation methods are now mainly used for DMA transfer submission, here we suggest renaming them to have the _submit_{r,t}x suffix instead. By applying this alteration, first, we implement another code preparation before adding the one-by-one DMA SG entry transmission; second, the dma_async_tx_descriptor descriptor is now used locally only in the new DMA transfer submission methods (this will be cleaned up a bit later); third, we make the generic transfer method more readable, as the submission, execution and wait procedures are now transparently split up instead of having a preparation, an intermixed submission/execution, and a wait procedure. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-6-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Check rx_buf availability in the xfer methodSerge Semin
Checking rx_buf for being NULL and returning NULL from the Rx-channel preparation method doesn't let us distinguish that situation from errors happening during the Rx SG-list preparation. So it's better to make sure that rx_buf is not NULL and that full-duplex communication is requested prior to calling the Rx preparation method. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-5-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29spi: dw-dma: Configure the DMA channels in dma_setupSerge Semin
Mainly this is a preparation patch before adding the one-by-one DMA SG entry transmission. But logically the Tx and Rx DMA channel setup should be performed in the dma_setup() callback anyway. So we'll move the DMA slave channels' src/dst burst lengths, address and address width configuration from the Tx/Rx channel preparation methods to dedicated functions and then make sure they are called at the DMA setup stage. Note we now make sure the return value of the dmaengine_slave_config() method doesn't indicate an error. That has been unnecessary in case the DW DMAC is utilized as the DMA engine, since its device_config() callback always returns zero (though it might change in the future). But since the DW APB SSI driver now supports any DMA back-end, we must make sure the DMA device configuration has been successful before proceeding with further setup. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-4-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
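A sketch of the kind of per-channel setup meant here, based on the generic dmaengine API (the burst length, bus width and field names are assumptions):

	struct dma_slave_config txconf = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr = dws->dma_addr,
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES,
		.dst_maxburst = 16,
		.device_fc = false,
	};
	int ret;

	ret = dmaengine_slave_config(dws->txchan, &txconf);
	if (ret)
		return ret;	/* don't assume device_config() always succeeds */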
2020-09-29spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specifiedSerge Semin
Since commit 46164fde6b78 ("spi: dw: Fix Rx-only DMA transfers"), if the DMA interface is enabled then a Tx-buffer must be available in each SPI transfer. It's required since, in order to activate the incoming data reception, either DMA or the CPU must be pushing data out to the SPI bus. But the DW APB SSI DMA driver code is still left in a state as if the Tx-buffer were optional, which is no longer true. Let's fix that so an error is returned if no Tx-buffer is detected, and DMA Tx is always enabled. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-3-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
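A minimal sketch of the intended guard (the exact error code is an assumption):

	if (!xfer->tx_buf)
		return -EINVAL;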
2020-09-29spi: dw-dma: Set DMA Level registers on initSerge Semin
Indeed the registers' content doesn't get cleared when the SPI controller is disabled and enabled. Max burst lengths aren't changed since the Rx and Tx DMA channels are requested at the init stage and are kept acquired until the device is removed. Obviously the SPI controller FIFO depth can't be changed. Due to all of that we can safely move the DMA Transmit and Receive data level registers' initialization to the SPI controller DMA init stage (when the SPI controller is being probed) instead of doing it for each SPI transfer when dma_setup is called. This shall speed up the DMA-based SPI transfer initialization a bit, particularly if the APB bus is relatively slow. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Link: https://lore.kernel.org/r/20200920112322.24585-2-Sergey.Semin@baikalelectronics.ru Signed-off-by: Mark Brown <broonie@kernel.org>
2020-09-29Merge tag 'phy-fixes-2-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy into usb-linusGreg Kroah-Hartman
Vinod writes: phy: Second round of fixes for 5.9 *) Fix of leak in TI phy driver * tag 'phy-fixes-2-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy: phy: ti: am654: Fix a leak in serdes_am654_probe()
2020-09-29arm64: Add support for PR_SPEC_DISABLE_NOEXEC prctl() optionWill Deacon
The PR_SPEC_DISABLE_NOEXEC option to the PR_SPEC_STORE_BYPASS prctl() allows the SSB mitigation to be enabled only until the next execve(), at which point the state will revert back to PR_SPEC_ENABLE and the mitigation will be disabled. Add support for PR_SPEC_DISABLE_NOEXEC on arm64. Reported-by: Anthony Steinhauser <asteinhauser@google.com> Signed-off-by: Will Deacon <will@kernel.org>
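From userspace the new option is requested in the same way as the existing PR_SPEC_STORE_BYPASS controls, e.g.:

	#include <sys/prctl.h>
	#include <linux/prctl.h>
	#include <stdio.h>

	int main(void)
	{
		/* Enable the SSB mitigation for this task until the next execve() */
		if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
			  PR_SPEC_DISABLE_NOEXEC, 0, 0))
			perror("PR_SPEC_DISABLE_NOEXEC");
		return 0;
	}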
2020-09-29arm64: Pull in task_stack_page() to Spectre-v4 mitigation codeWill Deacon
The kbuild robot reports that we're relying on an implicit inclusion to get a definition of task_stack_page() in the Spectre-v4 mitigation code, which is not always in place for some configurations:
| arch/arm64/kernel/proton-pack.c:329:2: error: implicit declaration of function 'task_stack_page' [-Werror,-Wimplicit-function-declaration]
|         task_pt_regs(task)->pstate |= val;
|         ^
| arch/arm64/include/asm/processor.h:268:36: note: expanded from macro 'task_pt_regs'
|  ((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
|                                    ^
| arch/arm64/kernel/proton-pack.c:329:2: note: did you mean 'task_spread_page'?
Add the missing include to fix the build error. Fixes: a44acf477220 ("arm64: Move SSBD prctl() handler alongside other spectre mitigation code") Reported-by: Anthony Steinhauser <asteinhauser@google.com> Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/202009260013.Ul7AD29w%lkp@intel.com Signed-off-by: Will Deacon <will@kernel.org>
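Presumably the fix just spells out the dependency, along the lines of:

	#include <linux/sched/task_stack.h>	/* task_stack_page() */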
2020-09-29KVM: arm64: Allow patching EL2 vectors even when KASLR is not enabledWill Deacon
Patching the EL2 exception vectors is integral to the Spectre-v2 workaround, where it can be necessary to execute CPU-specific sequences to nobble the branch predictor before running the hypervisor text proper. Remove the dependency on CONFIG_RANDOMIZE_BASE and allow the EL2 vectors to be patched even when KASLR is not enabled. Fixes: 7a132017e7a5 ("KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASE") Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/r/202009221053.Jv1XsQUZ%lkp@intel.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Get rid of arm64_ssbd_stateMarc Zyngier
Out with the old ghost, in with the new... Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Convert ARCH_WORKAROUND_2 to arm64_get_spectre_v4_state()Marc Zyngier
Convert the KVM WA2 code to using the Spectre infrastructure, making the code much more readable. It also allows us to take SSBS into account for the mitigation. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Get rid of kvm_arm_have_ssbd()Marc Zyngier
kvm_arm_have_ssbd() is now completely unused, get rid of it. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Simplify handling of ARCH_WORKAROUND_2Marc Zyngier
Owing to the fact that the host kernel is always mitigated, we can drastically simplify the WA2 handling by keeping the mitigation state ON when entering the guest. This means the guest is either unaffected or not mitigated. This results in a nice simplification of the mitigation space, and the removal of a lot of code that was never really used anyway. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Rewrite Spectre-v4 mitigation codeWill Deacon
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Move SSBD prctl() handler alongside other spectre mitigation codeWill Deacon
As part of the spectre consolidation effort to shift all of the ghosts into their own proton pack, move all of the horrible SSBD prctl() code out of its own 'ssbd.c' file. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Rename ARM64_SSBD to ARM64_SPECTRE_V4Will Deacon
In a similar manner to the renaming of ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2, rename ARM64_SSBD to ARM64_SPECTRE_V4. This isn't _entirely_ accurate, as we also need to take into account the interaction with SSBS, but that will be taken care of in subsequent patches. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Treat SSBS as a non-strict system featureWill Deacon
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Group start_thread() functions togetherWill Deacon
The is_ttbrX_addr() functions have somehow ended up in the middle of the start_thread() functions, so move them out of the way to keep the code readable. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Set CSV2 for guests on hardware unaffected by Spectre-v2Marc Zyngier
If the system is not affected by Spectre-v2, then advertise to the KVM guest that it is not affected, without the need for a safelist in the guest. Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Rewrite Spectre-v2 mitigation codeWill Deacon
The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain. This is largely due to it being written hastily, without much clue as to how things would pan out, and also because it ends up mixing policy and state in such a way that it is very difficult to figure out what's going on. Rewrite the Spectre-v2 mitigation so that it clearly separates state from policy and follows a more structured approach to handling the mitigation. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Introduce separate file for spectre mitigations and reportingWill Deacon
The spectre mitigation code is spread over a few different files, which makes it both hard to follow, but also hard to remove it should we want to do that in future. Introduce a new file for housing the spectre mitigations, and populate it with the spectre-v1 reporting code to start with. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2Will Deacon
For better or worse, the world knows about "Spectre" and not about "Branch predictor hardening". Rename ARM64_HARDEN_BRANCH_PREDICTOR to ARM64_SPECTRE_V2 as part of moving all of the Spectre mitigations into their own little corner. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Simplify install_bp_hardening_cb()Will Deacon
Use is_hyp_mode_available() to detect whether or not we need to patch the KVM vectors for branch hardening, which avoids the need to take the vector pointers as parameters. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29KVM: arm64: Replace CONFIG_KVM_INDIRECT_VECTORS with CONFIG_RANDOMIZE_BASEWill Deacon
The removal of CONFIG_HARDEN_BRANCH_PREDICTOR means that CONFIG_KVM_INDIRECT_VECTORS is synonymous with CONFIG_RANDOMIZE_BASE, so replace it. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Remove Spectre-related CONFIG_* optionsWill Deacon
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by: Will Deacon <will@kernel.org>
2020-09-29arm64: Run ARCH_WORKAROUND_2 enabling code on all CPUsMarc Zyngier
Commit 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") changed the way we deal with per-CPU errata by only calling the .matches() callback until one CPU is found to be affected. At this point, .matches() stops being called, and .cpu_enable() will be called on all CPUs. This breaks the ARCH_WORKAROUND_2 handling, as only a single CPU will be mitigated. In order to address this, forcefully call the .matches() callback from a .cpu_enable() callback, which brings us back to the original behaviour. Fixes: 606f8e7b27bf ("arm64: capabilities: Use linear array for detection and verification") Cc: <stable@vger.kernel.org> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
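A hedged sketch of the pattern described (the callback and helper names are assumptions; .matches() and SCOPE_LOCAL_CPU are part of the arm64 capability framework):

	static void cpu_enable_ssbd_mitigation(const struct arm64_cpu_capabilities *cap)
	{
		/* Re-run the per-CPU match so that every CPU, not just the first
		 * affected one, actually gets the workaround applied. */
		if (cap->matches(cap, SCOPE_LOCAL_CPU))
			apply_ssbd_workaround();	/* hypothetical helper */
	}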
2020-09-29bpf, powerpc: Fix misuse of fallthrough in bpf_jit_comp()He Zhe
The user-defined label following "fallthrough" is not considered by GCC and causes a build failure:
kernel-source/include/linux/compiler_attributes.h:208:41: error: attribute 'fallthrough' not preceding a case label or default label [-Werror]
  208 | # define fallthrough __attribute__((__fallthrough__))
      |                      ^~~~~~~~~~~~~
Fixes: df561f6688fe ("treewide: Use fallthrough pseudo-keyword") Signed-off-by: He Zhe <zhe.he@windriver.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Gustavo A. R. Silva <gustavoars@kernel.org> Link: https://lore.kernel.org/bpf/20200928090023.38117-1-zhe.he@windriver.com
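A small standalone illustration of the constraint (not the kernel code): the attribute is only accepted when the next statement is a case or default label, so a user-defined label after it trips -Werror.

	#define fallthrough __attribute__((__fallthrough__))

	int classify(int x)
	{
		switch (x) {
		case 0:
			x++;
			fallthrough;	/* fine: followed by a case label */
		case 1:
			return x;
		default:
			if (x < 0)
				goto out;
			fallthrough;	/* warning/error: followed by a plain label */
		out:
			return -x;
		}
	}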
2020-09-29null_blk: add support for max open/active zone limit for zoned devicesNiklas Cassel
Add support for user space to set a max open zone and a max active zone limit via configfs. The default values are 0 == no limit. Call the block layer API functions used for exposing the configured limits to sysfs. Add accounting in null_blk_zoned so that these new limits are respected. Performing an operation that would exceed these limits results in a standard I/O error. A max open zone limit exists in the ZBC standard. While null_blk_zoned is used to test the Zoned Block Device model in Linux, when it comes to differences between ZBC and ZNS, null_blk_zoned mostly follows ZBC. Therefore, implement the manage open zone resources function from ZBC, but additionally add support for max active zones. This enables user space not only to test against a device with an open zone limit, but also to test against a device with an active zone limit. Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com> Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-09-29block-mq: fix comments in blk_mq_queue_tag_busy_iteryangerkun
Commit f5bbbbe4d635 ("blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter") introduced a bug where we may sleep inside an RCU read-side critical section. Then commit 530ca2c9bd69 ("blk-mq: Allow blocking queue tag iter callbacks") fixed it by taking a reference on the request_queue. And commit a9a808084d6a ("block: Remove the synchronize_rcu() call from __blk_mq_update_nr_hw_queues()") removed the synchronize_rcu() from __blk_mq_update_nr_hw_queues(). We need to update the now-confusing comments in blk_mq_queue_tag_busy_iter. Signed-off-by: yangerkun <yangerkun@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-09-29blk-mq: call commit_rqs while list empty but error happenyangerkun
Blk-mq should call commit_rqs once 'bd.last != true' and no more requests will come (so that, e.g., virtscsi can kick the virtqueue). We already do that in blk_mq_dispatch_rq_list()/blk_mq_try_issue_list_directly() while the list is not empty and 'queued > 0'. However, the same situation can occur when the last request in the list calls queue_rq and returns an error like BLK_STS_IOERR, which does not requeue the request: the list ends up empty but commit_rqs still needs to be called (otherwise the request for virtscsi will stay timed out until another request kicks the virtqueue). We found this problem by running an fsstress test while quickly and repeatedly taking a virtscsi device offline/online. Fixes: d666ba98f849 ("blk-mq: add mq_ops->commit_rqs()") Reported-by: zhangyi (F) <yi.zhang@huawei.com> Signed-off-by: yangerkun <yangerkun@huawei.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
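A hedged sketch of the dispatch-side change (names follow blk-mq, but the exact condition is an assumption): once something was queued, ring the doorbell even if the list ended up empty because the last request failed.

	/* at the end of a blk_mq_dispatch_rq_list()-style loop */
	if ((!list_empty(list) || errors) && q->mq_ops->commit_rqs && queued)
		q->mq_ops->commit_rqs(hctx);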
2020-09-29io_uring: fix async buffered reads when readahead is disabledHao Xu
The async buffered reads feature is not working when readahead is turned off. There are two things of concern:
- when doing a retry in io_read, not only the IOCB_WAITQ flag but also the IOCB_NOWAIT flag is still set, which makes it go to the would_block phase in generic_file_buffered_read() and then return -EAGAIN. After that, the io-wq thread work is queued, and the async read is later done in the old way.
- even if we remove IOCB_NOWAIT when doing the retry, the feature is still not running properly, since generic_file_buffered_read() goes to lock_page_killable() after calling mapping->a_ops->readpage() to do the IO, thus causing the process to sleep.
Fixes: 1a0a7853b901 ("mm: support async buffered reads in generic_file_buffered_read()") Fixes: 3b2a4439e0ae ("io_uring: get rid of kiocb_wait_page_queue_init()") Signed-off-by: Hao Xu <haoxu@linux.alibaba.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
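A hedged sketch of the retry-path adjustment implied above (field and helper names are assumptions):

	/* Retry via the page waitqueue instead of the non-blocking fast path */
	kiocb->ki_flags |= IOCB_WAITQ;
	kiocb->ki_flags &= ~IOCB_NOWAIT;
	kiocb->ki_waitq = &wait;	/* struct wait_page_queue set up earlier */

	ret = io_iter_do_read(req, iter);	/* hypothetical wrapper around ->read_iter() */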
2020-09-29efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failureArd Biesheuvel
Currently, on arm64, we abort on any failure from efi_get_random_bytes() other than EFI_NOT_FOUND when it comes to setting the physical seed for KASLR, but ignore such failures when obtaining the seed for virtual KASLR or for early seeding of the kernel's entropy pool via the config table. This is inconsistent, and may lead to unexpected boot failures. So let's permit any failure for the physical seed, and simply report the error code if it does not equal EFI_NOT_FOUND. Cc: <stable@vger.kernel.org> # v5.8+ Reported-by: Heinrich Schuchardt <xypron.glpk@gmx.de> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
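A sketch of the more tolerant handling (efi_get_random_bytes() and the EFI status codes are the stub's own; the message wording is an assumption):

	status = efi_get_random_bytes(sizeof(phys_seed), (u8 *)&phys_seed);
	if (status == EFI_NOT_FOUND) {
		efi_info("EFI_RNG_PROTOCOL unavailable, KASLR will be disabled\n");
	} else if (status != EFI_SUCCESS) {
		efi_err("efi_get_random_bytes() failed (0x%lx)\n", status);
	}
	/* in either case, carry on booting without a physical KASLR seed */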
2020-09-29gpio: mxc: Support module buildAnson Huang
Change config to tristate, add module device table, module author, description and license to support module build for the i.MX GPIO driver. As this is a SoC GPIO module, it provides common functions for most of the peripheral devices, such as GPIO pin control, a secondary interrupt controller for GPIO pin IRQs, etc.; without the GPIO driver, most of the peripheral devices will NOT work properly. So the GPIO module is similar to the clock and pinctrl drivers in that it should be loaded ONCE and never unloaded. Since the MXC GPIO driver needs an init function to register syscore ops once, it still uses subsys_initcall(), NOT module_platform_driver(). Signed-off-by: Anson Huang <Anson.Huang@nxp.com> Link: https://lore.kernel.org/r/1600320829-1453-1-git-send-email-Anson.Huang@nxp.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
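The module-build plumbing referred to here typically looks like the following (the compatible string and the descriptive strings are illustrative assumptions):

	static const struct of_device_id mxc_gpio_dt_ids[] = {
		{ .compatible = "fsl,imx35-gpio" },
		{ /* sentinel */ }
	};
	MODULE_DEVICE_TABLE(of, mxc_gpio_dt_ids);

	MODULE_AUTHOR("NXP");	/* placeholder author string */
	MODULE_DESCRIPTION("i.MX GPIO driver");
	MODULE_LICENSE("GPL v2");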
2020-09-29pinctrl: bcm: fix kconfig dependency warning when !GPIOLIBNecip Fazil Yildiran
When PINCTRL_BCM2835 is enabled and GPIOLIB is disabled, it results in the following Kbuild warning:
WARNING: unmet direct dependencies detected for GPIOLIB_IRQCHIP
  Depends on [n]: GPIOLIB [=n]
  Selected by [y]:
  - PINCTRL_BCM2835 [=y] && PINCTRL [=y] && OF [=y] && (ARCH_BCM2835 [=n] || ARCH_BRCMSTB [=n] || COMPILE_TEST [=y])
The reason is that PINCTRL_BCM2835 selects GPIOLIB_IRQCHIP without depending on or selecting GPIOLIB while GPIOLIB_IRQCHIP is subordinate to GPIOLIB. Honor the kconfig menu hierarchy to remove kconfig dependency warnings. Fixes: 85ae9e512f43 ("pinctrl: bcm2835: switch to GPIOLIB_IRQCHIP") Signed-off-by: Necip Fazil Yildiran <fazilyildiran@gmail.com> Link: https://lore.kernel.org/r/20200914144025.371370-1-fazilyildiran@gmail.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-09-29dt-bindings: gpio: convert bindings for Maxim MAX732x family to dtschemaKrzysztof Kozlowski
Convert the Maxim MAX732x family of GPIO expanders bindings to device tree schema by merging it with existing PCA95xx schema. These are quite similar so merging reduces duplication. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200916155715.21009-3-krzk@kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-09-29dt-bindings: gpio: convert bindings for NXP PCA953x family to dtschemaKrzysztof Kozlowski
Convert the NXP PCA953x family of GPIO expanders bindings to device tree schema. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200916155715.21009-2-krzk@kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-09-29dt-bindings: gpio: fsl-imx-gpio: add gpio-line-namesKrzysztof Kozlowski
Describe common "gpio-line-names" property to fix dtbs_check warnings like: arch/arm/boot/dts/imx53-m53menlo.dt.yaml: gpio@53f84000: 'gpio-line-names' does not match any of the regexes: '^(hog-[0-9]+|.+-hog(-[0-9]+)?)$', 'pinctrl-[0-9]+' Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200920195848.27075-3-krzk@kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-09-29dt-bindings: gpio: fsl-imx-gpio: add i.MX ARMv6 and ARMv7 compatiblesKrzysztof Kozlowski
Several DTSes with ARMv6 and ARMv7 i.MX SoCs introduce their own compatibles so add them to fix dtbs_check warnings like: arch/arm/boot/dts/imx35-pdk.dt.yaml: gpio@53fa4000: compatible: ['fsl,imx35-gpio', 'fsl,imx31-gpio'] is not valid under any of the given schemas arch/arm/boot/dts/imx51-babbage.dt.yaml: gpio@73f90000: compatible: ['fsl,imx51-gpio', 'fsl,imx35-gpio'] is not valid under any of the given schemas Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200920195848.27075-2-krzk@kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-09-29dt-bindings: gpio: pl061: add gpio-line-namesKrzysztof Kozlowski
Describe common "gpio-line-names" property to fix dtbs_check warnings like: arch/arm64/boot/dts/hisilicon/hi3670-hikey970.dt.yaml: gpio@e8a0b000: 'gpio-line-names' does not match any of the regexes: 'pinctrl-[0-9]+' Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200920195848.27075-1-krzk@kernel.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-09-29Merge tag 'gpio-fixes-for-v5.9-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux into fixesLinus Walleij
gpio: fixes for v5.9-rc7
- fix uninitialized variable in gpio-pca953x
- enable all 160 lines and fix interrupt configuration in gpio-aspeed-gpio
- fix ast2600 bank properties in gpio-aspeed
2020-09-29PM / devfreq: tegra30: Improve initial hardware resettingDmitry Osipenko
It's safe to enable the ACTMON clock at any time during driver probing, even if we don't know the state of the hardware, because it's used only for collecting and processing stats, and the interrupt is kept disabled. This allows us to slightly improve the code which performs the initial hardware reset by making use of a single reset_control_reset() instead of an assert/deassert pair. Secondly, a potential error from the reset-control API is now handled nicely. Signed-off-by: Dmitry Osipenko <digetx@gmail.com> Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
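A hedged sketch of the described init sequence (the 'tegra' structure fields and the error message are assumptions; the clk and reset-control calls are the generic APIs):

	err = clk_prepare_enable(tegra->clock);
	if (err)
		return err;

	/* One self-clearing reset instead of an assert/deassert pair,
	 * with the reset-control error actually checked this time. */
	err = reset_control_reset(tegra->reset);
	if (err) {
		dev_err(&pdev->dev, "Failed to reset hardware: %d\n", err);
		clk_disable_unprepare(tegra->clock);
		return err;
	}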
2020-09-29PM / devfreq: event: Change prototype of devfreq_event_get_edev_by_phandle functionChanwoo Choi
Previously, the devfreq core supported the 'devfreq-events' property in order to get the devfreq-event device by phandle. But the 'devfreq-events' property name is not appropriate for the devicetree binding because it doesn't refer to any h/w attribute. The devfreq-event core now hands over the right to decide the property name used for getting the devfreq-event device to the devicetree. Each devfreq-event driver will decide the property name in its devicetree binding and then pass its own property name to the devfreq_event_get_edev_by_phandle function. Also change the prototype of the devfreq_event_get_edev_count function because it used the deprecated 'devfreq-events' property. Acked-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
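A hedged sketch of what the reworked lookup API implies (the exact argument order is an assumption):

	/* Callers now pass the property name they define in their own DT binding */
	struct devfreq_event_dev *
	devfreq_event_get_edev_by_phandle(struct device *dev,
					  const char *phandle_name, int index);

	int devfreq_event_get_edev_count(struct device *dev,
					 const char *phandle_name);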