2020-01-21Merge tag 'rds-odp-for-5.5' of ↵David S. Miller
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma Leon Romanovsky says: ==================== Use ODP MRs for kernel ULPs The following series extends MR creation routines to allow creation of user MRs through kernel ULPs as a proxy. The immediate use case is to allow RDS to work over FS-DAX, which requires ODP (on-demand-paging) MRs to be created, and such MRs were not possible to create prior to this series. The first part of this patchset extends RDMA with a special verb, ib_reg_user_mr(). The common use case for this function is a userspace application that allocates memory for HCA access, while the responsibility to register the memory at the HCA is on a kernel ULP. This ULP acts as an agent for the userspace application. The second part provides advise-MR functionality for ULPs. This is an integral part of ODP flows and is used to trigger page faults in advance to prepare memory before running the working set. The third part is the actual user of those in-kernel APIs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-01-21dt-bindings: fsl-imx-sdma: Add i.MX8MM/i.MX8MN/i.MX8MP compatible stringAnson Huang
Add imx8mm/imx8mn/imx8mp sdma support. Signed-off-by: Anson Huang <Anson.Huang@nxp.com> Acked-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/1578893602-14395-1-git-send-email-Anson.Huang@nxp.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dmaengine: zynqmp_dma: fix burst length configurationMatthias Fend
Since the dma engine expects the burst length register content as power of 2 value, the burst length needs to be converted first. Additionally add a burst length range check to avoid corrupting unrelated register bits. Signed-off-by: Matthias Fend <matthias.fend@wolfvision.net> Link: https://lore.kernel.org/r/20200115102249.24398-1-matthias.fend@wolfvision.net Signed-off-by: Vinod Koul <vkoul@kernel.org>
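A rough sketch of the conversion described above (register encoding, limits and names are assumptions for illustration, not the driver's exact code):

    #include <linux/kernel.h>
    #include <linux/log2.h>
    #include <linux/types.h>

    #define ZDMA_MIN_BURST_LEN 1U	/* assumed hardware limits */
    #define ZDMA_MAX_BURST_LEN 64U

    /* Clamp the requested burst length into the supported range, then encode
     * it as a power-of-2 exponent, which is (assumed here) what the register
     * field takes; clamping also keeps unrelated register bits untouched. */
    static u32 zynqmp_dma_burst_to_field(u32 burst)
    {
    	burst = clamp(burst, ZDMA_MIN_BURST_LEN, ZDMA_MAX_BURST_LEN);
    	return ilog2(burst);	/* e.g. 64 -> 6; non-powers of 2 round down */
    }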
2020-01-21ipv6: sr: remove SKB_GSO_IPXIP6 on End.D* actionsYuki Taguchi
After LRO/GRO is applied, SRv6-encapsulated packets have the SKB_GSO_IPXIP6 feature flag, and this flag must be removed right after the decapsulation procedure. Currently, the SKB_GSO_IPXIP6 flag is not removed on End.D* actions, which creates an inconsistent packet state, that is, normal TCP/IP packets carrying the SKB_GSO_IPXIP6 flag. This behavior can cause an unexpected fallback to GSO on routing to netdevices that do not support SKB_GSO_IPXIP6. For example, on inter-VRF forwarding, decapsulated packets are separated into small packets by GSO because VRF devices do not support TSO for packets with the SKB_GSO_IPXIP6 flag, and this degrades forwarding performance. This patch removes encapsulation-related GSO flags from the skb right after the End.D* action is applied. Fixes: d7a669dd2f8b ("ipv6: sr: add helper functions for seg6local") Signed-off-by: Yuki Taguchi <tagyounit@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
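A minimal sketch of the idea (the helper name is hypothetical, and the actual patch may clear additional encapsulation-related GSO state):

    #include <linux/skbuff.h>

    /* After stripping the outer IPv6/SRH encapsulation, drop the tunnel GSO
     * type so the now-plain TCP/IP packet is no longer treated as
     * IPXIP6-encapsulated on the forwarding path. */
    static void seg6_clear_encap_gso(struct sk_buff *skb)	/* hypothetical name */
    {
    	if (skb_is_gso(skb))
    		skb_shinfo(skb)->gso_type &= ~SKB_GSO_IPXIP6;
    }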
2020-01-21arm64: Kconfig: select HAVE_FUTEX_CMPXCHGVladimir Murzin
arm64 provides an always-working implementation of futex_atomic_cmpxchg_inatomic(), so there is no need to check it at runtime. Reported-by: Piyush swami <Piyush.swami@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-21erofs: clean up z_erofs_submit_queue()Gao Xiang
A label and extra variables will be eliminated, which is cleaner. Link: https://lore.kernel.org/r/20200121064819.139469-1-gaoxiang25@huawei.com Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
2020-01-21erofs: fold in postsubmit_is_all_bypassed()Gao Xiang
There is no need for such a separate helper since the cache strategy compile-time configs were changed into runtime options in v5.4. No logic changes. Link: https://lore.kernel.org/r/20200121064747.138987-1-gaoxiang25@huawei.com Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
2020-01-21dmaengine: sun4i: Add support for cyclic requests with dedicated DMAStefan Mavrodiev
Currently cyclic transfers can be used only with normal DMAs. They can be used by the pcm_dmaengine module, which is required for implementing sound with the sun4i-hdmi encoder. This is because the controller can accept audio only from a dedicated DMA. This patch enables cyclic transfers on dedicated DMAs, following the existing style of the scatter/gather type transfers. Signed-off-by: Stefan Mavrodiev <stefan@olimex.com> Acked-by: Maxime Ripard <mripard@kernel.org> Link: https://lore.kernel.org/r/20200110141140.28527-2-stefan@olimex.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21Merge branch 'master' of ↵David S. Miller
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec Steffen Klassert says: ==================== pull request (net): ipsec 2020-01-21 1) Fix packet tx through bpf_redirect() for xfrm and vti interfaces. From Nicolas Dichtel. 2) Do not confirm the neighbor when doing a pmtu update on a virtual xfrm interface. From Xu Wang. 3) Support output_mark for offload ESP packets; this was forgotten when output_mark was added initially. From Ulrich Weber. Please pull or let me know if there are problems. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2020-01-21dmaengine: fsl-qdma: fix duplicated argument to &&Chen Zhou
There is a duplicated argument to && in fsl_qdma_free_chan_resources(), which looks like a typo: the pointer fsl_queue->desc_pool also needs a NULL check. Fix it. Detected with coccinelle. Fixes: b092529e0aa0 ("dmaengine: fsl-qdma: Add qDMA controller driver for Layerscape SoCs") Signed-off-by: Chen Zhou <chenzhou10@huawei.com> Reviewed-by: Peng Ma <peng.ma@nxp.com> Tested-by: Peng Ma <peng.ma@nxp.com> Link: https://lore.kernel.org/r/20200120125843.34398-1-chenzhou10@huawei.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
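The bug class, reduced to a self-contained illustration (struct and field names are simplified stand-ins, not the driver's actual code):

    struct queue_pools {	/* illustrative stand-in for fsl-qdma's queue struct */
    	void *comp_pool;
    	void *desc_pool;
    };

    static int pools_allocated(const struct queue_pools *q)
    {
    	/* buggy form: "q->comp_pool && q->comp_pool" tests one pointer twice,
    	 * so desc_pool is never NULL-checked; the fix tests both pools. */
    	return q->comp_pool && q->desc_pool;
    }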
2020-01-21dmaengine: ti: k3-psil: make symbols staticPeter Ujfalusi
Fix the following warnings by making these symbols static: drivers/dma/ti/k3-psil-j721e.c:62:16: warning: symbol 'j721e_src_ep_map' was not declared. Should it be static? drivers/dma/ti/k3-psil-j721e.c:172:16: warning: symbol 'j721e_dst_ep_map' was not declared. Should it be static? drivers/dma/ti/k3-psil-j721e.c:216:20: warning: symbol 'j721e_ep_map' was not declared. Should it be static? CC drivers/dma/ti/k3-psil-j721e.o drivers/dma/ti/k3-psil-am654.c:52:16: warning: symbol 'am654_src_ep_map' was not declared. Should it be static? drivers/dma/ti/k3-psil-am654.c:127:16: warning: symbol 'am654_dst_ep_map' was not declared. Should it be static? drivers/dma/ti/k3-psil-am654.c:169:20: warning: symbol 'am654_ep_map' was not declared. Should it be static? Reported-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20200121070104.4393-1-peter.ujfalusi@ti.com [vkoul: updated patch title] Signed-off-by: Vinod Koul <vkoul@kernel.org>
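The change itself is mechanical; roughly (element type and contents are assumptions taken from the warnings above, not verified against the file):

    /* Adding "static" gives the endpoint map internal linkage, which silences
     * the sparse warnings since the array is only referenced from this file. */
    static struct psil_ep j721e_src_ep_map[] = {
    	/* ... PSI-L source endpoint entries ... */
    };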
2020-01-21drm/i915: Align engine->uabi_class/instance with i915_drm.hTvrtko Ursulin
In our ABI we have defined I915_ENGINE_CLASS_INVALID_NONE and I915_ENGINE_CLASS_INVALID_VIRTUAL as negative values, which creates implicit coupling with the type widths used in, also ABI, struct i915_engine_class_instance. One place where we export engine->uabi_class as I915_ENGINE_CLASS_INVALID_VIRTUAL is from our tracepoints. Because the type of the former is u8, in contrast to the u16 defined in the ABI, 254 will be returned instead of the 65534 which userspace would legitimately expect. Another place is I915_CONTEXT_PARAM_ENGINES. Therefore we need to align the type used to store the engine ABI class and instance. v2: * Update the commit message mentioning get_engines and cc stable. (Chris) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: 6d06779e8672 ("drm/i915: Load balancing across a virtual engine") Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: <stable@vger.kernel.org> # v5.3+ Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200116134508.25211-1-tvrtko.ursulin@linux.intel.com (cherry picked from commit 0b3bd0cdc329a1e2e00995cffd61aacf58c87cb4) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
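The truncation is easy to see with the widths written out; a userspace-style worked example, using -2 as the value of I915_ENGINE_CLASS_INVALID_VIRTUAL as described in the commit:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	int invalid_virtual = -2;	/* I915_ENGINE_CLASS_INVALID_VIRTUAL */
    	unsigned int as_u16 = (uint16_t)invalid_virtual;
    	unsigned int as_u8  = (uint8_t)invalid_virtual;

    	/* prints "65534 254": the u16 ABI value vs. what a u8 field exports */
    	printf("%u %u\n", as_u16, as_u8);
    	return 0;
    }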
2020-01-21drm/i915/userptr: fix size calculationMatthew Auld
If we create a rather large userptr object (e.g. 1ULL << 32) we might shift past the type-width of num_pages: (int)num_pages << PAGE_SHIFT, resulting in a totally bogus sg_table, which fortunately will eventually manifest as: gen8_ppgtt_insert_huge:463 GEM_BUG_ON(iter->sg->length < page_size) kernel BUG at drivers/gpu/drm/i915/gt/gen8_ppgtt.c:463! v2: more unsigned long prefer I915_GTT_PAGE_SIZE Fixes: 5cc9ed4b9a7a ("drm/i915: Introduce mapping of user pages into video memory (userptr) ioctl") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200117132413.1170563-2-matthew.auld@intel.com (cherry picked from commit 8e78871bc1e5efec22c950d3fd24ddb63d4ff28a) Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
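A simplified worked example of the overflow (a PAGE_SHIFT of 12 is assumed; this is not the driver's actual expression):

    /* With 4 KiB pages, a 4 GiB userptr object has num_pages == 1 << 20. */
    static unsigned long userptr_obj_size(unsigned long num_pages)
    {
    	/* buggy: "(int)num_pages << 12" performs the shift in a 32-bit int,
    	 * which overflows once num_pages reaches 1 << 19 and wraps for larger
    	 * objects; keeping the whole calculation in unsigned long is the fix */
    	return num_pages << 12;	/* 1UL << 32 on 64-bit, no truncation */
    }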
2020-01-21ALSA: hda/hdmi - add retry logic to parse_intel_hdmi()Kai Vehmanen
The initial snd_hda_get_sub_node() can fail on certain devices (e.g. some Chromebook models using Intel GLK). The failure rate is very low, but as this is part of the probe process, end-user impact is high. In observed cases, related hardware status registers have expected values, but the node query still fails. Retrying the node query does seem to help, so fix the problem by adding retry logic to the query. This does not impact non-Intel platforms. BugLink: https://github.com/thesofproject/linux/issues/1642 Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com> Reviewed-by: Takashi Iwai <tiwai@suse.de> Link: https://lore.kernel.org/r/20200120160117.29130-4-kai.vehmanen@linux.intel.com Signed-off-by: Takashi Iwai <tiwai@suse.de>
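A hedged sketch of the retry shape (the wrapper name and retry budget are assumptions, not the patch's exact values):

    #include <sound/hda_codec.h>

    #define SUB_NODE_RETRIES 3	/* assumed retry budget */

    static int query_sub_nodes_with_retry(struct hda_codec *codec, hda_nid_t nid,
    				      hda_nid_t *start_id)
    {
    	int i, nodes = 0;

    	for (i = 0; i < SUB_NODE_RETRIES; i++) {
    		nodes = snd_hda_get_sub_nodes(codec, nid, start_id);
    		if (nodes > 0)
    			break;	/* query succeeded, no further retries needed */
    	}
    	return nodes;
    }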
2020-01-21ALSA: hda: No preallocation on x86 platformsTakashi Iwai
Like many other drivers, HD-audio drivers also do PCM buffer preallocation to assure that the buffer pages are allocated at the early boot stage. This step is useful for platforms that may fail to allocate the PCM hardware buffers -- which is mostly the case for either large contiguous pages or a specific DMA mask (like emu10k1). OTOH, when a buffer is allocated as an SG-buffer and the DMA mask is either 32 or 64 bits, the allocation almost never fails unless it hits a real OOM situation. In such a case, we don't inevitably need the preallocation, unlike the cases above. That said, we may drop the preallocation for HD-audio, which does allocate via SG-buffers, and this patch achieves that. However, there is one caveat: the buffer allocation behavior depends on CONFIG_SND_DMA_SGBUF, and it falls back to contiguous pages when that is not set. And currently this SG buffer allocation is enabled only on x86 platforms. So, covering those fall-outs, the patch adjusts CONFIG_SND_HDA_PREALLOC_SIZE depending on the condition, and keeps the old behavior as-is for non-x86 platforms. On x86, the kconfig item is no longer adjustable but always set to zero to disable the preallocation. You can still enable preallocation via the procfs interface at any time later, too. Link: https://lore.kernel.org/r/20200120124423.11862-2-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2020-01-21ALSA: pcm: Set per-card upper limit of PCM buffer allocationsTakashi Iwai
Currently, the available buffer allocation size for a PCM stream depends on the preallocated size; when a buffer has been preallocated, the max buffer size is set to that size, so that applications won't re-allocate too much memory. OTOH, when no preallocation is done, each substream may allocate an arbitrary size of buffer, as long as snd_pcm_hardware.buffer_bytes_max allows -- which can be quite high; HD-audio sets 1GB there. It means that the system may consume a high amount of pages for PCM buffers, and they are pinned and never swapped out. This can lead to OOM easily. For avoiding such a situation, this patch adds an upper limit per card. Each snd_pcm_lib_malloc_pages() and _free_pages() call is tracked, and an error is returned if the total amount of buffers goes over the defined upper limit. The default value is set to 32MB, which should be really large enough for usual operations. If larger buffers are needed for any specific usage, it can be adjusted (also dynamically) via the snd_pcm.max_alloc_per_card option. Setting it to zero means no check is performed, and again, an unlimited amount of buffers is allowed. Link: https://lore.kernel.org/r/20200120124423.11862-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
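A minimal sketch of the accounting idea (names are illustrative, not the patch's exact fields, and the real code also needs locking around the counter):

    #include <linux/errno.h>
    #include <linux/types.h>

    static unsigned long max_alloc_per_card = 32UL * 1024 * 1024;	/* 32 MB default, 0 = unlimited */

    /* Charge 'size' bytes against a card-wide total; the matching free path
     * subtracts the same amount again. */
    static int pcm_account_buffer(size_t *card_total_bytes, size_t size)
    {
    	if (max_alloc_per_card &&
    	    *card_total_bytes + size > max_alloc_per_card)
    		return -ENOMEM;		/* over the per-card budget */
    	*card_total_bytes += size;
    	return 0;
    }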
2020-01-21dmaengine: ti: k3-udma: Add glue layer for non DMAengine usersGrygorii Strashko
Certain users cannot use the DMAengine API right now due to missing features in the core. The prime example is networking. These users can use the glue layer interface to avoid misuse of the DMAengine API, and when the core gains the needed features they can be converted to use the generic API. The most prominent features the glue layer clients depend on: - most PSI-L native peripherals use extra rflow ranges on a receive channel, and depending on the peripheral's configuration, packets from a single free descriptor ring are going to be received to different receive rings - it is also possible to have different free descriptor rings per rflow, and an rflow can also support 4 additional free descriptor rings based on the size of the incoming packet - out-of-order completion of descriptors on a channel - when we have several queues to handle different priority packets, the descriptors will be completed 'out-of-order' - the notion of prep_slave_sg does not match what the streaming type of operation demands for networking - streaming type of operation - ability to fill the free descriptor ring with descriptors in anticipation of incoming traffic; when a packet arrives UDMAP will form a packet and give it to the client driver - the descriptors are not backed with exact-size data buffers, as we don't know the size of the packet we will receive, but act as a generic pool of buffers to be used by the receive channel - NAPI type of operation (polling instead of interrupt driven transfer) - without this we cannot sustain gigabit speeds, and we need to support NAPI - not to limit this to networking, but other high-performance operations Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-12-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dmaengine: ti: New driver for K3 UDMAPeter Ujfalusi
Split patch for review containing: defines, structs, io and low level functions and interrupt callbacks. DMA driver for Texas Instruments K3 NAVSS Unified DMA – Peripheral Root Complex (UDMA-P) The UDMA-P is intended to perform similar (but significantly upgraded) functions as the packet-oriented DMA used on previous SoC devices. The UDMA-P module supports the transmission and reception of various packet types. The UDMA-P is architected to facilitate the segmentation and reassembly of SoC DMA data structure compliant packets to/from smaller data blocks that are natively compatible with the specific requirements of each connected peripheral. Multiple Tx and Rx channels are provided within the DMA which allow multiple segmentation or reassembly operations to be ongoing. The DMA controller maintains state information for each of the channels which allows packet segmentation and reassembly operations to be time division multiplexed between channels in order to share the underlying DMA hardware. An external DMA scheduler is used to control the ordering and rate at which this multiplexing occurs for Transmit operations. The ordering and rate of Receive operations is indirectly controlled by the order in which blocks are pushed into the DMA on the Rx PSI-L interface. The UDMA-P also supports acting as both a UTC and UDMA-C for its internal channels. Channels in the UDMA-P can be configured to be either Packet-Based or Third-Party channels on a channel by channel basis. The initial driver supports: - MEM_TO_MEM (TR mode) - DEV_TO_MEM (Packet / TR mode) - MEM_TO_DEV (Packet / TR mode) - Cyclic (Packet / TR mode) - Metadata for descriptors Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-11-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dt-bindings: dma: ti: Add document for K3 UDMAPeter Ujfalusi
New binding document for Texas Instruments K3 NAVSS Unified DMA – Peripheral Root Complex (UDMA-P). UDMA-P is introduced as part of the K3 architecture and can be found in AM654 and j721e. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Reviewed-by: Rob Herring <robh@kernel.org> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-10-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dmaengine: ti: k3 PSI-L remote endpoint configurationPeter Ujfalusi
In the K3 architecture the DMA operates within threads. One end of the thread is UDMAP, the other is on the peripheral side. The UDMAP channel configuration depends on the needs of the remote endpoint and it can differ from peripheral to peripheral. This patch adds a database for am654 and j721e and a small API to fetch the PSI-L endpoint configuration from the database, which should only be used by the DMA driver(s). Another API is added for native peripherals to make it possible to pass a new configuration for the threads they are using, which is needed to be able to handle changes caused by, for example, different firmware being loaded for the peripheral. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-9-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dmaengine: ti: Add cppi5 header for K3 NAVSS/UDMAPeter Ujfalusi
The K3 DMA architecture uses CPPI5 (Communications Port Programming Interface) specified descriptors over the PSI-L bus within NAVSS. The header provides helpers and macros to work with these descriptors in a consistent way. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-8-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dmaengine: Add helper function to convert direction value to textPeter Ujfalusi
dmaengine_get_direction_text() can be useful when the direction is printed out. The text is easier to comprehend than the number. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-7-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
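A sketch of what such a helper looks like (the in-tree dmaengine_get_direction_text() may differ in details; a differently named local helper is shown to avoid implying this is the exact implementation):

    #include <linux/dmaengine.h>

    static inline const char *direction_to_text(enum dma_transfer_direction dir)
    {
    	switch (dir) {
    	case DMA_MEM_TO_MEM:	return "DMA_MEM_TO_MEM";
    	case DMA_MEM_TO_DEV:	return "DMA_MEM_TO_DEV";
    	case DMA_DEV_TO_MEM:	return "DMA_DEV_TO_MEM";
    	case DMA_DEV_TO_DEV:	return "DMA_DEV_TO_DEV";
    	default:		return "invalid";
    	}
    }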
2020-01-21dmaengine: Add support for reporting DMA cached data amountPeter Ujfalusi
DMA hardware can have a big cache or FIFO, and the amount of data sitting in the DMA fabric can be of interest to the clients. For example, in audio we want to know the delay in the data flow, and in case the DMA has a significantly large FIFO/cache, it can affect the latency/delay. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Reviewed-by: Tero Kristo <t-kristo@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-6-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21dmaengine: Add metadata_ops for dma_async_tx_descriptorPeter Ujfalusi
The metadata is best described as side-band data or parameters traveling alongside the data DMAd by the DMA engine. It is data which is understood only by the peripheral and the peripheral driver; the DMA engine sees it only as a data block and does not interpret it in any way. The metadata can be different per descriptor, as it is a parameter for the data being transferred. If the DMA supports per-descriptor metadata it can implement the attach, get_ptr/set_len callbacks. Client drivers must only use either attach or get_ptr/set_len to avoid misconfiguration. A client driver can check whether a given metadata mode is supported by the channel at probe time with dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT); dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_ENGINE); and based on this information can use either mode. Wrappers are also added for the metadata_ops. To be used in DESC_METADATA_CLIENT mode: dmaengine_desc_attach_metadata() To be used in DESC_METADATA_ENGINE mode: dmaengine_desc_get_metadata_ptr() dmaengine_desc_set_metadata_len() Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Reviewed-by: Tero Kristo <t-kristo@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-5-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
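A hedged client-side usage sketch built from the wrappers named above (the helper name, buffers and error handling are illustrative, not from the patch):

    #include <linux/dmaengine.h>
    #include <linux/err.h>
    #include <linux/string.h>

    static int fill_desc_metadata(struct dma_chan *chan,
    			      struct dma_async_tx_descriptor *desc,
    			      void *md_buf, size_t md_len)
    {
    	size_t payload_len, max_len;
    	void *ptr;

    	/* DESC_METADATA_CLIENT: attach a client-owned metadata buffer. */
    	if (dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT))
    		return dmaengine_desc_attach_metadata(desc, md_buf, md_len);

    	/* DESC_METADATA_ENGINE: write into the engine-provided area instead. */
    	if (dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_ENGINE)) {
    		ptr = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);
    		if (IS_ERR(ptr))
    			return PTR_ERR(ptr);
    		if (md_len > max_len)
    			return -EINVAL;
    		memcpy(ptr, md_buf, md_len);
    		return dmaengine_desc_set_metadata_len(desc, md_len);
    	}

    	return -EINVAL;	/* no metadata support on this channel */
    }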
2020-01-21dmaengine: doc: Add sections for per descriptor metadata supportPeter Ujfalusi
Update the provider and client documentation with details about the metadata support. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Reviewed-by: Tero Kristo <t-kristo@ti.com> Tested-by: Keerthy <j-keerthy@ti.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Link: https://lore.kernel.org/r/20191223110458.30766-4-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21Merge TI ringacc driver from SantoshVinod Koul
This is a dependency of the new TI ringacc dmaengine drivers. Merge tag 'drivers_soc_for_5.6' into topic/ti SOC: TI Keystone Ring Accelerator driver The Ring Accelerator (RINGACC or RA) provides hardware acceleration to enable straightforward passing of work between a producer and a consumer. There is one RINGACC module per NAVSS on TI AM65x SoCs. Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-01-21scsi: RDMA/isert: Fix a recently introduced regression related to logoutBart Van Assche
iscsit_close_connection() calls isert_wait_conn(). Due to commit e9d3009cb936 both functions call target_wait_for_sess_cmds() although that last function should be called only once. Fix this by removing the target_wait_for_sess_cmds() call from isert_wait_conn() and by only calling isert_wait_conn() after target_wait_for_sess_cmds(). Fixes: e9d3009cb936 ("scsi: target: iscsi: Wait for all commands to finish before freeing a session"). Link: https://lore.kernel.org/r/20200116044737.19507-1-bvanassche@acm.org Reported-by: Rahul Kundu <rahul.kundu@chelsio.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Acked-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2020-01-20scsi: fnic: do not queue commands during fwresetHannes Reinecke
When a link is going down the driver will be calling fnic_cleanup_io(), which will traverse all commands and call 'done' for each found command. While the traversal is handled under the host_lock, calling 'done' happens after the host_lock has been dropped. As fnic_queuecommand_lck() is called with the host_lock held, it might well pick the command being selected for abortion by the above routine and enqueue it for sending, but then 'done' is called on that very command by the above routine. Which of course confuses the hell out of the scsi midlayer. So fix this by not queueing commands while fnic_cleanup_io() is active. Link: https://lore.kernel.org/r/20200116102053.62755-1-hare@suse.de Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
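A hedged sketch of the guard (the flag is an illustrative stand-in for whatever state the driver consults, not the actual field):

    #include <linux/compiler.h>
    #include <linux/types.h>
    #include <scsi/scsi.h>

    /* Reject new commands while the firmware-reset cleanup runs, so
     * fnic_cleanup_io() cannot call 'done' on a command queued concurrently.
     * 'fwreset_in_progress' is assumed to be set before cleanup starts. */
    static int queuecommand_fwreset_guard(bool fwreset_in_progress)
    {
    	if (unlikely(fwreset_in_progress))
    		return SCSI_MLQUEUE_HOST_BUSY;	/* midlayer retries later */
    	return 0;	/* proceed with the normal queueing path */
    }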
2020-01-20Input: pm8xxx-vib - fix handling of separate enable registerStephan Gerhold
Setting the vibrator enable_mask is not implemented correctly: for regmap_update_bits(map, reg, mask, val) we pass in either regs->enable_mask or 0 (= no-op) as mask and "val" as value. But "val" actually refers to the vibrator voltage control register, which has nothing to do with the enable_mask. So we usually end up doing nothing when we really wanted to enable the vibrator. We want to set or clear the enable_mask (to enable/disable the vibrator). Therefore, change the call to always modify the enable_mask and set the bits only if we want to enable the vibrator. Fixes: d4c7c5c96c92 ("Input: pm8xxx-vib - handle separate enable register") Signed-off-by: Stephan Gerhold <stephan@gerhold.net> Link: https://lore.kernel.org/r/20200114183442.45720-1-stephan@gerhold.net Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
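The fix boils down to something like this (simplified; the real driver also programs the voltage control register separately):

    #include <linux/regmap.h>
    #include <linux/types.h>

    /* Drive only the enable bits: passing the mask as 'val' sets them, passing
     * 0 clears them, leaving the rest of the register untouched. */
    static int vib_set_enable(struct regmap *map, unsigned int enable_reg,
    			  unsigned int enable_mask, bool on)
    {
    	return regmap_update_bits(map, enable_reg, enable_mask,
    				  on ? enable_mask : 0);
    }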
2020-01-20Documentation: update adfs filesystem documentationRussell King
Add an introduction to adfs to its documentation detailing which formats are supported by the module. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: mostly divorce inode number from indirect disc addressRussell King
Avoid using the inode number as the indirect disc address, even though these currently have the same value. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: super: add support for E and E+ floppy image formatsRussell King
Add support for ADFS E and E+ floppy image formats, which, unlike their hard disk variants, do not have a filesystem boot block - they have a single map zone, with the map fragment stored at sector 0. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: super: extract filesystem block probeRussell King
Separate the filesystem block probing from the superblock filling so we can support other ADFS filesystem formats, such as the single-zone E and E+ floppy image formats which do not have a boot block. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: dir: remove debug in adfs_dir_update()Russell King
Remove the noisy debug in adfs_dir_update(). Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: super: fix inode droppingRussell King
When we have write support enabled, we must not drop inodes before they have been written back, otherwise we lose updates to the filesystem on umount. Keep the inodes around unless we are built in read-only mode, or we are mounted read-only. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: bigdir: implement directory update supportRussell King
Implement big directory entry update support in the same way that we do for new directories. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: bigdir: calculate and validate directory checkbyteRussell King
When reading a big directory, calculate and validate the directory checkbyte to ensure that the directory contents are valid. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: bigdir: directory validation strengtheningRussell King
Strengthen the directory validation by ensuring that the header fields contain sensible values that fit inside the directory, and limit the directory size to 4MB as per RISC OS requirements. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: bigdir: extract directory validationRussell King
Extract the directory validation from the directory reading function as we will want to re-use this code. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: bigdir: factor out directory entry offset calculationRussell King
Factor out the directory entry byte offset calculation. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: newdir: split out directory commit from updateRussell King
After changing a directory, we need to update the sequence numbers and calculate the new check byte before the directory is scheduled to be written back to the media. Since this needs to happen for any change to the directory, move this into a separate method. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: newdir: clean up adfs_f_update()Russell King
__adfs_dir_put() and adfs_dir_find_entry() are only called from adfs_f_update(), so move them into this function, removing some unnecessary entry copying by doing so. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: newdir: merge adfs_dir_read() into adfs_f_read()Russell King
adfs_dir_read() is only called from adfs_f_read(), so merge it into that function. As new directories are always 2048 bytes in size (which we rely on elsewhere), we can consolidate some of the code. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: newdir: improve directory validationRussell King
Check that the lastmask and reserved fields are all zero, as per the documentation. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: newdir: factor out directory format validationRussell King
We have two locations where we validate the new directory format, so factor this out to a helper. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: dir: use pointers to access directory head/tailsRussell King
Add and use pointers in the adfs_dir structure to access the directory head and tail structures, which will always be contiguous in a buffer. This allows us to avoid memcpy()ing the data in the new directory code, making it slightly more efficient. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: dir: add more efficient iterate() per-format methodRussell King
Rather than using setpos + getnext to iterate through the directory entries, pass iterate() down to the dir format code to populate the dirents. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: dir: switch to iterate_shared methodRussell King
There is nothing in our readdir (aka iterate) method that relies on the directory inode being exclusively locked, so switch to using the iterate_shared() hook rather than iterate(). Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-01-20fs/adfs: dir: improve compiler coverage in adfs_dir_updateRussell King
Get rid of the ifdef, using IS_ENABLED() instead to detect whether the code should be callable. This allows the compiler to always parse the following code, reducing the chances of errors being missed. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
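For illustration (the exact condition in adfs_dir_update() may differ; CONFIG_ADFS_FS_RW is the adfs write-support option, and the helper name is hypothetical):

    #include <linux/fs.h>
    #include <linux/kconfig.h>

    /* With IS_ENABLED() the write path is always parsed by the compiler and
     * simply optimised away in read-only builds, instead of hiding behind
     * an #ifdef where errors can go unnoticed. */
    static bool adfs_update_allowed(const struct inode *inode)
    {
    	return IS_ENABLED(CONFIG_ADFS_FS_RW) && !IS_RDONLY(inode);
    }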
2020-01-20fs/adfs: dir: improve update failure handlingRussell King
When we update a directory, a number of errors may happen. If we failed to find the entry to update, we can just release the directory buffers as normal. However, if we have some other error, we may have partially updated the buffers, resulting in an invalid directory. In this case, we need to discard the buffers to avoid writing the contents back to the media, and later re-read the directory from the media. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>