2021-10-26  btrfs: defrag: introduce helper to defrag one cluster  (Qu Wenruo)
This new helper, defrag_one_cluster(), will defrag one cluster (at most 256K):

- Collect all initial targets
- Kick in readahead when possible
- Call defrag_one_range() on each initial target, with some extra range clamping
- Update the @sectors_defragged parameter

This involves one behavior change: the defragged sectors accounting is no longer as accurate as the old behavior, as the initial targets are not consistent. We can have new holes punched inside the initial target, and we will skip such holes later. But the defragged sectors accounting doesn't need to be that accurate anyway, so I don't want to push that extra accounting burden into defrag_one_range().

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
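As a rough sketch of the flow described above (the simplified signatures, the target struct layout and the cleanup details are assumptions for illustration, not the actual btrfs code):

  /* Illustrative only: cluster-level loop feeding defrag_one_range(). */
  struct defrag_target_range {
      struct list_head list;
      u64 start;
      u64 len;
  };

  static int defrag_one_cluster(struct btrfs_inode *inode,
                                struct file_ra_state *ra,
                                u64 start, u32 len,
                                unsigned long *sectors_defragged)
  {
      struct defrag_target_range *entry;
      struct defrag_target_range *tmp;
      LIST_HEAD(target_list);
      int ret;

      /* Stage 1: collect the initial targets inside [start, start + len). */
      ret = defrag_collect_targets(inode, start, len, &target_list);
      if (ret < 0)
          return ret;

      list_for_each_entry(entry, &target_list, list) {
          /* Extra clamping so we never cross the cluster boundary. */
          u32 range_len = min_t(u64, entry->len,
                                start + len - entry->start);

          /* Kick in readahead before touching the pages. */
          if (ra)
              page_cache_sync_readahead(inode->vfs_inode.i_mapping,
                                        ra, NULL,
                                        entry->start >> PAGE_SHIFT,
                                        range_len >> PAGE_SHIFT);

          ret = defrag_one_range(inode, entry->start, range_len);
          if (ret < 0)
              break;

          /* Coarse accounting; as noted above, it is not exact. */
          *sectors_defragged += range_len /
                                inode->root->fs_info->sectorsize;
      }

      /* Free the collected targets whether or not we defragged them all. */
      list_for_each_entry_safe(entry, tmp, &target_list, list) {
          list_del_init(&entry->list);
          kfree(entry);
      }
      return ret;
  }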
2021-10-26  btrfs: defrag: introduce helper to defrag a range  (Qu Wenruo)
A new helper, defrag_one_range(), is introduced to defrag one range. This function will mostly prepare the needed pages and extent status for defrag_one_locked_target(). As we can only have a consistent view of the extent map with the page and extent bits locked, we need to re-check the range passed in to get a real target list for defrag_one_locked_target(). Since defrag_collect_targets() will call defrag_lookup_extent() and lock the extent range, we also need to teach those two functions to skip the extent lock. Thus a new parameter, @locked, is introduced to skip the extent lock if the caller has already locked the range. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2021-10-26  btrfs: defrag: introduce helper to defrag a contiguous prepared range  (Qu Wenruo)
A new helper, defrag_one_locked_target(), is introduced to do the real part of the defrag. The caller needs to ensure that both the pages and the extent bits are locked, that no ordered extent exists for the range, and that all writeback is finished. The core defrag part is pretty straightforward:

- Reserve space
- Set extent bits to defrag
- Update the involved pages to be dirty

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2021-10-26  btrfs: defrag: introduce helper to collect target file extents  (Qu Wenruo)
Introduce a helper, defrag_collect_targets(), to collect all possible targets to be defragged. This function will not consider things like max_sectors_to_defrag, thus the caller is responsible for ensuring we don't exceed the limit. This function will be the first stage of the later defrag rework. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2021-10-26  btrfs: defrag: factor out page preparation into a helper  (Qu Wenruo)
In cluster_pages_for_defrag(), we have a complex code block inside one for() loop. The code block prepares one page for defrag and ensures:

- The page is locked and set up properly
- No ordered extent exists in the page range
- The page is uptodate

This behavior is pretty common and will be reused by the later defrag rework, so factor the code out into its own helper, defrag_prepare_one_page(), for later usage, and clean up the code a little.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2021-10-26  btrfs: defrag: replace hard coded PAGE_SIZE with sectorsize  (Qu Wenruo)
When testing subpage defrag support, I always hit some strange inode nbytes errors. After a lot of debugging, it turned out that defrag_lookup_extent() uses PAGE_SIZE as the size for lookup_extent_mapping(). Since lookup_extent_mapping() calls __lookup_extent_mapping() with @strict == 1, any extent map smaller than one page will be ignored, preventing subpage defrag from grabbing a correct extent map.

There are quite a few PAGE_SIZE usages in ioctl.c, but most of them are correct and fall into one of the following cases:

- ioctl structure size checks: we want the ioctl structure to be contained inside one page
- real page operations

The remaining cases, in defrag_lookup_extent() and check_defrag_in_cache(), are addressed in this patch.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
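A minimal sketch of the corrected lookup, assuming the usual btrfs accessors; the body here is illustrative rather than the exact patched function:

  /* Look up the extent map covering @start using the filesystem sector
   * size rather than PAGE_SIZE, so subpage extent maps are still found. */
  static struct extent_map *defrag_lookup_extent(struct inode *inode, u64 start)
  {
      struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
      const u32 sectorsize = BTRFS_I(inode)->root->fs_info->sectorsize;
      struct extent_map *em;

      read_lock(&em_tree->lock);
      /* Was: lookup_extent_mapping(em_tree, start, PAGE_SIZE); */
      em = lookup_extent_mapping(em_tree, start, sectorsize);
      read_unlock(&em_tree->lock);

      return em;
  }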
2021-10-26  btrfs: defrag: also check PagePrivate for subpage cases in cluster_pages_for_defrag()  (Qu Wenruo)
In cluster_pages_for_defrag() we have a window where we unlock the page to either start the ordered extent or read the content from disk. When we re-lock the page, we need to make sure it still has the correct page->private for subpage. Thus add the extra PagePrivate check here to handle subpage cases properly. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2021-10-26  btrfs: defrag: pass file_ra_state instead of file to btrfs_defrag_file()  (Qu Wenruo)
Currently btrfs_defrag_file() accepts both "struct inode" and "struct file" as parameters. We can easily grab the "struct inode" from the "struct file" using the file_inode() helper. The only reason we need the "struct file" is to re-use its f_ra. Change this to pass a "struct file_ra_state" parameter instead, so that it is clearer what we really want. Since we're here, also add some comments to btrfs_defrag_file(). Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
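Illustrative before/after prototypes; the exact upstream signatures may differ slightly:

  /*
   * Before: struct file was passed in only to reach file->f_ra.
   *
   *   int btrfs_defrag_file(struct inode *inode, struct file *file,
   *                         struct btrfs_ioctl_defrag_range_args *range,
   *                         u64 newer_than, unsigned long max_to_defrag);
   */

  /* After: pass the readahead state directly; NULL means no readahead. */
  int btrfs_defrag_file(struct inode *inode, struct file_ra_state *ra,
                        struct btrfs_ioctl_defrag_range_args *range,
                        u64 newer_than, unsigned long max_to_defrag);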
2021-10-26  btrfs: rename and switch to bool btrfs_chunk_readonly  (Anand Jain)
btrfs_chunk_readonly() checks if the given chunk is writeable. It returns 1 for readonly and 0 for writeable, so a bool return type suffices instead of the current int. Also, rename btrfs_chunk_readonly() to btrfs_chunk_writeable(), as we check whether the block group is writeable; this keeps the logic in the parent function simpler to understand. Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
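A sketch of the conversion pattern (the parameter list is an assumption for illustration):

  /*
   * Before: int btrfs_chunk_readonly(struct btrfs_fs_info *fs_info,
   *                                  u64 chunk_offset);
   *         returns 1 for read-only, 0 for writeable.
   */

  /* After: a bool with the inverted, more natural sense. */
  bool btrfs_chunk_writeable(struct btrfs_fs_info *fs_info, u64 chunk_offset);

  /*
   * Callers flip the condition:
   *     if (btrfs_chunk_readonly(fs_info, offset))
   * becomes
   *     if (!btrfs_chunk_writeable(fs_info, offset))
   */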
2021-10-26  btrfs: reflink: initialize return value to 0 in btrfs_extent_same()  (Sidong Yang)
Fix a warning reported by smatch that ret could be returned without being initialized. The dedupe operations are supposed to return 0 for a 0 length range, but the caller does not pass olen == 0. To keep this behaviour and also fix the warning, initialize ret to 0. Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Sidong Yang <realwakka@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
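A generic illustration of the bug class, not the btrfs function itself (dedupe_one_chunk() is a hypothetical helper):

  int dedupe_one_chunk(unsigned long i);  /* hypothetical helper */

  static int dedupe_chunks(unsigned long chunk_count)
  {
      int ret = 0;  /* without this, chunk_count == 0 returns garbage */
      unsigned long i;

      for (i = 0; i < chunk_count; i++) {
          ret = dedupe_one_chunk(i);
          if (ret)
              break;
      }
      return ret;  /* stays 0 when the loop never runs */
  }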
2021-10-26  btrfs: subpage: pack all subpage bitmaps into a larger bitmap  (Qu Wenruo)
Currently we use a u16 bitmap to make 4K sectorsize work for 64K page size. But this u16 bitmap is not large enough for larger page sizes like 128K, nor is it space efficient for 16K page size. To handle both cases, pack all subpage bitmaps into a larger bitmap; btrfs_subpage::bitmaps[] will now be the ultimate bitmap for subpage usage. Each sub-bitmap has its start bit number recorded in btrfs_subpage_info::*_start, and its bitmap length recorded in btrfs_subpage_info::bitmap_nr_bits. All subpage bitmap operations are converted from direct u16 operations to bitmap operations, with the above *_start values taken into account. For 64K page size with 4K sectorsize, this should not make much difference. For 16K page size, we will only need one unsigned long (u32) to store all the bitmaps, which saves quite some space. Furthermore, this allows us to support larger page sizes like 128K and 256K. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2021-10-26  fs: remove leftover comments from mandatory locking removal  (Jeff Layton)
Stragglers from commit f7e33bdbd6d1 ("fs: remove mandatory file locking support"). Signed-off-by: Jeff Layton <jlayton@kernel.org>
2021-10-26  MAINTAINERS: drop obsolete file pattern in SDHCI DRIVER section  (Lukas Bulwahn)
Commit 5c67aa59bd8f ("mmc: sdhci-pci: Remove dead code (struct sdhci_pci_data et al)") removes ./include/linux/mmc/sdhci-pci-data.h; so, there is no further file that matches 'include/linux/mmc/sdhci*'. Hence, ./scripts/get_maintainer.pl --self-test=patterns complains: warning: no file matches F: include/linux/mmc/sdhci* Drop this obsolete file pattern in SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) DRIVER. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Link: https://lore.kernel.org/r/20211022054740.25222-1-lukas.bulwahn@gmail.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-10-26  mmc: sdhci-esdhc-imx: add NXP S32G2 support  (Chester Lin)
Support the SDHCI controller found on NXP S32G2 platform. The new flag ESDHC_FLAG_SKIP_ERR004536 is used because the hardware erratum bit is not applicable for S32G2. Signed-off-by: Chester Lin <clin@suse.com> Link: https://lore.kernel.org/r/20211021071333.32485-3-clin@suse.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-10-26  dt-bindings: mmc: fsl-imx-esdhc: add NXP S32G2 support  (Chester Lin)
Add support for the SDHC binding of S32G2. Signed-off-by: Chester Lin <clin@suse.com> Reviewed-by: Andreas Färber <afaerber@suse.de> Link: https://lore.kernel.org/r/20211021071333.32485-2-clin@suse.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2021-10-26  Merge branch 'fixes' into next  (Ulf Hansson)
2021-10-26  mmc: cqhci: clear HALT state after CQE enable  (Wenbin Mei)
When mmc0 enters the suspend state, we need to halt CQE to send legacy commands (flush cache) and then disable CQE; on resume we enable CQE but do not clear the HALT state. In this case the MediaTek mmc host controller keeps the HALT state across the CQE disable/enable flow, so the next CQE transfer after resume times out because CQE is still in the HALT state. The log is as below:

<4>.(4)[318:kworker/4:1H]mmc0: cqhci: timeout for tag 2
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: ============ CQHCI REGISTER DUMP ===========
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Caps: 0x100020b6 | Version: 0x00000510
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Config: 0x00001103 | Control: 0x00000001
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Int stat: 0x00000000 | Int enab: 0x00000006
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Int sig: 0x00000006 | Int Coal: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: TDL base: 0xfd05f000 | TDL up32: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Doorbell: 0x8000203c | TCN: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Dev queue: 0x00000000 | Dev Pend: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Task clr: 0x00000000 | SSC1: 0x00001000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: SSC2: 0x00000001 | DCMD rsp: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: RED mask: 0xfdf9a080 | TERRI: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: Resp idx: 0x00000000 | Resp arg: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: CRNQP: 0x00000000 | CRNQDUN: 0x00000000
<4>.(4)[318:kworker/4:1H]mmc0: cqhci: CRNQIS: 0x00000000 | CRNQIE: 0x00000000

This change checks the HALT state after CQE enable and, if CQE is in the HALT state, clears it.

Signed-off-by: Wenbin Mei <wenbin.mei@mediatek.com>
Cc: stable@vger.kernel.org
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Fixes: a4080225f51d ("mmc: cqhci: support for command queue enabled host")
Link: https://lore.kernel.org/r/20211026070812.9359-1-wenbin.mei@mediatek.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
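The idea boils down to something like the following sketch, using the register accessors and bit names found elsewhere in cqhci; this is an illustration, not necessarily the exact upstream hunk:

  /* After (re)enabling CQE, clear a stale HALT bit if the controller
   * carried it across the disable/enable cycle. */
  static void cqhci_clear_stale_halt(struct cqhci_host *cq_host)
  {
      if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)
          cqhci_writel(cq_host, 0, CQHCI_CTL);
  }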
2021-10-26  mmc: vub300: fix control-message timeouts  (Johan Hovold)
USB control-message timeouts are specified in milliseconds and should specifically not vary with CONFIG_HZ. Fixes: 88095e7b473a ("mmc: Add new VUB300 USB-to-SD/SDIO/MMC driver") Cc: stable@vger.kernel.org # 3.0 Signed-off-by: Johan Hovold <johan@kernel.org> Link: https://lore.kernel.org/r/20211025115608.5287-1-johan@kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
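The general pattern looks like this; usb_control_msg() takes its timeout in milliseconds. This is a sketch with assumed request arguments and timeout value, not the vub300 diff itself:

  #define VUB300_USB_TIMEOUT_MS  2000  /* assumed value, for illustration */

      /* Wrong: HZ is a jiffies-per-second constant, so the timeout
       * silently changes with CONFIG_HZ. */
      retval = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), req,
                               USB_DIR_OUT | USB_TYPE_VENDOR,
                               value, index, buf, len, HZ);

      /* Right: an explicit millisecond timeout. */
      retval = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), req,
                               USB_DIR_OUT | USB_TYPE_VENDOR,
                               value, index, buf, len, VUB300_USB_TIMEOUT_MS);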
2021-10-26  mmc: dw_mmc: exynos: fix the finding clock sample value  (Jaehoon Chung)
Even when there are candidate values, -EIO is returned if the best value cannot be found. That is not proper behavior: if there is no best value, use the first candidate value so that the eMMC keeps working. Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Christian Hewitt <christianshewitt@gmail.com> Cc: stable@vger.kernel.org Fixes: c537a1c5ff63 ("mmc: dw_mmc: exynos: add variable delay tuning sequence") Link: https://lore.kernel.org/r/20211022082106.1557-1-jh80.chung@samsung.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
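A generic sketch of the fallback logic, with hypothetical names:

  /* Prefer the ideal sample; otherwise fall back to the first working
   * candidate; fail only when nothing works at all. */
  static int pick_clksmpl(const u8 *candidates, int nr_candidates, int best)
  {
      if (best >= 0)
          return best;
      if (nr_candidates > 0)
          return candidates[0];
      return -EIO;
  }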
2021-10-26  MAINTAINERS: Add maintainers for DHCOM i.MX6 and DHCOM/DHCOR STM32MP1  (Christoph Niedermaier)
Add maintainers for DH electronics DHCOM i.MX6 and DHCOM/DHCOR STM32MP1 boards. Signed-off-by: Christoph Niedermaier <cniedermaier@dh-electronics.com> Cc: linux-arm-kernel@lists.infradead.org Cc: kernel@dh-electronics.com Cc: arnd@arndb.de Link: https://lore.kernel.org/r/20211025073706.2794-1-cniedermaier@dh-electronics.com To: soc@kernel.org To: linux-kernel@vger.kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-10-26  block: drain queue after disk is removed from sysfs  (Ming Lei)
Before the disk is removed from sysfs, userspace may still change the queue via sysfs, for example by switching the elevator or setting the wbt latency; both may reinitialize wbt, and then the warning in blk_free_queue_stats() is triggered since rq_qos_exit() has been moved to del_gendisk(). Fix the issue by moving the queue draining and teardown to after the disk is removed from sysfs; at that point no one can enter the queue's store()/show() handlers. Reported-by: Yi Zhang <yi.zhang@redhat.com> Tested-by: Yi Zhang <yi.zhang@redhat.com> Fixes: 8e141f9eb803 ("block: drain file system I/O on del_gendisk") Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20211026101204.2897166-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-26  blk-mq: don't issue request directly in case that current is to be blocked  (Ming Lei)
When flushing the plug list in the case that the current task is about to be blocked, we can't issue requests directly, because ->queue_rq() may sleep; otherwise the scheduler may complain. Fixes: dc5fc361d891 ("block: attempt direct issue of plug list") Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20211026082257.2889890-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-26  Merge tag 'qcom-arm64-fixes-for-5.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux into arm/fixes  (Arnd Bergmann)
Qualcomm ARM64 DTS: one more fix for 5.15. This reverts a clock change in the Qualcomm RB5 devicetree which, in some combinations of firmware and configuration, causes the device to crash during boot. Data on an adjacent platform indicates that this is probably not the root cause of the problem, but it resolves the regression seen on RB5 and will allow the SM8250 platform to boot v5.15.

* tag 'qcom-arm64-fixes-for-5.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux:
  Revert "arm64: dts: qcom: sm8250: remove bus clock from the mdss node for sm8250 target"

Link: https://lore.kernel.org/r/20211025201213.1145348-1-bjorn.andersson@linaro.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-10-26  MAINTAINERS: please remove myself from the Prestera driver  (Vadym Kochan)
Signed-off-by: Vadym Kochan <vkochan@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-26  net: lan78xx: fix division by zero in send path  (Johan Hovold)
Add the missing endpoint max-packet sanity check to probe() to avoid division by zero in lan78xx_tx_bh() in case a malicious device has broken descriptors (or when doing descriptor fuzz testing). Note that USB core will reject URBs submitted for endpoints with zero wMaxPacketSize but that drivers doing packet-size calculations still need to handle this (cf. commit 2548288b4fb0 ("USB: Fix: Don't skip endpoint descriptors with maxpacket=0")). Fixes: 55d7de9de6c3 ("Microchip's LAN7800 family USB 2/3 to 10/100/1000 Ethernet device driver") Cc: stable@vger.kernel.org # 4.3 Cc: Woojung.Huh@microchip.com <Woojung.Huh@microchip.com> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
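A sketch of the kind of probe-time check described above; the field and label names are assumptions, not the exact lan78xx code:

      /* Reject bogus descriptors early so the TX path never divides by a
       * zero wMaxPacketSize. */
      if (dev->maxpacket == 0) {
          dev_err(&intf->dev, "bogus endpoint: wMaxPacketSize = 0\n");
          ret = -EIO;
          goto out_free;
      }

      /* Later, in the TX path, this division is now safe: */
      remainder = skb->len % dev->maxpacket;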
2021-10-26  net: batman-adv: fix error handling  (Pavel Skripkin)
Syzbot reported an ODEBUG warning in batadv_nc_mesh_free(). The problem was wrong error handling in batadv_mesh_init().

Before this patch, batadv_mesh_init() called batadv_mesh_free() in case any of the batadv_*_init() calls failed. This approach could work if there were some kind of indicator of which parts of batadv are initialized, but there isn't any. All of the above led to cleaning up uninitialized fields. Even if we hide the ODEBUG warning by initializing bat_priv->nc.work, syzbot was able to hit a GPF in batadv_nc_purge_paths(), because the hash pointer is still NULL. [1]

To fix these bugs we can unwind the batadv_*_init() calls one by one. This is a good approach for two reasons:

1) It fixes the bugs on the error handling path
2) It improves performance, since we won't call unneeded batadv_*_free() functions

So, this patch makes all batadv_*_init() functions clean up their own allocated memory before returning an error (so there is no need to call the corresponding batadv_*_free()), and open-codes batadv_mesh_free() with the proper order to avoid touching uninitialized fields.

Link: https://lore.kernel.org/netdev/000000000000c87fbd05cef6bcb0@google.com/ [1]
Reported-and-tested-by: syzbot+28b0702ada0bf7381f58@syzkaller.appspotmail.com
Fixes: c6c8fea29769 ("net: Add batman-adv meshing protocol")
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Acked-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
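A generic illustration of the unwind pattern described above; the struct and the *_init()/*_free() helpers are hypothetical stand-ins, not batman-adv symbols:

  static int mesh_init(struct bat_priv *priv)
  {
      int ret;

      ret = originator_init(priv);
      if (ret < 0)
          goto err;
      ret = tt_init(priv);
      if (ret < 0)
          goto err_orig;
      ret = nc_mesh_init(priv);
      if (ret < 0)
          goto err_tt;
      return 0;

      /* Unwind only what was successfully initialized, in reverse order. */
  err_tt:
      tt_free(priv);
  err_orig:
      originator_free(priv);
  err:
      return ret;
  }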
2021-10-26  tracing/hwlat: Make some internal symbols static  (Wang ShaoBo)
The sparse tool complains as follows:

kernel/trace/trace_hwlat.c:82:27: warning: symbol 'hwlat_single_cpu_data' was not declared. Should it be static?
kernel/trace/trace_hwlat.c:83:1: warning: symbol '__pcpu_scope_hwlat_per_cpu_data' was not declared. Should it be static?

These symbols are not used outside of trace_hwlat.c, so this commit marks them static.

Link: https://lkml.kernel.org/r/20211021035225.1050685-1-bobo.shaobowang@huawei.com
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-10-26  tracing: Fix missing trace_boot_init_histograms kstrdup NULL checks  (Mathieu Desnoyers)
trace_boot_init_histograms misses NULL pointer checks for kstrdup failure. Link: https://lkml.kernel.org/r/20211015195550.22742-1-mathieu.desnoyers@efficios.com Fixes: 64dc7f6958ef5 ("tracing/boot: Show correct histogram error command") Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
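The missing check is the usual allocation guard; a minimal sketch with illustrative variable names (the enclosing function returns void):

      param = kstrdup(buf, GFP_KERNEL);
      if (!param)
          return;  /* kstrdup() can fail; bail out instead of
                    * dereferencing a NULL pointer later */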
2021-10-26  tipc: fix size validations for the MSG_CRYPTO type  (Max VA)
The function tipc_crypto_key_rcv is used to parse MSG_CRYPTO messages to receive keys from other nodes in the cluster in order to decrypt any further messages from them. This patch verifies that any supplied sizes in the message body are valid for the received message. Fixes: 1ef6f7c9390f ("tipc: add automatic session key exchange") Signed-off-by: Max VA <maxv@sentinelone.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Jon Maloy <jmaloy@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
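A generic illustration of the kind of check involved, not the exact tipc hunk (names and layout are assumptions): a length field read from the payload is only trusted after comparing it against the number of bytes actually received.

  static bool key_size_ok(const unsigned char *data, size_t msg_len,
                          size_t hdr_len)
  {
      u32 keylen;

      if (msg_len < hdr_len + sizeof(keylen))
          return false;  /* too short to even hold the length field */

      memcpy(&keylen, data + hdr_len, sizeof(keylen));
      keylen = ntohl(keylen);

      /* The claimed key must fit in the remaining message body. */
      return keylen <= msg_len - hdr_len - sizeof(keylen);
  }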
2021-10-26  nfc: port100: fix using -ERRNO as command type mask  (Krzysztof Kozlowski)
During probing, the driver tries to get a list (mask) of supported command types in the port100_get_command_type_mask() function. The value is a u64 and 0 is treated as an invalid mask (no commands supported). However, the function also returns -ERRNO as a u64, which would be interpreted as a valid command mask. Return 0 on every error case of port100_get_command_type_mask(), so the probing will stop. Cc: <stable@vger.kernel.org> Fixes: 0347a6ab300a ("NFC: port100: Commands mechanism implementation") Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
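In sketch form (read_mask_from_device() is a hypothetical helper standing in for the real command exchange):

  /* Map every failure to 0 so callers see "no commands supported"
   * instead of mistaking a negative errno for a huge valid mask. */
  static u64 port100_get_command_type_mask(struct port100 *dev)
  {
      u64 mask = 0;

      if (read_mask_from_device(dev, &mask) < 0)
          return 0;  /* was: a negative errno cast to u64 */

      return mask;  /* 0 still means "nothing supported" */
  }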
2021-10-26  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (David S. Miller)
Tony Nguyen says:

====================
Intel Wired LAN Driver Updates 2021-10-25

This series contains updates to the ice driver only.

Dave adds an event handler for LAG NETDEV_UNREGISTER to unlink the device from the link aggregate.

Yongxin Liu adds a check for PTP support during release, which would otherwise cause a call trace on devices without PTP support.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2021-10-26  net: multicast: calculate csum of looped-back and forwarded packets  (Cyril Strejc)
While testing a user-space application which transmits UDP multicast datagrams and uses multicast routing to send the UDP datagrams out of defined network interfaces, I found that a multicast router does not fill in the UDP checksum of locally produced, looped-back and forwarded UDP datagrams if the original output NIC the datagrams are sent to has UDP TX checksum offload enabled. The datagrams are sent malformed out of the NIC the datagrams have been forwarded to.

It is because:

1. If TX checksum offload is enabled on the output NIC, the UDP checksum is not calculated by the kernel and is not filled into the skb data.
2. dev_loopback_xmit(), which is called solely by ip_mc_finish_output(), sets skb->ip_summed = CHECKSUM_UNNECESSARY unconditionally.
3. Since 35fc92a9 ("[NET]: Allow forwarding of ip_summed except CHECKSUM_COMPLETE"), the ip_summed value is preserved during forwarding.
4. If ip_summed != CHECKSUM_PARTIAL, the checksum is not calculated during packet egress.

The minimal fix in dev_loopback_xmit():

1. Preserves skb->ip_summed == CHECKSUM_PARTIAL. This is the case when the original output NIC has TX checksum offload enabled. The effects are:
   a) If the forwarding destination interface supports TX checksum offloading, the NIC driver is responsible for filling in the checksum.
   b) If the forwarding destination interface does NOT support TX checksum offloading, checksums are filled in by the kernel before the skb is submitted to the NIC driver.
   c) For local delivery, checksum validation is skipped as in the case of CHECKSUM_UNNECESSARY, thanks to skb_csum_unnecessary().
2. Translates ip_summed CHECKSUM_NONE to CHECKSUM_UNNECESSARY. That is, for CHECKSUM_NONE the behavior is unmodified; it is there to skip checksum validation on local delivery of a looped-back packet.

Signed-off-by: Cyril Strejc <cyril.strejc@skoda.cz>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
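The core of the change, as described above, amounts to making the assignment conditional; a sketch of the relevant lines only, not the whole function:

      /* was: skb->ip_summed = CHECKSUM_UNNECESSARY;  (unconditional) */
      if (skb->ip_summed == CHECKSUM_NONE)
          skb->ip_summed = CHECKSUM_UNNECESSARY;
      /* CHECKSUM_PARTIAL is preserved, so the checksum is still filled
       * in on the forwarding path or by the destination NIC. */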
2021-10-26  spi: spl022: fix Microwire full duplex mode  (Thomas Perrot)
There are missing braces in the function that verifies the controller parameters, so an error is always returned when the parameter selecting Microwire frame operation is used on devices that allow it. Signed-off-by: Thomas Perrot <thomas.perrot@bootlin.com> Link: https://lore.kernel.org/r/20211022142104.1386379-1-thomas.perrot@bootlin.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-10-26  genirq: Hide irq_cpu_{on,off}line() behind a deprecated option  (Marc Zyngier)
irq_cpu_{on,off}line() are now only used by the Octeon platform. Make their use conditional on this platform being enabled, and otherwise hide them away. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Serge Semin <fancer.lancer@gmail.com> Link: https://lore.kernel.org/r/20211021170414.3341522-4-maz@kernel.org
2021-10-26  irqchip/mips-gic: Get rid of the reliance on irq_cpu_online()  (Marc Zyngier)
The MIPS GIC driver uses irq_cpu_online() to go and program the per-CPU interrupts. However, this method iterates over all IRQs in the system, despite only 3 per-CPU interrupts being of interest. Let's be terribly bold and do the iteration ourselves. To ensure mutual exclusion, hold the gic_lock spinlock that is otherwise taken while dealing with these interrupts. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Serge Semin <fancer.lancer@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Serge Semin <fancer.lancer@gmail.com> Link: https://lore.kernel.org/r/20211021170414.3341522-3-maz@kernel.org
2021-10-26  MIPS: loongson64: Drop call to irq_cpu_offline()  (Marc Zyngier)
Although loongson64 calls irq_cpu_offline(), none of its interrupt controllers implement the .irq_cpu_offline callback. It is thus obvious that this call only serves the dubious purpose of wasting precious CPU cycles by iterating over all interrupts. Get rid of the call altogether. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Serge Semin <fancer.lancer@gmail.com> Link: https://lore.kernel.org/r/20211021170414.3341522-2-maz@kernel.org
2021-10-26  arm64/sve: Fix warnings when SVE is disabled  (Mark Brown)
In configurations where SVE is disabled we define but never reference the functions for retrieving the default vector length, causing warnings. Fix this by moving the ifdef up, marking get_default_vl() inline since it is referenced from code guarded by an IS_ENABLED() check, and doing the same for the other accessors for consistency. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20211022141635.2360415-3-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-10-26  arm64/sve: Add stub for sve_max_virtualisable_vl()  (Mark Brown)
Fixes build problems for configurations with KVM enabled but SVE disabled. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20211022141635.2360415-2-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-10-26  irq: remove handle_domain_{irq,nmi}()  (Mark Rutland)
Now that entry code handles IRQ entry (including setting the IRQ regs) before calling irqchip code, irqchip code can safely call generic_handle_domain_irq(), and there's no functional reason for it to call handle_domain_irq(). Let's cement this split of responsibility and remove handle_domain_irq() entirely, updating irqchip drivers to call generic_handle_domain_irq(). For consistency, handle_domain_nmi() is similarly removed and replaced with a generic_handle_domain_nmi() function which also does not perform any entry logic. Previously handle_domain_{irq,nmi}() had a WARN_ON() which would fire when they were called in an inappropriate context. So that we can identify similar issues going forward, similar WARN_ON_ONCE() logic is added to the generic_handle_*() functions, and comments are updated for clarity and consistency. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de>
2021-10-26  irq: remove CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY  (Mark Rutland)
Now that all users of CONFIG_HANDLE_DOMAIN_IRQ perform the irq entry work themselves, we can remove the legacy CONFIG_HANDLE_DOMAIN_IRQ_IRQENTRY behaviour. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de>
2021-10-26  irq: riscv: perform irqentry in entry code  (Mark Rutland)
In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/riscv perform all the irqentry accounting in its entry code. As arch/riscv uses GENERIC_IRQ_MULTI_HANDLER, we can use generic_handle_arch_irq() to do so. Since generic_handle_arch_irq() handles the irq entry and setting the irq regs, and happens before the irqchip code calls handle_IPI(), we can remove the redundant irq entry and irq regs manipulation from handle_IPI(). There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Guo Ren <guoren@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Thomas Gleixner <tglx@linutronix.de>
2021-10-26  irq: openrisc: perform irqentry in entry code  (Mark Rutland)
In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/openrisc perform all the irqentry accounting in its entry code. As arch/openrisc uses GENERIC_IRQ_MULTI_HANDLER, we can use generic_handle_arch_irq() to do so. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Stafford Horne <shorne@gmail.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: Thomas Gleixner <tglx@linutronix.de>
2021-10-26  irq: csky: perform irqentry in entry code  (Mark Rutland)
In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/csky perform all the irqentry accounting in its entry code. As arch/csky uses GENERIC_IRQ_MULTI_HANDLER, we can use generic_handle_arch_irq() to do so. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Guo Ren <guoren@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de>
2021-10-26  irq: arm64: perform irqentry in entry code  (Mark Rutland)
In preparation for removing HANDLE_DOMAIN_IRQ_IRQENTRY, have arch/arm64 perform all the irqentry accounting in its entry code. As arch/arm64 already performs portions of the irqentry logic in enter_from_kernel_mode() and exit_to_kernel_mode(), including rcu_irq_{enter,exit}(), the only additional calls that need to be made are to irq_{enter,exit}_rcu(). Removing the calls to rcu_irq_{enter,exit}() from handle_domain_irq() ensures that we inform RCU once per IRQ entry and will correctly identify quiescent periods.

Since we should not call irq_{enter,exit}_rcu() when entering a pseudo-NMI, el1_interrupt() is reworked to have separate __el1_irq() and __el1_pnmi() paths for regular IRQ and pseudo-NMI entry, with irq_{enter,exit}_rcu() only called for the former.

In preparation for removing HANDLE_DOMAIN_IRQ, the irq regs are managed in do_interrupt_handler() for both regular IRQ and pseudo-NMI. This is currently redundant, but not harmful.

For clarity the preemption logic is moved into __el1_irq(). We should never preempt within a pseudo-NMI, and arm64_enter_nmi() already enforces this by incrementing the preempt_count, but it's clearer if we never invoke the preemption logic when entering a pseudo-NMI.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Pingfan Liu <kernelfans@gmail.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
2021-10-26  x86/fpu/amx: Enable the AMX feature in 64-bit mode  (Chang S. Bae)
Add the AMX state components to XFEATURE_MASK_USER_SUPPORTED and the TILE_DATA component to the dynamic states, and update the permission check table accordingly. This is only effective on 64-bit kernels, as XFEATURE_MASK_TILE is defined as 0 for 32-bit kernels. TILE_DATA is caller-saved state and the only dynamic state. Add a build-time sanity check to ensure the assumption that every dynamic feature is caller-saved. Make the AMX state depend on XFD, as it is a dynamic feature. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-24-chang.seok.bae@intel.com
2021-10-26  x86/fpu: Add XFD handling for dynamic states  (Chang S. Bae)
To handle the dynamic sizing of buffers on first use, the XFD MSR has to be armed. Store the delta between the maximum available and the default feature bits in init_fpstate, where it can be retrieved for task creation. If the delta is non-zero, then dynamic features are enabled. This also needs to enable the static key which guards the XFD updates; that is delayed to an initcall because the FPU setup runs before jump labels are initialized. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-23-chang.seok.bae@intel.com
2021-10-26  x86/fpu: Calculate the default sizes independently  (Chang S. Bae)
When dynamically enabled states are supported, the maximum and default sizes for the kernel buffers and user space interfaces are no longer identical. Put the necessary calculations in place, which take only the default enabled features into account. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-22-chang.seok.bae@intel.com
2021-10-26  x86/fpu/amx: Define AMX state components and have it used for boot-time checks  (Chang S. Bae)
The XSTATE initialization uses check_xstate_against_struct() to sanity check the size of XSTATE-enabled features. AMX is an XSAVE-enabled feature, and its size is not hard-coded but discoverable at run-time via CPUID. The AMX state is composed of state components 17 and 18, which are both user state components. The first component is the XTILECFG state of a 64-byte tile-related control register. State component 18, called XTILEDATA, contains the actual tile data, and its size varies between implementations. The architectural maximum, as defined in CPUID(0x1d, 1): EAX[15:0], is a byte less than 64KB. The first implementation supports 8KB. Check the XTILEDATA state size dynamically. The feature introduces the new tile register, TMM. Define one register struct only and read the number of registers from CPUID. Cross-check the overall size with CPUID again. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-21-chang.seok.bae@intel.com
2021-10-26  x86/fpu/xstate: Prepare XSAVE feature table for gaps in state component numbers  (Chang S. Bae)
The kernel checks at boot time which features are available by walking a XSAVE feature table which contains the CPUID feature bit numbers which need to be checked whether a feature is available on a CPU or not. So far the feature numbers have been linear, but AMX will create a gap which the current code cannot handle. Make the table entries explicitly indexed and adjust the loop code accordingly to prepare for that. No functional change. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Len Brown <len.brown@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-20-chang.seok.bae@intel.com
2021-10-26  x86/fpu/xstate: Add fpstate_realloc()/free()  (Chang S. Bae)
The fpstate embedded in struct fpu is the default state for storing the FPU registers. It is sized so that the default supported features can be stored. For dynamically enabled features the register buffer is too small. The #NM handler detects the first use of a feature which is disabled in the XFD MSR. After handling the permission checks, it recalculates the size for kernel space and user space state and invokes fpstate_realloc(), which tries to reallocate fpstate and install it. Provide the allocator function, which checks whether the current buffer size is sufficient and, if not, allocates a new one. If allocation is successful, the new fpstate is initialized with the new features and sizes, and the now enabled features are removed from the task's XFD mask. fpstate_realloc() uses vzalloc(). If use of this mechanism grows to re-allocate buffers larger than 64KB, a more sophisticated allocation scheme that includes purpose-built reclaim capability might be justified. Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-19-chang.seok.bae@intel.com