2022-01-04  f2fs: remove redundant invalidate compress pages  [Fengnan Chang]
Compressed pages are already invalidated in the truncate-block path, so remove the redundant invalidation of compressed pages in f2fs_evict_inode().
Signed-off-by: Fengnan Chang <changfengnan@vivo.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: Simplify bool conversion  [Yang Li]
Fix the following coccicheck warning:
  ./fs/f2fs/sysfs.c:491:41-46: WARNING: conversion to bool not needed here
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: don't drop compressed page cache in .{invalidate,release}page  [Chao Yu]
For a compressed inode, .{invalidate,release}page currently calls f2fs_invalidate_compress_pages() to drop all compressed page cache of the inode.
We don't need to drop the compressed page cache synchronously in .invalidatepage, because all truncation paths for compressed physical blocks are already covered by f2fs_invalidate_compress_page().
We also don't need to drop it synchronously in .releasepage, because if there is memory pressure we can count on page cache reclaim on sbi->compress_inode.
BTW, this patch may fix the issue reported below:
https://lore.kernel.org/linux-f2fs-devel/20211202092812.197647-1-changfengnan@vivo.com/T/#u
Signed-off-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: fix to reserve space for IO align feature  [Chao Yu]
https://bugzilla.kernel.org/show_bug.cgi?id=204137
With the script below, we hit a panic during new segment allocation:

  DISK=bingo.img
  MOUNT_DIR=/mnt/f2fs
  dd if=/dev/zero of=$DISK bs=1M count=105
  mkfs.f2fs -a 1 -o 19 -t 1 -z 1 -f -q $DISK
  mount -t f2fs $DISK $MOUNT_DIR -o "noinline_dentry,flush_merge,noextent_cache,mode=lfs,io_bits=7,fsync_mode=strict"
  for (( i = 0; i < 4096; i++ )); do
          name=`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 10`
          mkdir $MOUNT_DIR/$name
  done
  umount $MOUNT_DIR
  rm $DISK

--- Core dump ---
Call Trace:
  allocate_segment_by_default+0x9d/0x100 [f2fs]
  f2fs_allocate_data_block+0x3c0/0x5c0 [f2fs]
  do_write_page+0x62/0x110 [f2fs]
  f2fs_outplace_write_data+0x43/0xc0 [f2fs]
  f2fs_do_write_data_page+0x386/0x560 [f2fs]
  __write_data_page+0x706/0x850 [f2fs]
  f2fs_write_cache_pages+0x267/0x6a0 [f2fs]
  f2fs_write_data_pages+0x19c/0x2e0 [f2fs]
  do_writepages+0x1c/0x70
  __filemap_fdatawrite_range+0xaa/0xe0
  filemap_fdatawrite+0x1f/0x30
  f2fs_sync_dirty_inodes+0x74/0x1f0 [f2fs]
  block_operations+0xdc/0x350 [f2fs]
  f2fs_write_checkpoint+0x104/0x1150 [f2fs]
  f2fs_sync_fs+0xa2/0x120 [f2fs]
  f2fs_balance_fs_bg+0x33c/0x390 [f2fs]
  f2fs_write_node_pages+0x4c/0x1f0 [f2fs]
  do_writepages+0x1c/0x70
  __writeback_single_inode+0x45/0x320
  writeback_sb_inodes+0x273/0x5c0
  wb_writeback+0xff/0x2e0
  wb_workfn+0xa1/0x370
  process_one_work+0x138/0x350
  worker_thread+0x4d/0x3d0
  kthread+0x109/0x140
  ret_from_fork+0x25/0x30

The root cause is that, with the IO alignment feature enabled, a single 4k write may need up to F2FS_IO_SIZE() free blocks in the worst case, because the feature fills dummy pages to keep IO aligned. So we can easily run out of free segments during writeback of a non-inline directory's data, even during foreground GC.
To fix this issue, reserve additional free space for the IO alignment feature to cover the worst-case free space usage during FGGC.
Fixes: 0a595ebaaa6b ("f2fs: support IO alignment for DATA and NODE writes")
Signed-off-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: fix to check available space of CP area correctly in update_ckpt_flags()  [Chao Yu]
Otherwise, the nat_bits area may be persisted across the boundary of the CP area during nat_bits rebuilding.
Fixes: 94c821fb286b ("f2fs: rebuild nat_bits during umount")
Signed-off-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: support fault injection to f2fs_trylock_op()  [Chao Yu]
This patch supports injecting faults into f2fs_trylock_op().
Usage:
  a) echo 65536 > /sys/fs/f2fs/<dev>/inject_type
  or
  b) mount -o fault_type=65536 <dev> <mountpoint>
Signed-off-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: clean up __find_inline_xattr() with __find_xattr()  [Chao Yu]
Just cleanup, no logic change. Signed-off-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: fix to do sanity check on last xattr entry in __f2fs_setxattr()  [Chao Yu]
As Wenqing Liu reported in bugzilla:
https://bugzilla.kernel.org/show_bug.cgi?id=215235

- Overview
page fault in f2fs_setxattr() when mounting and operating on a corrupted image

- Reproduce
tested on kernel 5.16-rc3, 5.15.X under root
1. unzip tmp7.zip
2. ./single.sh f2fs 7
Sometimes the script needs to be run several times.

- Kernel dump
  loop0: detected capacity change from 0 to 131072
  F2FS-fs (loop0): Found nat_bits in checkpoint
  F2FS-fs (loop0): Mounted with checkpoint version = 7548c2ee
  BUG: unable to handle page fault for address: ffffe47bc7123f48
  RIP: 0010:kfree+0x66/0x320
  Call Trace:
    __f2fs_setxattr+0x2aa/0xc00 [f2fs]
    f2fs_setxattr+0xfa/0x480 [f2fs]
    __f2fs_set_acl+0x19b/0x330 [f2fs]
    __vfs_removexattr+0x52/0x70
    __vfs_removexattr_locked+0xb1/0x140
    vfs_removexattr+0x56/0x100
    removexattr+0x57/0x80
    path_removexattr+0xa3/0xc0
    __x64_sys_removexattr+0x17/0x20
    do_syscall_64+0x37/0xb0
    entry_SYSCALL_64_after_hwframe+0x44/0xae

The root cause is that __f2fs_setxattr() missed a sanity check on the last xattr entry, resulting in out-of-bounds memory access while updating inconsistent xattr data of the target inode.
After the fix, it can detect such xattr inconsistency as below:
  F2FS-fs (loop11): inode (7) has invalid last xattr entry, entry_size: 60676
  F2FS-fs (loop11): inode (8) has corrupted xattr
  F2FS-fs (loop11): inode (8) has corrupted xattr
  F2FS-fs (loop11): inode (8) has invalid last xattr entry, entry_size: 47736
Cc: stable@vger.kernel.org
Reported-by: Wenqing Liu <wenqingliu0120@gmail.com>
Signed-off-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
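A minimal sketch of the kind of last-entry bounds check this fix describes; the struct layout and helper below are illustrative, not the actual f2fs definitions:

  /* Walk the xattr entries and verify that each entry, including the
   * terminating one, stays inside the xattr buffer. */
  struct xattr_entry_sketch {
          __u8   e_name_index;
          __u8   e_name_len;
          __le16 e_value_size;
          char   e_name[];
  };

  static int validate_xattr_block(void *base, size_t size)
  {
          char *cur = base, *end = (char *)base + size;

          while (cur + sizeof(struct xattr_entry_sketch) <= end) {
                  struct xattr_entry_sketch *e = (void *)cur;
                  size_t entry_size;

                  if (e->e_name_len == 0 && e->e_value_size == 0)
                          return 0;               /* terminating entry found in bounds */

                  entry_size = sizeof(*e) + e->e_name_len +
                               le16_to_cpu(e->e_value_size);
                  if (cur + entry_size > end)
                          return -EFSCORRUPTED;   /* entry claims to extend past the block */
                  cur += entry_size;
          }
          return -EFSCORRUPTED;                   /* no terminating entry within bounds */
  }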
2022-01-04  f2fs: do not bother checkpoint by f2fs_get_node_info  [Jaegeuk Kim]
This patch tries to mitigate lock contention between f2fs_write_checkpoint and f2fs_get_node_info on nat_tree_lock. The idea is that, if a checkpoint is currently running, other threads trying to grab nat_tree_lock are better off waiting for the checkpoint to finish.
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-01-04  f2fs: avoid down_write on nat_tree_lock during checkpoint  [Jaegeuk Kim]
Cache the nat entry only when there is no lock contention.
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
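A rough sketch of the "cache only without contention" idea, assuming an rw_semaphore-protected NAT cache; __cache_nat_entry() is a hypothetical helper, not the exact f2fs code:

  static void cache_nat_entry_sketch(struct f2fs_nm_info *nm_i, nid_t nid,
                                     struct f2fs_nat_entry *ne)
  {
          /* Caching is only an optimization: if a checkpoint (or anyone
           * else) holds nat_tree_lock, skip caching rather than block. */
          if (!down_write_trylock(&nm_i->nat_tree_lock))
                  return;

          __cache_nat_entry(nm_i, nid, ne);       /* hypothetical insert helper */
          up_write(&nm_i->nat_tree_lock);
  }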
2022-01-04  Merge tag 'clk-v5.17-samsung' of https://git.kernel.org/pub/scm/linux/kernel/git/snawrocki/clk into clk-samsung  [Stephen Boyd]
Pull Samsung clk driver updates from Sylwester Nawrocki:
 - removal of all remaining uses of __clk_lookup() in drivers/clk/samsung
 - refactoring of the CPU clocks registration to use common interface
 - an update of the Exynos850 driver (support for more clock domains) required by the E850-96 development board
 - initial clock driver for the Exynos7885 SoC (Samsung Galaxy A8)

* tag 'clk-v5.17-samsung' of https://git.kernel.org/pub/scm/linux/kernel/git/snawrocki/clk:
  clk: samsung: Add initial Exynos7885 clock driver
  clk: samsung: clk-pll: Add support for pll1417x
  clk: samsung: Make exynos850_register_cmu shared
  dt-bindings: clock: Document Exynos7885 CMU bindings
  dt-bindings: clock: Add bindings definitions for Exynos7885 CMU
  clk: samsung: exynos850: Add missing sysreg clocks
  dt-bindings: clock: Add bindings for Exynos850 sysreg clocks
  clk: samsung: exynos850: Register clocks early
  clk: samsung: exynos850: Keep some crucial clocks running
  clk: samsung: exynos850: Implement CMU_CMGP domain
  dt-bindings: clock: Add bindings for Exynos850 CMU_CMGP
  clk: samsung: exynos850: Implement CMU_APM domain
  dt-bindings: clock: Add bindings for Exynos850 CMU_APM
  clk: samsung: Update CPU clk registration
  clk: samsung: Remove meaningless __init and extern from header files
  clk: samsung: remove __clk_lookup() usage
  dt-bindings: clock: samsung: add IDs for some core clocks
2022-01-04  ACPI: PCC: Implement OperationRegion handler for the PCC Type 3 subtype  [Sudeep Holla]
PCC OpRegion provides a mechanism to communicate with the platform directly from AML. The PCCT provides the list of PCC channels available in the platform; a subset or all of them can be used in a PCC OpRegion.
This patch registers the PCC OpRegion handler before the ACPI tables are loaded. It relies on the special context data passed to identify and set up the PCC channel before the OpRegion handler is executed for the first time.
A typical PCC OpRegion declaration looks like this:

  OperationRegion (PFRM, PCC, 2, 0x74)
  Field (PFRM, ByteAcc, NoLock, Preserve)
  {
      SIGN, 32,
      FLGS, 32,
      LEN, 32,
      CMD, 32,
      DATA, 800
  }

It contains four named double words followed by 100 bytes of buffer named DATA.
ASL can fill out the buffer something like:

  /* Create global or local buffer */
  Name (BUFF, Buffer (0x0C){})
  /* Create double word fields over the buffer */
  CreateDWordField (BUFF, 0x0, WD0)
  CreateDWordField (BUFF, 0x04, WD1)
  CreateDWordField (BUFF, 0x08, WD2)

  /* Fill the named fields */
  WD0 = 0x50434300
  SIGN = BUFF
  WD0 = 1
  FLGS = BUFF
  WD0 = 0x10
  LEN = BUFF

  /* Fill the payload in the DATA buffer */
  WD0 = 0
  WD1 = 0x08
  WD2 = 0
  DATA = BUFF

  /* Write to the CMD field to trigger the handler */
  WD0 = 0x4404
  CMD = BUFF

This buffer is received by acpi_pcc_opregion_space_handler. This handler fetches the complete buffer via internal_pcc_buffer.
The setup handler receives the special PCC context data, which contains the PCC channel index that is used to set up the channel. The buffer pointer and length are saved in the region context, which is then used in the handler.
(kernel test robot: Build failure with CONFIG_ACPI_DEBUGGER)
Link: https://lore.kernel.org/r/202201041539.feAV0l27-lkp@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
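A hedged sketch of how such an OperationRegion handler could be registered through ACPICA; the handler and setup bodies are stubs and the real driver's details differ (ACPI_ADR_SPACE_PLATFORM_COMM is the PCC address-space type):

  #include <linux/acpi.h>

  static acpi_status
  pcc_opregion_handler(u32 function, acpi_physical_address addr, u32 bits,
                       u64 *value, void *handler_ctx, void *region_ctx)
  {
          /* Copy data to/from the PCC shared buffer and ring the doorbell
           * when the CMD field is written (elided in this sketch). */
          return AE_OK;
  }

  static acpi_status
  pcc_opregion_setup(acpi_handle region, u32 function, void *handler_ctx,
                     void **region_ctx)
  {
          /* Use the special context data to find the PCC channel index
           * from the PCCT and set up the channel on first use. */
          return AE_OK;
  }

  static int __init pcc_opregion_init(void)
  {
          acpi_status status;

          /* Registered before the ACPI tables are loaded, as described above. */
          status = acpi_install_address_space_handler(ACPI_ROOT_OBJECT,
                                                      ACPI_ADR_SPACE_PLATFORM_COMM,
                                                      pcc_opregion_handler,
                                                      pcc_opregion_setup, NULL);
          return ACPI_FAILURE(status) ? -ENODEV : 0;
  }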
2022-01-04  ieee802154: atusb: fix uninit value in atusb_set_extended_addr  [Pavel Skripkin]
Alexander reported a use of an uninitialized value in atusb_set_extended_addr(), caused by reading 0 bytes via usb_control_msg().
Fix it by validating that the number of bytes transferred is actually correct, since usb_control_msg() may read fewer bytes than the caller requested.
Fail log:
  BUG: KASAN: uninit-cmp in ieee802154_is_valid_extended_unicast_addr include/linux/ieee802154.h:310 [inline]
  BUG: KASAN: uninit-cmp in atusb_set_extended_addr drivers/net/ieee802154/atusb.c:1000 [inline]
  BUG: KASAN: uninit-cmp in atusb_probe.cold+0x29f/0x14db drivers/net/ieee802154/atusb.c:1056
  Uninit value used in comparison: 311daa649a2003bd stack handle: 000000009a2003bd
   ieee802154_is_valid_extended_unicast_addr include/linux/ieee802154.h:310 [inline]
   atusb_set_extended_addr drivers/net/ieee802154/atusb.c:1000 [inline]
   atusb_probe.cold+0x29f/0x14db drivers/net/ieee802154/atusb.c:1056
   usb_probe_interface+0x314/0x7f0 drivers/usb/core/driver.c:396
Fixes: 7490b008d123 ("ieee802154: add support for atusb transceiver")
Reported-by: Alexander Potapenko <glider@google.com>
Acked-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
Link: https://lore.kernel.org/r/20220104182806.7188-1-paskripkin@gmail.com
Signed-off-by: Stefan Schmidt <stefan@datenfreihafen.org>
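A sketch of the validation pattern the fix describes: treat a short usb_control_msg() read as an error instead of consuming a partially uninitialized buffer. The request constant below is hypothetical, not the actual atusb protocol value:

  #include <linux/usb.h>
  #include <linux/ieee802154.h>

  static int read_extended_addr_sketch(struct usb_device *udev, u8 *buf)
  {
          int ret;

          ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
                                0x01 /* hypothetical request */,
                                USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
                                0, 0, buf, IEEE802154_EXTENDED_ADDR_LEN, 1000);
          if (ret < 0)
                  return ret;     /* transfer failed outright */
          if (ret != IEEE802154_EXTENDED_ADDR_LEN)
                  return -EIO;    /* short read: buf is not fully initialized */
          return 0;
  }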
2022-01-04  dm space map common: add bounds check to sm_ll_lookup_bitmap()  [Joe Thornber]
Corrupted metadata could warrant returning error from sm_ll_lookup_bitmap(). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2022-01-04  dm btree: add a defensive bounds check to insert_at()  [Joe Thornber]
Corrupt metadata could trigger an out of bounds write. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
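A minimal sketch of such a defensive check: refuse the insert when the index or entry count is out of range instead of writing out of bounds. The field names roughly follow the dm-btree node layout but are illustrative:

  static int insert_at_checked(struct btree_node *node, unsigned int index,
                               unsigned int max_entries)
  {
          unsigned int nr = le32_to_cpu(node->header.nr_entries);

          if (index > nr || nr >= max_entries) {
                  DMERR("%s: node full or index out of range", __func__);
                  return -ENOMEM; /* corrupt metadata: fail instead of corrupting memory */
          }
          /* ... shift entries up and store the new key/value here ... */
          return 0;
  }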
2022-01-04  dm btree remove: change a bunch of BUG_ON() calls to proper errors  [Joe Thornber]
Abuse of BUG_ON() is never appropriate, best to propagate errors to fail gracefully (rather than take the entire system down). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2022-01-04  truncate: Add truncate_cleanup_folio()  [Matthew Wilcox (Oracle)]
Convert both callers of truncate_cleanup_page() to use truncate_cleanup_folio() instead. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Add filemap_release_folio()  [Matthew Wilcox (Oracle)]
Reimplement try_to_release_page() as a wrapper around filemap_release_folio(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
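The wrapper likely amounts to a one-line conversion; a sketch of the pattern, assuming this is the shape of the change:

  bool try_to_release_page(struct page *page, gfp_t gfp)
  {
          /* convert to the folio-based implementation */
          return filemap_release_folio(page_folio(page), gfp);
  }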
2022-01-04  filemap: Use a folio in filemap_page_mkwrite  [Matthew Wilcox (Oracle)]
This fixes a bug for tail pages. They always have a NULL mapping, so the check would fail and we would never mark the folio as dirty. Ends up growing the kernel by 19 bytes although there will be fewer calls to compound_head() dynamically. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Use a folio in filemap_map_pages  [Matthew Wilcox (Oracle)]
Saves 61 bytes due to fewer calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Use folios in next_uptodate_page  [Matthew Wilcox (Oracle)]
This saves 105 bytes of text. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert page_cache_delete_batch to folios  [Matthew Wilcox (Oracle)]
Saves one call to compound_head() and reduces text size by 15 bytes. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert filemap_get_pages to use folios  [Matthew Wilcox (Oracle)]
This saves a few calls to compound_head(), including one in filemap_update_page(). Shrinks the kernel by 78 bytes. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Drop the refcount while waiting for page lock  [Matthew Wilcox (Oracle)]
Commit bd8a1f3655a7 ("mm/filemap: support readpage splitting a page") changed the read_iter path to drop the refcount while waiting for the page lock. However, it missed the same pattern in read_mapping_page() and friends. Use the same pattern in do_read_cache_folio() that is used in filemap_update_page(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Add read_cache_folio and read_mapping_folio  [Matthew Wilcox (Oracle)]
Reimplement read_cache_page() as a wrapper around read_cache_folio(). Saves over 400 bytes of text from do_read_cache_folio() which more than makes up for the extra 100 bytes of text added to the various wrapper functions. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert filemap_fault to folio  [Matthew Wilcox (Oracle)]
Instead of converting back-and-forth between the actual page and the head page, just convert once at the end of the function where we set the vmf->page. Saves 241 bytes of text, or 15% of the size of filemap_fault(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert do_async_mmap_readahead to take a folio  [Matthew Wilcox (Oracle)]
Call page_cache_async_ra() directly instead of indirecting through page_cache_async_readahead(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  readahead: Convert page_cache_ra_unbounded to folios  [Matthew Wilcox (Oracle)]
This saves 99 bytes of kernel text. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  readahead: Convert page_cache_async_ra() to take a folio  [Matthew Wilcox (Oracle)]
Using the folio here avoids checking whether it's a tail page. This patch mostly just enables some of the following patches. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert filemap_range_uptodate to folios  [Matthew Wilcox (Oracle)]
The only caller was already passing a head page, so this simply avoids a call to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert filemap_create_page to folio  [Matthew Wilcox (Oracle)]
This is all internal to filemap and saves 100 bytes of text. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert filemap_read_page to take a folio  [Matthew Wilcox (Oracle)]
One of the callers already had a folio; the other two grow by a few bytes, but filemap_read_page() shrinks by 50 bytes for a net reduction of 27 bytes. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert find_get_pages_contig to folios  [Matthew Wilcox (Oracle)]
None of the callers of find_get_pages_contig() want tail pages. They all use order-0 pages today, but if they were converted, they'd want folios. So just remove the call to find_subpage() instead of replacing it with folio_page(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert filemap_get_read_batch to use folios  [Matthew Wilcox (Oracle)]
The page cache only stores folios, never tail pages. Saves 29 bytes due to removing calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Remove thp_contains()  [Matthew Wilcox (Oracle)]
This function is now unused, so delete it. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert find_get_entry to return a folio  [Matthew Wilcox (Oracle)]
Convert callers to cope. Saves 580 bytes of kernel text; all five callers are reduced in size. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Add filemap_remove_folio and __filemap_remove_folio  [Matthew Wilcox (Oracle)]
Reimplement __delete_from_page_cache() as a wrapper around __filemap_remove_folio() and delete_from_page_cache() as a wrapper around filemap_remove_folio(). Remove the EXPORT_SYMBOL as delete_from_page_cache() was not used by any in-tree modules. Convert page_cache_free_page() into filemap_free_folio(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert tracing of page cache operations to folio  [Matthew Wilcox (Oracle)]
Pass the folio instead of a page. The page was already implicitly a folio as it accessed page->mapping directly. Add the order of the folio to the tracepoint, as this is important information. Also drop printing the address of the struct page as the pfn provides better information than the struct page address. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Add filemap_unaccount_folio()  [Matthew Wilcox (Oracle)]
Replace unaccount_page_cache_page() with filemap_unaccount_folio(). The bug handling path could be a bit more robust (eg taking into account the mapcounts of tail pages), but it's really never supposed to happen. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Convert page_cache_delete to take a folio  [Matthew Wilcox (Oracle)]
It was already assuming a head page, so this is a straightforward conversion. Convert the one caller to call page_folio(), even though it must currently be passing in a head page. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  filemap: Add folio_put_wait_locked()  [Matthew Wilcox (Oracle)]
Convert all three callers of put_and_wait_on_page_locked() to folio_put_wait_locked(). This shrinks the kernel overall by 19 bytes. filemap_update_page() shrinks by 19 bytes while __migration_entry_wait() is unchanged. folio_put_wait_locked() is 14 bytes smaller than put_and_wait_on_page_locked(), but pmd_migration_entry_wait() grows by 14 bytes. It removes the assumption from pmd_migration_entry_wait() that pages cannot be larger than a PMD (which is true today, but may be interesting to explore in the future). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  mm: Add folio_test_pmd_mappable()  [Matthew Wilcox (Oracle)]
Add a predicate to determine if the folio might be mapped by a PMD entry. If CONFIG_TRANSPARENT_HUGEPAGE is disabled, we know it can't be, even if it's large enough. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
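A sketch of what the predicate plausibly looks like; with CONFIG_TRANSPARENT_HUGEPAGE disabled the kernel can simply report false instead:

  static inline bool folio_test_pmd_mappable(struct folio *folio)
  {
          /* Only folios of at least PMD size can be mapped by a PMD entry. */
          return folio_order(folio) >= HPAGE_PMD_ORDER;
  }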
2022-01-04  iov_iter: Convert iter_xarray to use folios  [Matthew Wilcox (Oracle)]
Take advantage of how kmap_local_folio() works to simplify the loop. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
2022-01-04  iov_iter: Add copy_folio_to_iter()  [Matthew Wilcox (Oracle)]
This wrapper around copy_page_to_iter() works because copy_page_to_iter() handles compound pages correctly. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by: William Kucharski <william.kucharski@oracle.com>
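A sketch of the wrapper: since copy_page_to_iter() already copes with compound pages, the folio variant can just pass the folio's head page (assuming this mirrors the actual helper):

  static inline size_t copy_folio_to_iter(struct folio *folio, size_t offset,
                                          size_t bytes, struct iov_iter *i)
  {
          return copy_page_to_iter(&folio->page, offset, bytes, i);
  }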
2022-01-04  dm btree spine: eliminate duplicate le32_to_cpu() in node_check()  [Joe Thornber]
Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2022-01-04  dm btree spine: remove extra node_check function declaration  [Joe Thornber]
Should have been removed as part of commit f73e2e70ec48 ("dm btree spine: remove paranoid node_check call in node_prep_for_write()") Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2022-01-04  Merge branch kvm-arm64/misc-5.17 into kvmarm-master/next  [Marc Zyngier]
* kvm-arm64/misc-5.17:
  Misc fixes and improvements:
  - Add minimal support for ARMv8.7's PMU extension
  - Constify kvm_io_gic_ops
  - Drop kvm_is_transparent_hugepage() prototype
  - Drop unused workaround_flags field
  - Rework kvm_pgtable initialisation
  - Documentation fixes
  - Replace open-coded SCTLR_EL1.EE usage with its defined macro
  - Sysreg list selftest update to handle PAuth
  - Include cleanups

  KVM: arm64: vgic: Replace kernel.h with the necessary inclusions
  KVM: arm64: Fix comment typo in kvm_vcpu_finalize_sve()
  KVM: arm64: selftests: get-reg-list: Add pauth configuration
  KVM: arm64: Fix comment on barrier in kvm_psci_vcpu_on()
  KVM: arm64: Fix comment for kvm_reset_vcpu()
  KVM: arm64: Use defined value for SCTLR_ELx_EE
  KVM: arm64: Rework kvm_pgtable initialisation
  KVM: arm64: Drop unused workaround_flags vcpu field

Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-01-04  KVM: arm64: vgic: Replace kernel.h with the necessary inclusions  [Andy Shevchenko]
arm_vgic.h does not require all the stuff that kernel.h provides. Replace kernel.h inclusion with the list of what is really being used. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220104151940.55399-1-andriy.shevchenko@linux.intel.com
2022-01-04  EDAC/i10nm: Release mdev/mbase when failing to detect HBM  [Qiuxu Zhuo]
On systems without HBM (High Bandwidth Memory) mdev/mbase are not released/unmapped. Add the code to release mdev/mbase when failing to detect HBM. [Tony: re-word commit message] Cc: <stable@vger.kernel.org> Fixes: c945088384d0 ("EDAC/i10nm: Add support for high bandwidth memory") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Signed-off-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20211224091126.1246-1-qiuxu.zhuo@intel.com
2022-01-04  Merge tag 'mac80211-next-for-net-next-2022-01-04' of  [Jakub Kicinski]
git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next Johannes Berg says: ==================== Just a few more changes: - mac80211: allow non-standard VHT MCSes 10/11 - mac80211: add sleepable station iterator for drivers - nl80211: clarify a comment - mac80211: small cleanup to use typed element helpers * tag 'mac80211-next-for-net-next-2022-01-04' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next: mac80211: use ieee80211_bss_get_elem() nl80211: clarify comment for mesh PLINK_BLOCKED state mac80211: Add stations iterator where the iterator function may sleep mac80211: allow non-standard VHT MCS-10/11 ==================== Link: https://lore.kernel.org/r/20220104153403.69749-1-johannes@sipsolutions.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>