2025-05-15  btrfs: send: remove the again label inside put_file_data()  (Qu Wenruo)

The again label is here to retry getting the folio for the current index. When triggering that label, there is no advance of the iterator, so it can be replaced by a simple "continue" and the again label removed.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

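As a sketch of the conversion (illustrative only; the loop body and read_and_wait_folio() below are simplified stand-ins, not the real put_file_data() code):

    while (index <= last_index) {
            struct folio *folio = filemap_get_folio(mapping, index);

            if (IS_ERR(folio))
                    return PTR_ERR(folio);
            if (!folio_test_uptodate(folio)) {
                    /* Was: read the folio and "goto again;" to retry the
                     * same index. The iterator does not advance, so a
                     * plain continue behaves identically. */
                    read_and_wait_folio(folio);     /* hypothetical helper */
                    folio_put(folio);
                    continue;
            }
            /* ... copy the folio's data into the send buffer ... */
            folio_put(folio);
            index++;
    }
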
2025-05-15  btrfs: use BTRFS_PATH_AUTO_FREE in btrfs_insert_inode_extref()  (David Sterba)

This is the trivial pattern for path auto free, initialize at the beginning and free at the end with simple goto -> return conversions.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

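Sketched below is the before/after shape of this conversion (and the analogous ones that follow), assuming BTRFS_PATH_AUTO_FREE declares a path variable that is freed automatically when it goes out of scope; the work functions are hypothetical stand-ins:

    /* Before: free at a label, reached via goto. */
    struct btrfs_path *path = btrfs_alloc_path();
    int ret;

    if (!path)
            return -ENOMEM;
    ret = insert_the_item(trans, path);     /* hypothetical */
    if (ret)
            goto out;
    ret = update_the_index(trans, path);    /* hypothetical */
    out:
            btrfs_free_path(path);
            return ret;

    /* After: auto free on scope exit, gotos become plain returns. */
    BTRFS_PATH_AUTO_FREE(path);
    int ret;

    path = btrfs_alloc_path();
    if (!path)
            return -ENOMEM;
    ret = insert_the_item(trans, path);
    if (ret)
            return ret;
    return update_the_index(trans, path);
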
2025-05-15  btrfs: use BTRFS_PATH_AUTO_FREE in btrfs_del_inode_extref()  (David Sterba)

This is the trivial pattern for path auto free, initialize at the beginning and free at the end with simple goto -> return conversions.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: use BTRFS_PATH_AUTO_FREE in btrfs_encoded_read_inline()  (David Sterba)

This is the trivial pattern for path auto free, initialize at the beginning and free at the end with simple goto -> return conversions.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: use BTRFS_PATH_AUTO_FREE in can_nocow_extent()  (David Sterba)

This is the trivial pattern for path auto free, initialize at the beginning and free at the end with simple goto -> return conversions.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: use BTRFS_PATH_AUTO_FREE in btrfs_set_inode_index_count()  (David Sterba)

This is the trivial pattern for path auto free, initialize at the beginning and free at the end with simple goto -> return conversions.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: use BTRFS_PATH_AUTO_FREE in may_destroy_subvol()  (David Sterba)

This is the trivial pattern for path auto free, initialize at the beginning and free at the end with simple goto -> return conversions.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: do more trivial BTRFS_PATH_AUTO_FREE conversions  (David Sterba)

The most trivial pattern for the auto freeing: the variable is declared with the macro and the final btrfs_free_path() is removed. There are almost no goto -> return conversions and there's no other function cleanup.

Reviewed-by: Daniel Vacek <neelx@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove redundant record start offset check at test_range_bit()  (Filipe Manana)

It's pointless to check if the current record's start offset is greater than the end offset, as just before we tested if it was greater than the start offset - and if it's not, it's less than or equal to the start offset, which in turn is always smaller than the end offset, so it can not be greater than the end offset. So remove that check and also add an assertion to verify that the start offset is smaller than the end offset.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: simplify last record detection at test_range_bit()  (Filipe Manana)

The overflow detection for the start offset of the next record is not really necessary, we can just stop iterating if the current record ends at or after our end offset. This removes the need to test if the current record end offset is (u64)-1 and to check if adding 1 to the current end offset results in 0. By testing only if the current record ends at or after the end offset, we also no longer need to test the new start offset at the head of the while loop.

This makes both the source code and assembly code simpler, more efficient and shorter (reducing the object text size).

Also remove the pointless initialization to NULL of the state variable, as we don't use it before the first assignment to it. This may help avoid some warnings with clang tools such as the one reported/fixed by commit 966de47ff0c9 ("btrfs: remove redundant initialization of variables in log_new_ancestors").

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove redundant check at find_first_extent_bit_state()  (Filipe Manana)

The tree_search() function always returns an entry that either contains the search offset or the first entry in the tree that starts after the offset. So checking at find_first_extent_bit_state() if the returned entry ends at or after the search offset is pointless. Remove the check.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: fix documentation for tree_search_for_insert()  (Filipe Manana)

There are several things wrong with the documentation:

1) At the top it's only mentioned that we search for an entry containing the given offset, but when such an entry does not exist we search for the first entry that starts and ends after that offset;

2) It mentions that @node_ret and @parent_ret aren't changed if the returned entry contains the given offset - that is true only if the returned entry starts exactly at @offset, otherwise those arguments are changed;

3) It mentions that if no entry containing offset is found then we return the first entry ending before the offset - that is not true, we return the first entry that starts and ends after that offset;

4) It also mentions that NULL is never returned. This is false, as in case there's no entry containing offset, nor any entry that starts and ends after offset, NULL is returned.

So fix the documentation.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: simplify last record detection at test_range_bit_exists()  (Filipe Manana)

Instead of keeping track of the minimum start offset of the next record and detecting overflow every time we update that offset to be the sum of the current record's end offset plus one, we can simply exit when the current record ends at or beyond our end offset, and forget about updating the start offset on every iteration and testing for it at the top of the loop.

This makes both the source code and assembly code simpler, more efficient and shorter (reducing the object text size).

Also remove the pointless initialization to NULL of the state variable, as we don't use it before the first assignment to it. This may help avoid some warnings with clang tools such as the one reported/fixed by commit 966de47ff0c9 ("btrfs: remove redundant initialization of variables in log_new_ancestors").

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

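A sketch of the simplified loop (names are illustrative of the extent io tree code, not exact):

    /* Walk records overlapping [start, end]; exit on the record that
     * reaches the end offset instead of computing the next start. */
    for (state = tree_search(tree, start); state; state = next_state(state)) {
            if (state->start > end)
                    break;          /* record starts past the range */
            if (state->state & bit) {
                    bit_exists = true;
                    break;
            }
            if (state->end >= end)
                    break;          /* was: start = state->end + 1 plus an
                                       overflow check against (u64)-1 */
    }
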
2025-05-15  btrfs: use clear_extent_bits() instead of clear_extent_bit() where possible  (Filipe Manana)

Several places are using clear_extent_bit() and passing a NULL value for the 'cached' argument, which is pointless as they can instead use clear_extent_bits().

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

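For context, clear_extent_bits() is understood here to be a thin wrapper that supplies the NULL cached-state argument; a sketch of the relationship and the resulting call-site change:

    static inline int clear_extent_bits(struct extent_io_tree *tree, u64 start,
                                        u64 end, u32 bits)
    {
            return clear_extent_bit(tree, start, end, bits, NULL);
    }

    /* So the cleanup is a call-site change like: */
    clear_extent_bit(&inode->io_tree, start, end, EXTENT_DELALLOC, NULL);
    /* becomes */
    clear_extent_bits(&inode->io_tree, start, end, EXTENT_DELALLOC);
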
2025-05-15  btrfs: use clear_extent_bits() at chunk_map_device_clear_bits()  (Filipe Manana)

Instead of using __clear_extent_bit() we can use clear_extent_bits() since we pass a NULL value for the cached and changeset arguments.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: use clear_extent_bit() at try_release_extent_state()  (Filipe Manana)

Instead of using __clear_extent_bit() we can use clear_extent_bit() since we pass a NULL value for the changeset argument.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: fix the file offset calculation inside btrfs_decompress_buf2page()  (Qu Wenruo)

[BUG WITH EXPERIMENTAL LARGE FOLIOS]
When testing the experimental large data folio support with compression, there are several ASSERT()s triggered from btrfs_decompress_buf2page() when running fsstress with the compress=zstd mount option:

- ASSERT(copy_len) from btrfs_decompress_buf2page()
- VM_BUG_ON(offset + len > PAGE_SIZE) from memcpy_to_page()

[CAUSE]
Inside btrfs_decompress_buf2page(), we need to grab the file offset from the current bvec.bv_page, to check if we even need to copy data into the bio. And since we're using single page bvecs, with no large folios, every page inside the folio should have its index properly set up.

But when large folios are involved, only the first page (aka, the head page) of a large folio has its index properly initialized. The other pages inside the large folio will not have their indexes properly initialized. Thus the page_offset() call inside btrfs_decompress_buf2page() will result in garbage, and completely screw up the @copy_len calculation.

[FIX]
Instead of using page->index directly, go with page_pgoff(), which can handle non-head pages correctly. So introduce a helper, file_offset_from_bvec(), to get the file offset from a single page bio_vec, so the copy_len calculation can be done correctly.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: use rb_entry_safe() where possible to simplify code  (David Sterba)

Simplify conditionally reading an rb_entry(), there's the rb_entry_safe() helper that checks the node pointer for NULL so we don't have to write it explicitly.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>

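The idiom, sketched with a hypothetical struct my_entry:

    /* Before: explicit NULL test before rb_entry(). */
    struct rb_node *node = rb_first(&root);
    struct my_entry *entry = node ? rb_entry(node, struct my_entry, rb_node)
                                  : NULL;

    /* After: rb_entry_safe() folds the NULL check in. */
    struct my_entry *entry = rb_entry_safe(rb_first(&root),
                                           struct my_entry, rb_node);
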
2025-05-15  btrfs: pass a pointer to get_range_bits() to cache first search result  (Filipe Manana)

Allow get_range_bits() to take an extent state pointer to pointer argument so that we can cache the first extent state record in the target range, so that a caller can use it for subsequent operations without doing a full tree search. Currently the only user is try_release_extent_state(), which then does a call to __clear_extent_bit() which can use such a cached state record.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: allow folios to be released while ordered extent is finishing  (Filipe Manana)

When the release_folio callback (from struct address_space_operations) is invoked we don't allow the folio to be released if its range is currently locked in the inode's io_tree, as it may indicate the folio may be needed by the task that locked the range.

However if the range is locked because an ordered extent is finishing, then we can safely allow the folio to be released, because ordered extent completion doesn't need to use the folio at all.

When we are under memory pressure, the kernel starts writeback of dirty pages (folios) with the goal of releasing the pages from the page cache after writeback completes, however this often is not possible on btrfs because:

* Once the writeback completes we queue the ordered extent completion;

* Once the ordered extent completion starts, we lock the range in the inode's io_tree (at btrfs_finish_one_ordered());

* If the release_folio callback is called while the folio's range is locked in the inode's io_tree, we don't allow the folio to be released, so the kernel has to try to release memory elsewhere, which may result in triggering more writeback or releasing other pages from the page cache which may be more useful to have around for applications.

In contrast, when the release_folio callback is invoked after writeback finishes and before ordered extent completion starts or locks the range, we allow the folio to be released, as well as when the release_folio callback is invoked after ordered extent completion unlocks the range.

Improve on this by detecting if the range is locked for ordered extent completion and if it is, allow the folio to be released. This detection is achieved by adding a new extent flag in the io_tree that is set when the range is locked during ordered extent completion.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

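A sketch of the resulting release_folio check; the flag and helper names here are hypothetical placeholders for whatever the patch actually introduces:

    if (range_locked_in_io_tree(io_tree, start, end)) {
            /* EXTENT_FINISHING_ORDERED is a hypothetical name for the new
             * bit set while ordered extent completion holds the lock. */
            if (test_range_bit_exists(io_tree, start, end,
                                      EXTENT_FINISHING_ORDERED))
                    return true;    /* completion doesn't need the folio */
            return false;           /* locked for another reason, keep it */
    }
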
2025-05-15  btrfs: update comment for try_release_extent_state()  (Filipe Manana)

Drop reference to pages from the comment since the function is fully folio aware and works regardless of how many pages are in the folio. Also while at it, capitalize the first word and make it more explicit that release_folio is a callback from struct address_space_operations.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: prepare btrfs_punch_hole_lock_range() for large data folios  (Qu Wenruo)

The function btrfs_punch_hole_lock_range() needs to make sure there is no other folio in the range, thus it goes with filemap_range_has_page(), which works pretty fine.

But if we have large folios, under the following case filemap_range_has_page() will always return true, forcing btrfs_punch_hole_lock_range() to do a very time consuming busy loop:

        start                          end
          |                             |
       |//|//|//|//| | | | | | | | |//|//|
        \         /                 \   /
          Folio A                  Folio B

In the above case, folio A and B contain our start/end indexes, and there are no other folios in the range. Thus we do not need to retry inside btrfs_punch_hole_lock_range().

To prepare for large data folios, introduce a helper, check_range_has_page(), which will:

- Shrink the search range towards page boundaries
  If the rounded down end (exclusive, otherwise it can underflow when @end is inside the folio at file offset 0) is no larger than the rounded up start, it means the range contains no other pages other than the ones covering @start and @end. Can return false directly in that case (see the arithmetic sketch after this message).

- Grab all the folios inside the range

- Skip any large folios that cover the start and end indexes

- If any other folios are found return true

- Otherwise return false

This new helper is going to handle both large folios and regular ones.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

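The boundary shrink from the first bullet, as arithmetic (a sketch; variable names are illustrative):

    /* First byte of the page after the one covering @start. */
    u64 page_start = round_up(start + 1, PAGE_SIZE);
    /* Start of the page covering @end, used as an exclusive bound;
     * rounding down (not computing an inclusive end) avoids underflow
     * when @end is inside the folio at file offset 0. */
    u64 page_end = round_down(end, PAGE_SIZE);

    if (page_end <= page_start)
            return false;   /* no pages other than those covering @start/@end */
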
2025-05-15  btrfs: prepare btrfs_buffered_write() for large data folios  (Qu Wenruo)

This involves the following modifications:

- Set the order flags for __filemap_get_folio() inside prepare_one_folio()
  This will allow __filemap_get_folio() to create a large folio if the address space supports it.

- Limit the initial @write_bytes inside copy_one_range()
  If the largest folio boundary splits the initial write range, there is no way we can write beyond the largest folio boundary. This is done by a simple helper, calc_write_bytes() (sketched after this message).

- Release exceeding reserved space if the folio is smaller than expected
  This does the same handling as when a short copy happens.

All the preparations should not change the behavior when the largest folio order is 0.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

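A sketch of the boundary clamp, assuming the largest folio is PAGE_SIZE << MAX_PAGECACHE_ORDER bytes (the real helper's arguments may differ):

    static size_t calc_write_bytes(loff_t pos, size_t write_bytes)
    {
            u64 boundary = round_up(pos + 1,
                                    PAGE_SIZE << MAX_PAGECACHE_ORDER);

            /* No single copy may cross the largest folio boundary. */
            return min_t(u64, write_bytes, boundary - pos);
    }
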
2025-05-15  btrfs: refactor how we handle reserved space inside copy_one_range()  (Qu Wenruo)

There are several things not ideal in copy_one_range():

- Unnecessary temporary variables
  * block_offset
  * reserve_bytes
  * dirty_blocks
  * num_blocks
  * release_bytes
  These are utilized to handle short-copy cases.

- Inconsistent handling of btrfs_delalloc_release_extents()
  There is a hidden behavior that, after reserving metadata for X bytes of data write, we have to call btrfs_delalloc_release_extents() with X once and only once. Calling btrfs_delalloc_release_extents(X - 4K) and btrfs_delalloc_release_extents(4K) will cause outstanding extents accounting to go wrong. This is because the outstanding extents mechanism is not designed to handle shrinking of reserved space.

Improve the above situations by:

- Using a single @reserved_start and @reserved_len pair
  Now we reserve space for the initial range, and if a short copy happens and we need to shrink the reserved space, we can easily calculate the new length and update @reserved_len.

- Introducing helpers to shrink reserved data and metadata space
  This is done by two new helpers, shrink_reserved_space() and btrfs_delalloc_shrink_extents(). The latter will do a better calculation if we need to modify the outstanding extents, and the former will be utilized inside copy_one_range().

- Manually unlocking, releasing reserved space and returning if no byte is copied

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove EXTENT_UPTODATE io tree flag  (Filipe Manana)

The EXTENT_UPTODATE io tree flag is now used only to mark ranges in the fs_info->excluded_extents as used by super blocks and not available for extent allocation (to prevent adding those ranges as free space in the in memory space caches). As we can use any flag for that purpose, and we are using EXTENT_DIRTY for the pinned extents io tree for example, remove the EXTENT_UPTODATE flag and use instead EXTENT_DIRTY for the excluded extents io tree.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: stop searching for EXTENT_DIRTY bit in the excluded extents io tree  (Filipe Manana)

At btrfs_add_new_free_space() we keep searching for ranges in the excluded extents io tree that have the EXTENT_DIRTY bit set, however we never ever set that bit for ranges in that tree. That is a leftover from when that function used the global freed extents trees (fs_info->freed_extents[2]), where we used both the EXTENT_DIRTY and EXTENT_UPTODATE bits, but those trees are gone with commit fe119a6eeb67 ("btrfs: switch to per-transaction pinned extents"), which introduced the fs_info->excluded_extents io tree, where only EXTENT_UPTODATE is set. So remove the EXTENT_DIRTY bit search at btrfs_add_new_free_space().

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove leftover EXTENT_UPTODATE clear from an inode's io_tree  (Filipe Manana)

After commit 52b029f42751 ("btrfs: remove unnecessary EXTENT_UPTODATE state in buffered I/O path") we never set EXTENT_UPTODATE in an inode's io_tree anymore, but we still have some code attempting to clear that bit from an inode's io_tree. Remove that code as it doesn't do anything anymore.

The sole use of the EXTENT_UPTODATE bit is for the excluded extents io_tree (fs_info->excluded_extents), which is used to track the locations of super blocks, so that their ranges are never marked as free, making them unavailable for extent allocation.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: fix fsync of files with no hard links not persisting deletion  (Filipe Manana)

If we fsync a file (or directory) that has no more hard links, because while a process had a file descriptor open on it, the file's last hard link was removed and then the process did an fsync against the file descriptor, after a power failure or crash the file still exists after replaying the log.

This behaviour is incorrect since once an inode has no more hard links it's not accessible anymore and we insert an orphan item into its subvolume's tree so that the deletion of all its items is not missed in case of a power failure or crash. So after log replay the file shouldn't exist anymore, which is also the behaviour on ext4, xfs, f2fs and other filesystems.

Fix this by not ignoring inodes with zero hard links at btrfs_log_inode_parent() and by committing an inode's delayed inode when we are not doing a fast fsync (either BTRFS_INODE_COPY_EVERYTHING or BTRFS_INODE_NEEDS_FULL_SYNC is set in the inode's runtime flags). This last step is necessary because when removing the last hard link we don't delete the corresponding ref (or extref) item, instead we record the change in the inode's delayed inode with the BTRFS_DELAYED_NODE_DEL_IREF flag, so that when the delayed inode is committed we delete the ref/extref item from the inode's subvolume tree - otherwise the logging code will log the last hard link and therefore upon log replay the inode is not deleted.

The base code for a fstests test case that reproduces this bug is the following:

   . ./common/dmflakey

   _require_scratch
   _require_dm_target flakey
   _require_mknod

   _scratch_mkfs >>$seqres.full 2>&1 || _fail "mkfs failed"
   _require_metadata_journaling $SCRATCH_DEV
   _init_flakey
   _mount_flakey

   touch $SCRATCH_MNT/foo

   # Commit the current transaction and persist the file.
   _scratch_sync

   # A fifo to communicate with a background xfs_io process that will
   # fsync the file after we deleted its hard link while it's open by
   # xfs_io.
   mkfifo $SCRATCH_MNT/fifo

   tail -f $SCRATCH_MNT/fifo | \
        $XFS_IO_PROG $SCRATCH_MNT/foo >>$seqres.full &
   XFS_IO_PID=$!

   # Give some time for the xfs_io process to open a file descriptor for
   # the file.
   sleep 1

   # Now while the file is open by the xfs_io process, delete its only
   # hard link.
   rm -f $SCRATCH_MNT/foo

   # Now that it has no more hard links, make the xfs_io process fsync it.
   echo "fsync" > $SCRATCH_MNT/fifo

   # Terminate the xfs_io process so that we can unmount.
   echo "quit" > $SCRATCH_MNT/fifo
   wait $XFS_IO_PID
   unset XFS_IO_PID

   # Simulate a power failure and then mount again the filesystem to
   # replay the journal/log.
   _flakey_drop_and_remount

   # We don't expect the file to exist anymore, since it was fsynced when
   # it had no more hard links.
   [ -f $SCRATCH_MNT/foo ] && echo "file foo still exists"

   _unmount_flakey

   # success, all done
   echo "Silence is golden"
   status=0
   exit

A test case for fstests will be submitted soon.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: fix typo in space info explanation  (Mark Harmstone)

There's an explanation of how space info works at the top of fs/btrfs/space-info.c, which makes reference to a variable called bytes_may_reserve. There's nothing called that in the code, and wasn't at the time the comment was written; as far as I can tell this is a typo, and it should actually be bytes_may_use.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Mark Harmstone <maharmstone@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove unused flag EXTENT_BUFFER_IN_TREE  (Daniel Vacek)

This flag is set after inserting the eb to the buffer tree and cleared on its removal. It was added in commit 34b41acec1ccc0 ("Btrfs: use a bit to track if we're in the radix tree") and commit faa2dbf004e89e ("Btrfs: add sanity tests for new qgroup accounting code") wanted to make use of it. Both are 10+ years old, we can remove the flag.

Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove unused flag EXTENT_BUFFER_CORRUPT  (Daniel Vacek)

This flag is no longer being used. It was added by commit a826d6dcb32d ("Btrfs: check items for correctness as we search") but it's no longer being used after commit f26c92386028 ("btrfs: remove reada infrastructure").

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove unused flag EXTENT_BUFFER_READAHEAD  (Daniel Vacek)

This flag is no longer being used. It was added by commit ab0fff03055d ("btrfs: add READAHEAD extent buffer flag") and used in commits:

  79fb65a1f6d9 ("Btrfs: don't call readahead hook until we have read the entire eb")
  78e62c02abb9 ("btrfs: Remove extent_io_ops::readpage_io_failed_hook")
  371cdc0700c7 ("btrfs: introduce subpage metadata validation check")

Finally all the code using it was removed by commit f26c92386028 ("btrfs: remove reada infrastructure").

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove unused flag EXTENT_BUFFER_READ_ERR  (Daniel Vacek)

This flag was added by commit 656f30dba7ab ("Btrfs: be aware of btree inode write errors to avoid fs corruption") but it stopped being used after commit 046b562b20a5 ("btrfs: use a separate end_io handler for read_extent_buffer").

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: factor out the main loop of btrfs_buffered_write() into a helper  (Qu Wenruo)

Inside the main loop of btrfs_buffered_write() we are doing a lot of heavy lifting inside a while() loop. This makes it pretty hard to read, so factor out the content into a helper, copy_one_range(), to do the work. This has no functional change, only some minor variable renames, e.g. renaming all "sector" into "block".

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: factor out space reservation code from btrfs_buffered_write()  (Qu Wenruo)

Inside the main loop of btrfs_buffered_write(), we have a complex data and metadata space reservation code, which tries to reserve space for a COW write and, if that fails, falls back to checking whether we can do a NOCOW write. Factor out that part of the code into a dedicated helper, reserve_space(), to make the main loop a little easier to read.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: cleanup the reserved space inside loop of btrfs_buffered_write()  (Qu Wenruo)

Inside the main loop of btrfs_buffered_write(), if something goes wrong, there is an out-of-loop cleanup path to release the reserved space. This behavior saves some code lines, but makes it much harder to read, as we need to check release_bytes to figure out when the cleanup is needed.

Factor out the cleanup part into a helper, release_reserved_space(), to do the cleanup inside the main loop, so that we can move @release_bytes inside the loop. This will make later refactoring of the main loop much easier.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: remove force_page_uptodate variable from btrfs_buffered_write()  (Qu Wenruo)

Commit c87c299776e4 ("btrfs: make buffered write to copy one page a time") changed how the variable @force_page_uptodate was updated. Before that commit the variable was only initialized to false at the beginning of the function, and after hitting a short copy, the next retry on the same folio would force the folio to be read from the disk.

But after the commit, the variable is always initialized to false at the beginning of the loop's scope, causing prepare_one_folio() never to get a true value passed in.

The change in behavior is not a huge deal, it only makes a difference in how we handle short copies:

Old: Allow the buffer to be split
  The first short copy will be rejected; that's the same for both cases. But for the next retry, we require the folio to be read from disk. Then even if we hit a short copy again, since the folio is already uptodate, we do not need to handle a partially uptodate range, and can continue, marking the short-copied range as dirty. This will split the buffered write into the folio into two buffered writes.

New: Do not allow the buffer to be split
  The first short copy will be rejected; that's the same for both cases. For the next retry, we do nothing special, thus if the short copy happens again, we reject it again, until either the short copy is gone or we fail to fault in the buffer. This means the buffered write into the folio will either fail or succeed; no splitting will happen.

To me, either solution is fine, but the new one makes it simpler and requires no special handling, so I prefer that solution. And since @force_page_uptodate is always false when passed into prepare_one_folio(), we can just remove the variable.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-15  btrfs: move block perfect compression out of experimental features  (Qu Wenruo)

Commit 1d2fbb7f1f9e ("btrfs: allow compression even if the range is not page aligned") introduced the block perfect compression for block size < page size cases. Before that commit, if the fs block size is smaller than page size (aka subpage cases), compressed write is only enabled if the dirty range is fully page aligned.

This block perfect compression support was introduced in v6.13, and has been tested for two kernel releases. I believe it's time to move it out of experimental features so that we can get more tests in the real world.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>

2025-05-14  Merge tag 'kbuild-fixes-v6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds)

Pull Kbuild fixes from Masahiro Yamada:

- Add proper pahole version dependency to CONFIG_GENDWARFKSYMS to avoid module loading errors

- Fix UAPI header tests for the OpenRISC architecture

- Add dependency on the libdw package in Debian and RPM packages

- Disable -Wdefault-const-init-unsafe warnings on Clang

- Make "make clean ARCH=um" also clean the arch/x86/ directory

- Revert the use of -fmacro-prefix-map=, which causes issues with debugger usability

* tag 'kbuild-fixes-v6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: fix typos "module.builtin" to "modules.builtin"
  Revert "kbuild, rust: use -fremap-path-prefix to make paths relative"
  Revert "kbuild: make all file references relative to source root"
  kbuild: fix dependency on sorttable
  init: remove unused CONFIG_CC_CAN_LINK_STATIC
  um: let 'make clean' properly clean underlying SUBARCH as well
  kbuild: Disable -Wdefault-const-init-unsafe
  kbuild: rpm-pkg: Add (elfutils-devel or libdw-devel) to BuildRequires
  kbuild: deb-pkg: Add libdw-dev:native to Build-Depends-Arch
  usr/include: openrisc: don't HDRTEST bpf_perf_event.h
  kbuild: Require pahole <v1.28 or >v1.29 with GENDWARFKSYMS on X86

2025-05-14  Merge tag 'tpmdd-next-6.15-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd  (Linus Torvalds)

Pull tpm fixes from Jarkko Sakkinen:
"A few last minute fixes for v6.15"

* tag 'tpmdd-next-6.15-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
  tpm: tis: Double the timeout B to 4s
  char: tpm: tpm-buf: Add sanity check fallback in read helpers
  tpm: Mask TPM RC in tpm2_start_auth_session()

2025-05-15  tpm: tis: Double the timeout B to 4s  (Michal Suchanek)

With some Infineon chips the timeouts in tpm_tis_send_data (both B and C) can reach up to about 2250 ms. Timeout C is retried since commit de9e33df7762 ("tpm, tpm_tis: Workaround failed command reception on Infineon devices") but timeout B still needs to be extended.

The problem is most commonly encountered with context related operations such as load context/save context. These are issued directly by the kernel, and there is no retry logic for them. When a filesystem is set up to use the TPM for unlocking, the boot fails, and restarting the userspace service is ineffective. This is likely because ignoring a load context/save context result puts the real TPM state and the TPM state expected by the kernel out of sync.

Chips known to be affected:

tpm_tis IFX1522:00: 2.0 TPM (device-id 0x1D, rev-id 54)
Description: SLB9672
Firmware Revision: 15.22

tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 22)
Firmware Revision: 7.83

tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1A, rev-id 16)
Firmware Revision: 5.63

Link: https://lore.kernel.org/linux-integrity/Z5pI07m0Muapyu9w@kitsune.suse.cz/
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>

2025-05-15  char: tpm: tpm-buf: Add sanity check fallback in read helpers  (Purva Yeshi)

Fix Smatch-detected issues:

  drivers/char/tpm/tpm-buf.c:208 tpm_buf_read_u8() error: uninitialized symbol 'value'.
  drivers/char/tpm/tpm-buf.c:225 tpm_buf_read_u16() error: uninitialized symbol 'value'.
  drivers/char/tpm/tpm-buf.c:242 tpm_buf_read_u32() error: uninitialized symbol 'value'.

Zero-initialize the return values in tpm_buf_read_u8(), tpm_buf_read_u16(), and tpm_buf_read_u32() to guard against uninitialized data in case of a boundary overflow. This defensive initialization ensures the return values are always defined, preventing undefined behavior if the unexpected happens.

Signed-off-by: Purva Yeshi <purvayeshi550@gmail.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>

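The shape of the fix for one of the helpers (a sketch; the u16 and u32 variants are analogous):

    u8 tpm_buf_read_u8(struct tpm_buf *buf, off_t *offset)
    {
            u8 value = 0;   /* was uninitialized; stays 0 on boundary overflow */

            tpm_buf_read(buf, offset, sizeof(value), &value);

            return value;
    }
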
2025-05-15  tpm: Mask TPM RC in tpm2_start_auth_session()  (Jarkko Sakkinen)

tpm2_start_auth_session() does not mask TPM RC correctly from the callers:

  [   28.766528] tpm tpm0: A TPM error (2307) occurred start auth session

Process TPM RCs inside tpm2_start_auth_session(), and map them to POSIX error codes.

Cc: stable@vger.kernel.org # v6.10+
Fixes: 699e3efd6c64 ("tpm: Add HMAC session start and end functions")
Reported-by: Herbert Xu <herbert@gondor.apana.org.au>
Closes: https://lore.kernel.org/linux-integrity/Z_NgdRHuTKP6JK--@gondor.apana.org.au/
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>

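A minimal sketch of the masking, assuming a coarse mapping of any positive TPM return code to a POSIX code (the actual patch may map specific RCs more precisely):

    rc = tpm_transmit_cmd(chip, &buf, 0, "start auth session");
    if (rc > 0)             /* positive values are raw TPM return codes */
            rc = -EIO;      /* assumption: collapse them rather than leak them */
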
2025-05-14  Merge tag 'for-6.15-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds)

Pull btrfs fixes from David Sterba:

- fix potential endless loop when discarding a block group when disabling discard

- reinstate message when setting a large value of mount option 'commit'

- fix a folio leak when async extent submission fails

* tag 'for-6.15-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: add back warning for mount option commit values exceeding 300
  btrfs: fix folio leak in submit_one_async_extent()
  btrfs: fix discard worker infinite loop after disabling discard

2025-05-14  Merge tag 'trace-v6.15-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace  (Linus Torvalds)

Pull tracing fixes from Steven Rostedt:

- Fix sample code that uses trace_array_printk()

  The sample code for in kernel use of trace_array (that creates an instance for use within the kernel) and shows how to use trace_array_printk() that writes into the created instance, used trace_printk_init_buffers(). But that function is used to initialize normal trace_printk() and produces the NOTICE banner which is not needed for use of trace_array_printk(). The function to initialize that is trace_array_init_printk() that takes the created trace array instance as a parameter. Update the sample code to reflect the proper usage.

- Fix preemption count output for stacktrace event

  The tracing buffer shows the preempt count level when an event executes. Because writing the event itself disables preemption, this needs to be accounted for when recording. The stacktrace event did not account for this so the output of the stacktrace event showed preemption was disabled while the event that triggered the stacktrace shows preemption is enabled and this leads to confusion. Account for preemption being disabled for the stacktrace event. The same happened for stack traces triggered by function tracer.

- Fix persistent ring buffer when trace_pipe is used

  The ring buffer swaps the reader page with the next page to read from the write buffer when trace_pipe is used. If there's only a page of data in the ring buffer, this swap will cause the "commit" pointer (last data written) to be on the reader page. If more data is written to the buffer, it is added to the reader page until it falls off back into the write buffer.

  If the system reboots and the commit pointer is still on the reader page, even if new data was written, the persistent buffer validator will miss finding the commit pointer because it only checks the write buffer and does not check the reader page. This causes the validator to fail the validation and clear the buffer, where the new data is lost. There was a check for this, but it checked the "head pointer", which was incorrect, because the "head pointer" always stays on the write buffer and is the next page to swap out for the reader page. Fix the logic to catch this case and allow the user to still read the data after reboot.

* tag 'trace-v6.15-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  ring-buffer: Fix persistent buffer when commit page is the reader page
  ftrace: Fix preemption accounting for stacktrace filter command
  ftrace: Fix preemption accounting for stacktrace trigger command
  tracing: samples: Initialize trace_array_printk() with the correct function

2025-05-14  ring-buffer: Fix persistent buffer when commit page is the reader page  (Steven Rostedt)

The ring buffer is made up of sub buffers (sometimes called pages as they are by default PAGE_SIZE). It has the following "pages":

"tail page" - this is the page that the next write will write to
"head page" - this is the page that the reader will swap the reader page with
"reader page" - this belongs to the reader, where it will swap the head page from the ring buffer so that the reader does not race with the writer

The writer may end up on the "reader page" if the ring buffer hasn't written more than one page, where the "tail page" and the "head page" are the same.

The persistent ring buffer has meta data that points to where these pages exist so on reboot it can re-create the pointers to the cpu_buffer descriptor. But when the commit page is on the reader page, the logic is incorrect.

The check to see if the commit page is on the reader page checked if the head page was the reader page, which would never happen, as the head page is always in the ring buffer. The correct check would be to test if the commit page is on the reader page. If that's the case, then it can exit out early as the commit page is only on the reader page when there's only one page of data in the buffer. There's no reason to iterate the ring buffer pages to find the "commit page" as it is already found.

To trigger this bug:

  # echo 1 > /sys/kernel/tracing/instances/boot_mapped/events/syscalls/sys_enter_fchownat/enable
  # touch /tmp/x
  # chown sshd /tmp/x
  # reboot

On boot up, the dmesg will have:

  Ring buffer meta [0] is from previous boot!
  Ring buffer meta [1] is from previous boot!
  Ring buffer meta [2] is from previous boot!
  Ring buffer meta [3] is from previous boot!
  Ring buffer meta [4] commit page not found
  Ring buffer meta [5] is from previous boot!
  Ring buffer meta [6] is from previous boot!
  Ring buffer meta [7] is from previous boot!

Where the buffer on CPU 4 had a "commit page not found" error and that buffer is cleared and reset, causing the output to be empty and the data lost.

When it works correctly, it has:

  # cat /sys/kernel/tracing/instances/boot_mapped/trace_pipe
  <...>-1137 [004] ..... 998.205323: sys_enter_fchownat: __syscall_nr=0x104 (260) dfd=0xffffff9c (4294967196) filename=(0xffffc90000a0002c) user=0x3e8 (1000) group=0xffffffff (4294967295) flag=0x0 (0

Cc: stable@vger.kernel.org
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20250513115032.3e0b97f7@gandalf.local.home
Fixes: 5f3b6e839f3ce ("ring-buffer: Validate boot range memory events")
Reported-by: Tasos Sahanidis <tasos@tasossah.com>
Tested-by: Tasos Sahanidis <tasos@tasossah.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

2025-05-14  ftrace: Fix preemption accounting for stacktrace filter command  (pengdonglin)

The preemption count of the stacktrace filter command to trace ksys_read is consistently incorrect:

  $ echo ksys_read:stacktrace > set_ftrace_filter

  <...>-453 [004] ...1. 38.308956: <stack trace>
  => ksys_read
  => do_syscall_64
  => entry_SYSCALL_64_after_hwframe

The root cause is that the trace framework disables preemption when invoking the filter command callback in function_trace_probe_call:

  preempt_disable_notrace();
  probe_ops->func(ip, parent_ip, probe->tr, probe_ops, probe->data);
  preempt_enable_notrace();

Use tracing_gen_ctx_dec() to account for the preempt_disable_notrace(), which will output the correct preemption count:

  $ echo ksys_read:stacktrace > set_ftrace_filter

  <...>-410 [006] ..... 31.420396: <stack trace>
  => ksys_read
  => do_syscall_64
  => entry_SYSCALL_64_after_hwframe

Cc: stable@vger.kernel.org
Fixes: 36590c50b2d07 ("tracing: Merge irqflags + preempt counter.")
Link: https://lore.kernel.org/20250512094246.1167956-2-dolinux.peng@gmail.com
Signed-off-by: pengdonglin <dolinux.peng@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

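The gist of the fix, sketched (the recording helper is a hypothetical stand-in for the callback's actual body):

    /* The ftrace core wrapped this callback in preempt_disable_notrace(),
     * so generate the trace context with one preempt level subtracted. */
    unsigned int trace_ctx = tracing_gen_ctx_dec();   /* was tracing_gen_ctx() */

    record_stacktrace(tr, trace_ctx);                 /* hypothetical */
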
2025-05-14  ftrace: Fix preemption accounting for stacktrace trigger command  (pengdonglin)

When using the stacktrace trigger command to trace syscalls, the preemption count was consistently reported as 1 when the system call event itself had 0 ("."). For example:

  root@ubuntu22-vm:/sys/kernel/tracing/events/syscalls/sys_enter_read
  $ echo stacktrace > trigger
  $ echo 1 > enable

  sshd-416 [002] ..... 232.864910: sys_read(fd: a, buf: 556b1f3221d0, count: 8000)
  sshd-416 [002] ...1. 232.864913: <stack trace>
  => ftrace_syscall_enter
  => syscall_trace_enter
  => do_syscall_64
  => entry_SYSCALL_64_after_hwframe

The root cause is that the trace framework disables preemption in __DO_TRACE before invoking the trigger callback.

Use the tracing_gen_ctx_dec() that will accommodate for the increase of the preemption count in __DO_TRACE when calling the callback. The result is the accurate reporting of:

  sshd-410 [004] ..... 210.117660: sys_read(fd: 4, buf: 559b725ba130, count: 40000)
  sshd-410 [004] ..... 210.117662: <stack trace>
  => ftrace_syscall_enter
  => syscall_trace_enter
  => do_syscall_64
  => entry_SYSCALL_64_after_hwframe

Cc: stable@vger.kernel.org
Fixes: ce33c845b030c ("tracing: Dump stacktrace trigger to the corresponding instance")
Link: https://lore.kernel.org/20250512094246.1167956-1-dolinux.peng@gmail.com
Signed-off-by: pengdonglin <dolinux.peng@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

2025-05-14  Merge tag 'execve-v6.15-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Linus Torvalds)

Pull execve fix from Kees Cook:
"This fixes a corner case for ASLR-disabled static-PIE brk collision with vdso allocations:

- binfmt_elf: Move brk for static PIE even if ASLR disabled"

* tag 'execve-v6.15-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  binfmt_elf: Move brk for static PIE even if ASLR disabled

2025-05-14  Merge tag 'soc-fixes-6.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc  (Linus Torvalds)

Pull SoC fixes from Arnd Bergmann:
"These all address issues in devicetree files:

- The Rockchip rk3588j is now limited the same way as the vendor kernel, to allow room for the industrial-grade temperature ranges.

- Seven more Rockchip fixes address minor issues with specific boards

- Invalid clk controller references in multiple amlogic chips, plus an accidentally disabled audio clock controller on one board

- Two devicetree fixes for i.MX8MP boards, both for incorrect regulator settings

- A power domain change for apple laptop touchbar, fixing suspend/resume problems

- An incorrect DMA controller setting for sophgo cv18xx chips"

* tag 'soc-fixes-6.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  arm64: dts: amazon: Fix simple-bus node name schema warnings
  MAINTAINERS: delete email for Shiraz Hashim
  arm64: dts: imx8mp-var-som: Fix LDO5 shutdown causing SD card timeout
  arm64: dts: imx8mp: use 800MHz NoC OPP for nominal drive mode
  arm64: dts: amlogic: dreambox: fix missing clkc_audio node
  riscv: dts: sophgo: fix DMA data-width configuration for CV18xx
  arm64: dts: rockchip: fix Sige5 RTC interrupt pin
  arm64: dts: rockchip: Assign RT5616 MCLK rate on rk3588-friendlyelec-cm3588
  arm64: dts: rockchip: Align wifi node name with bindings in CB2
  arm64: dts: amlogic: g12: fix reference to unknown/untested PWM clock
  arm64: dts: amlogic: gx: fix reference to unknown/untested PWM clock
  ARM: dts: amlogic: meson8b: fix reference to unknown/untested PWM clock
  ARM: dts: amlogic: meson8: fix reference to unknown/untested PWM clock
  arm64: dts: apple: touchbar: Mark ps_dispdfr_be as always-on
  mailmap: Update email for Asahi Lina
  arm64: dts: rockchip: Fix mmc-pwrseq clock name on rock-pi-4
  arm64: dts: rockchip: Use "regulator-fixed" for btreg on px30-engicam for vcc3v3-btreg
  arm64: dts: rockchip: Add pinmuxing for eMMC on QNAP TS433
  arm64: dts: rockchip: Remove overdrive-mode OPPs from RK3588J SoC dtsi
  arm64: dts: rockchip: Allow Turing RK1 cooling fan to spin down
