path: root/fs
Age | Commit message | Author
9 days | ns: simplify ns_common_init() further | Christian Brauner
Simply derive the ns operations from the namespace type. Acked-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
9 days | xfs: scrub: use kstrdup_const() for metapath scan setups | Dmitry Antipov
Except for the 'xchk_setup_metapath_rtginode()' case, the 'path' argument of 'xchk_setup_metapath_scan()' is a compile-time constant. So it may be reasonable to use 'kstrdup_const()' / 'kfree_const()' to manage the 'path' field of 'struct xchk_metapath' in an attempt to reuse the .rodata instance rather than making a copy. Compile tested only. Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Carlos Maiolino <cem@kernel.org>
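As an illustration of the pattern (a minimal sketch with made-up struct and helper names, not the xfs scrub code):

    #include <linux/slab.h>
    #include <linux/string.h>

    struct scan_ctx {
            const char *path;
    };

    static int scan_setup(struct scan_ctx *ctx, const char *path)
    {
            /* For a string in .rodata, kstrdup_const() returns the
             * pointer itself instead of allocating a copy. */
            ctx->path = kstrdup_const(path, GFP_KERNEL);
            return ctx->path ? 0 : -ENOMEM;
    }

    static void scan_teardown(struct scan_ctx *ctx)
    {
            /* kfree_const() only calls kfree() for non-.rodata pointers. */
            kfree_const(ctx->path);
            ctx->path = NULL;
    }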
9 days | xfs: use bt_nr_sectors in xfs_dax_translate_range | Christoph Hellwig
Only ranges inside the file system can be translated, and the file system can be smaller than the containing device. Fixes: f4ed93037966 ("xfs: don't shut down the filesystem for media failures beyond end of log") Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Carlos Maiolino <cem@kernel.org>
9 days | xfs: track the number of blocks in each buftarg | Christoph Hellwig
Add a bt_nr_sectors field to track the number of sectors in each buftarg, and replace the check that hard-codes sb_dblocks in xfs_buf_map_verify with this new value so that it is correct for non-ddev buftargs. The RT buftarg only has a superblock in the first block, so it is unlikely to trigger this, nor are we likely to ever have enough blocks in the in-memory buftargs, but we might as well get the check right. Fixes: 10616b806d1d ("xfs: fix _xfs_buf_find oops on blocks beyond the filesystem end") Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Carlos Maiolino <cem@kernel.org>
9 days | btrfs: rework error handling of run_delalloc_nocow() | Qu Wenruo
Currently the error handling of run_delalloc_nocow() modifies @cur_offset to handle different parts of the delalloc range. However the error handling can always be split into 3 parts:

1) The range with ordered extents allocated (OE cleanup)
   We have to clean up the ordered extents and unlock the folios.

2) The range that has been cleaned up (skip)
   For now it's only for the range of fallback_to_cow(). We should not touch it at all.

3) The range that is not yet touched (untouched)
   We have to unlock the folios and clear any reserved space.

This 3-range split follows the same principle as cow_file_range(), however the NOCOW/COW handling makes the above 3-range split much more complex:

a) Failure immediately after a successful OE allocation
   Thus neither @cow_start nor @cow_end is set.

       start          cur_offset           end
       |  OE cleanup  |      untouched     |

b) Failure after hitting a COW range but before calling fallback_to_cow()

       start     cow_start   cur_offset    end
       |  OE cleanup  |      untouched     |

c) Failure to call fallback_to_cow()

       start     cow_start     cow_end     end
       |  OE cleanup  |  skip  |  untouched  |

Instead of modifying @cur_offset, do proper range calculation for the OE-cleanup and untouched ranges using the above 3 cases with proper range charts. This avoids updating @cur_offset, as it will be an extra output for debug purposes later, and makes the behavior easier to explain.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: add mount option for ref_tracker | Leo Martins
The ref_tracker infrastructure aids debugging but is not enabled by default as it has a performance impact. Add a mount option 'ref_tracker' so it can be selectively enabled on a filesystem. Currently it tracks references of 'delayed inodes'. Signed-off-by: Leo Martins <loemra.dev@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: print leaked references in kill_all_delayed_nodes() | Leo Martins
We are seeing soft lockups in kill_all_delayed_nodes due to a delayed_node with a lingering reference count of 1. Printing at this point will reveal the guilty stack trace. If the delayed_node has no references there should be no output. Signed-off-by: Leo Martins <loemra.dev@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: implement ref_tracker for delayed_nodes | Leo Martins
Add ref_tracker infrastructure for struct btrfs_delayed_node. It is a response to the largest btrfs related crash in our fleet. We're seeing soft lockups in btrfs_kill_all_delayed_nodes() that seem to be a result of delayed_nodes not being released properly. A ref_tracker object is allocated on reference count increases and freed on reference count decreases. The ref_tracker object stores a stack trace of where it is allocated. The ref_tracker_dir object is embedded in btrfs_delayed_node and keeps track of all current and some old/freed ref_tracker objects. When a leak is detected we can print the stack traces for all ref_trackers that have not yet been freed.

Here is a common example of taking a reference to a delayed_node and freeing it with ref_tracker:

    struct btrfs_ref_tracker tracker;
    struct btrfs_delayed_node *node;

    node = btrfs_get_delayed_node(inode, &tracker);
    /* use delayed_node... */
    btrfs_release_delayed_node(node, &tracker);

There are two special cases where the delayed_node reference is "long lived", meaning that the thread that takes the reference and the thread that releases the reference are different. The 'inode_cache_tracker' tracks the delayed_node stored in btrfs_inode. The 'node_list_tracker' tracks the delayed_node stored in the btrfs_delayed_root node_list/prepare_list. These trackers are embedded in the btrfs_delayed_node. btrfs_ref_tracker and btrfs_ref_tracker_dir are wrappers that either compile to the corresponding ref_tracker structs or empty structs depending on CONFIG_BTRFS_DEBUG. There are also btrfs wrappers for the ref_tracker API. Signed-off-by: Leo Martins <loemra.dev@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: convert several int parameters to bool | David Sterba
We're almost done cleaning misused int/bool parameters. Convert a bunch of them, found by manual grepping. Note that btrfs_sync_fs() needs an int as it's mandated by the struct super_operations prototype. Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: move ref-verify under CONFIG_BTRFS_DEBUG | Leo Martins
Remove the CONFIG_BTRFS_FS_REF_VERIFY Kconfig option and make it part of CONFIG_BTRFS_DEBUG. This should not be impactful to the performance of debug builds. The struct btrfs_ref takes an additional u64, btrfs_fs_info takes an additional spinlock_t and rb_root. All of the ref_verify logic is still protected by a mount option. Signed-off-by: Leo Martins <loemra.dev@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: use PTR_ERR_OR_ZERO() to simplify code in btrfs_control_ioctl() | Xichao Zhao
Use the standard error pointer macro to simplify the code. Reviewed-by: Daniel Vacek <neelx@suse.com> Signed-off-by: Xichao Zhao <zhao.xichao@vivo.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
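For illustration, PTR_ERR_OR_ZERO() from <linux/err.h> collapses the usual IS_ERR()/PTR_ERR() dance; a hedged sketch of the before/after shape (make_device() is a hypothetical helper, not the btrfs code):

    #include <linux/err.h>

    /* before */
    static long register_control_device(void)
    {
            struct device *dev = make_device(); /* hypothetical */

            if (IS_ERR(dev))
                    return PTR_ERR(dev);
            return 0;
    }

    /* after */
    static long register_control_device(void)
    {
            return PTR_ERR_OR_ZERO(make_device());
    }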
9 days | btrfs: simplify support block size check | Qu Wenruo
Currently we manually check the block size against 3 different values:

- 4K
- PAGE_SIZE
- MIN_BLOCKSIZE

Those 3 values can match or differ from each other. This makes it pretty complex to output the supported block sizes. Considering we're going to add block size > page size support soon, this can make the supported block size sysfs attribute much harder to implement. To make it easier, factor out a helper, btrfs_supported_blocksize(), to do a simple check for the block size. Then utilize it in the two locations:

- btrfs_validate_super()
  This is very straightforward.
- supported_sectorsizes_show()
  Iterate through all valid block sizes, and only output supported ones. This is to make future full range block size support much easier.

Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
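A sketch of what such a helper could look like (shape assumed from the message above; the real helper and constant names may differ):

    static bool btrfs_supported_blocksize(u32 blocksize)
    {
            /* The three values named above; they may alias each other. */
            return blocksize == SZ_4K ||
                   blocksize == PAGE_SIZE ||
                   blocksize == BTRFS_MIN_BLOCKSIZE;
    }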
9 days | btrfs: use blocksize to check if compression is making things larger | Qu Wenruo
[BEHAVIOR DIFFERENCE BETWEEN COMPRESSION ALGOS]
Currently the LZO compression algorithm will check if we're making the compressed data larger after compressing more than 2 blocks. But zlib and zstd do the same checks after compressing more than 8192 bytes. This is not a big deal, but since we're already supporting larger block sizes (e.g. 64K block size if page size is also 64K), this check is not suitable for all block sizes. For example, if our page and block size are both 16KiB, and after the first block is compressed using zlib the resulting compressed data is slightly larger than 16KiB, we will immediately abort the compression. This makes the zstd and zlib compression algorithms behave slightly differently from LZO, which only aborts after compressing two blocks.

[ENHANCEMENT]
To unify the behavior, only abort the compression after compressing at least two blocks.

Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
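A minimal sketch of the unified cut-off condition (names made up, not the actual btrfs code):

    /*
     * Give up on compression only once at least two blocks worth of input
     * have been consumed and the output is still not smaller.
     */
    static bool compression_makes_larger(u64 total_in, u64 total_out,
                                         u32 blocksize)
    {
            return total_in > 2 * blocksize && total_out >= total_in;
    }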
9 days | btrfs: pass btrfs_inode pointer directly into btrfs_compress_folios() | Qu Wenruo
Of the 3 supported compression algorithms, two (zstd and zlib) are already grabbing the btrfs inode for error messages. It's more common to pass btrfs_inode and grab the address space from it. Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: zoned: refine extent allocator hint selection | Naohiro Aota
The hint block group selection in the extent allocator is wrong in the first place, as it can select the dedicated data relocation block group for the normal data allocation. Since we separated the normal data space_info and the data relocation space_info, we can easily identify whether a block group is for data relocation or not. Do not choose it for the normal data allocation. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: try to search for data csums in commit root | Boris Burkov
If you run a workload with:

- a cgroup that does tons of parallel data reading, with a working set much larger than its memory limit
- a second cgroup that writes relatively fewer files, with overwrites, with no memory limit

(see full code listing at the bottom for a reproducer)

Then what quickly occurs is:

- we have a large number of threads trying to read the csum tree
- we have a decent number of threads deleting csums running delayed refs
- we have a large number of threads in direct reclaim and thus high memory pressure

The result of this is that we writeback the csum tree repeatedly mid transaction, to get back the extent_buffer folios for reclaim. As a result, we repeatedly COW the csum tree for the delayed refs that are deleting csums. This means repeatedly write locking the higher levels of the tree.

As a result of this, we achieve an unpleasant priority inversion. We have:

- a high degree of contention on the csum root node (and other upper nodes) eb rwsem
- a memory starved cgroup doing tons of reclaim on CPU
- many reader threads in the memory starved cgroup "holding" the sem as readers, but not scheduling promptly. i.e., task __state == 0, but not running on a cpu
- btrfs_commit_transaction stuck trying to acquire the sem as a writer (running delayed_refs, deleting csums for unreferenced data extents)

This results in arbitrarily long transactions. This then results in seriously degraded performance for any cgroup using the filesystem (the victim cgroup in the script). It isn't an academic problem, as we see this exact problem in production at Meta with one cgroup over its memory limit ruining btrfs performance for the whole system, stalling critical system services that depend on btrfs syncs.

The underlying scheduling "problem" with global rwsems is sort of thorny and apparently well known and was discussed at LPC 2024, for example. As a result, our main lever in the short term is just trying to reduce contention on our various rwsems with an eye to reducing the frequency of write locking, to avoid disabling the read lock fast acquisition path.

Luckily, it seems likely that many reads are for old extents written many transactions ago, and that for those we *can* in fact search the commit root. The commit_root_sem only gets taken write once, near the end of transaction commit, no matter how much memory pressure there is, so we have much less contention between readers and writers.

This change detects when we are trying to read an old extent (according to extent map generation) and then wires that through bio_ctrl to the btrfs_bio, which unfortunately isn't allocated yet when we have this information. When we go to lookup the csums in lookup_bio_sums we can check this condition on the btrfs_bio and do the commit root lookup accordingly.

Note that a single bio_ctrl might collect a few extent_maps into a single bio, so it is important to track a maximum generation across all the extent_maps used for each bio to make an accurate decision on whether it is valid to look in the commit root. If any extent_map is updated in the current generation, we can't use the commit root.
To test and reproduce this issue, I used the following script and accompanying C program (to avoid bottlenecks in constantly forking thousands of dd processes):

====== big-read.c ======

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <errno.h>

#define BUF_SZ (128 * (1 << 10UL))

int read_once(int fd, size_t sz)
{
	char buf[BUF_SZ];
	size_t rd = 0;
	int ret = 0;

	while (rd < sz) {
		ret = read(fd, buf, BUF_SZ);
		if (ret < 0) {
			if (errno == EINTR)
				continue;
			fprintf(stderr, "read failed: %d\n", errno);
			return -errno;
		} else if (ret == 0) {
			break;
		} else {
			rd += ret;
		}
	}
	return rd;
}

int read_loop(char *fname)
{
	int fd;
	struct stat st;
	size_t sz = 0;
	int ret;

	while (1) {
		fd = open(fname, O_RDONLY);
		if (fd == -1) {
			perror("open");
			return 1;
		}
		if (!sz) {
			if (!fstat(fd, &st)) {
				sz = st.st_size;
			} else {
				perror("stat");
				return 1;
			}
		}
		ret = read_once(fd, sz);
		close(fd);
	}
}

int main(int argc, char *argv[])
{
	int fd;
	struct stat st;
	off_t sz;
	char *buf;
	int ret;

	if (argc != 2) {
		fprintf(stderr, "Usage: %s <filename>\n", argv[0]);
		return 1;
	}
	return read_loop(argv[1]);
}

====== repro.sh ======

#!/usr/bin/env bash

SCRIPT=$(readlink -f "$0")
DIR=$(dirname "$SCRIPT")

dev=$1
mnt=$2
shift
shift

CG_ROOT=/sys/fs/cgroup
BAD_CG=$CG_ROOT/bad-nbr
GOOD_CG=$CG_ROOT/good-nbr
NR_BIGGOS=1
NR_LITTLE=10
NR_VICTIMS=32
NR_VILLAINS=512

START_SEC=$(date +%s)

_elapsed() {
	echo "elapsed: $(($(date +%s) - $START_SEC))"
}

_stats() {
	local sysfs=/sys/fs/btrfs/$(findmnt -no UUID $dev)
	echo "================"
	date
	_elapsed
	cat $sysfs/commit_stats
	cat $BAD_CG/memory.pressure
}

_setup_cgs() {
	echo "+memory +cpuset" > $CG_ROOT/cgroup.subtree_control
	mkdir -p $GOOD_CG
	mkdir -p $BAD_CG
	echo max > $BAD_CG/memory.max
	# memory.high much less than the working set will cause heavy reclaim
	echo $((1 << 30)) > $BAD_CG/memory.high
	# victims get a subset of villain CPUs
	echo 0 > $GOOD_CG/cpuset.cpus
	echo 0,1,2,3 > $BAD_CG/cpuset.cpus
}

_kill_cg() {
	local cg=$1
	local attempts=0
	echo "kill cgroup $cg"
	[ -f $cg/cgroup.procs ] || return
	while true; do
		attempts=$((attempts + 1))
		echo 1 > $cg/cgroup.kill
		sleep 1
		procs=$(wc -l $cg/cgroup.procs | cut -d' ' -f1)
		[ $procs -eq 0 ] && break
	done
	rmdir $cg
	echo "killed cgroup $cg in $attempts attempts"
}

_biggo_vol() {
	echo $mnt/biggo_vol.$1
}

_biggo_file() {
	echo $(_biggo_vol $1)/biggo
}

_subvoled_biggos() {
	total_sz=$((10 << 30))
	per_sz=$((total_sz / $NR_VILLAINS))
	dd_count=$((per_sz >> 20))
	echo "create $NR_VILLAINS subvols with a file of size $per_sz bytes for a total of $total_sz bytes."
	for i in $(seq $NR_VILLAINS)
	do
		btrfs subvol create $(_biggo_vol $i) &>/dev/null
		dd if=/dev/zero of=$(_biggo_file $i) bs=1M count=$dd_count &>/dev/null
	done
	echo "done creating subvols."
}

_setup() {
	[ -f .done ] && rm .done
	findmnt -n $dev && exit 1
	if [ -f .re-mkfs ]; then
		mkfs.btrfs -f -m single -d single $dev >/dev/null || exit 2
	else
		echo "touch .re-mkfs to populate the test fs"
	fi
	mount -o noatime $dev $mnt || exit 3
	[ -f .re-mkfs ] && _subvoled_biggos
	_setup_cgs
}

_my_cleanup() {
	echo "CLEANUP!"
	_kill_cg $BAD_CG
	_kill_cg $GOOD_CG
	sleep 1
	umount $mnt
}

_bad_exit() {
	_err "Unexpected Exit! $?"
	_stats
	exit $?
}

trap _my_cleanup EXIT
trap _bad_exit INT TERM

_setup

# Use a lot of page cache reading the big file
_villain() {
	local i=$1
	echo $BASHPID > $BAD_CG/cgroup.procs
	$DIR/big-read $(_biggo_file $i)
}

# Hit del_csum a lot by overwriting lots of small new files
_victim() {
	echo $BASHPID > $GOOD_CG/cgroup.procs
	i=0;
	while (true)
	do
		local tmp=$mnt/tmp.$i
		dd if=/dev/zero of=$tmp bs=4k count=2 >/dev/null 2>&1
		i=$((i+1))
		[ $i -eq $NR_LITTLE ] && i=0
	done
}

_one_sync() {
	echo "sync..."
	before=$(date +%s)
	sync
	after=$(date +%s)
	echo "sync done in $((after - before))s"
	_stats
}

# sync in a loop
_sync() {
	echo "start sync loop"
	syncs=0
	echo $BASHPID > $GOOD_CG/cgroup.procs
	while true
	do
		[ -f .done ] && break
		_one_sync
		syncs=$((syncs + 1))
		[ -f .done ] && break
		sleep 10
	done
	if [ $syncs -eq 0 ]; then
		echo "do at least one sync!"
		_one_sync
	fi
	echo "sync loop done."
}

_sleep() {
	local time=${1-60}
	local now=$(date +%s)
	local end=$((now + time))
	while [ $now -lt $end ]; do
		echo "SLEEP: $((end - now))s left. Sleep 10."
		sleep 10
		now=$(date +%s)
	done
}

echo "start $NR_VILLAINS villains"
for i in $(seq $NR_VILLAINS)
do
	_villain $i &
	disown # get rid of annoying log on kill (done via cgroup anyway)
done

echo "start $NR_VICTIMS victims"
for i in $(seq $NR_VICTIMS)
do
	_victim &
	disown
done

_sync &
SYNC_PID=$!

_sleep $1
_elapsed
touch .done
wait $SYNC_PID

echo "OK"
exit 0

Without this patch, that reproducer:

- Ran for 6+ minutes instead of 60s
- Hung hundreds of threads in D state on the csum reader lock
- Got a commit stuck for 3 minutes

sync done in 388s
================
Wed Jul 9 09:52:31 PM UTC 2025
elapsed: 420
commits 2
cur_commit_ms 0
last_commit_ms 159446
max_commit_ms 159446
total_commit_ms 160058
some avg10=99.03 avg60=98.97 avg300=75.43 total=418033386
full avg10=82.79 avg60=80.52 avg300=59.45 total=324995274

419 hits state R, D comms big-read
	btrfs_tree_read_lock_nested
	btrfs_read_lock_root_node
	btrfs_search_slot
	btrfs_lookup_csum
	btrfs_lookup_bio_sums
	btrfs_submit_bbio

1 hits state D comms btrfs-transacti
	btrfs_tree_lock_nested
	btrfs_lock_root_node
	btrfs_search_slot
	btrfs_del_csums
	__btrfs_run_delayed_refs
	btrfs_run_delayed_refs

With the patch, the reproducer exits naturally, in 65s, completing a pretty decent 4 commits, despite heavy memory pressure. Occasionally you can still trigger a rather long commit (couple seconds) but never one that is minutes long.

sync done in 3s
================
elapsed: 65
commits 4
cur_commit_ms 0
last_commit_ms 485
max_commit_ms 689
total_commit_ms 2453
some avg10=98.28 avg60=64.54 avg300=19.39 total=64849893
full avg10=74.43 avg60=48.50 avg300=14.53 total=48665168

Some random rwalker samples showed the most common stack in reclaim, rather than the csum tree:

145 hits state R comms bash, sleep, dd, shuf
	shrink_folio_list
	shrink_lruvec
	shrink_node
	do_try_to_free_pages
	try_to_free_mem_cgroup_pages
	reclaim_high

Link: https://lpc.events/event/18/contributions/1883/
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Boris Burkov <boris@bur.io>
Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: remove duplicate inclusion of linux/types.h | Jiapeng Chong
In messages.h there's linux/types.h included more than once. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=22939 Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: zoned: return error from btrfs_zone_finish_endio() | Johannes Thumshirn
Now that btrfs_zone_finish_endio_workfn() is directly calling do_zone_finish() the only caller of btrfs_zone_finish_endio() is btrfs_finish_one_ordered(). btrfs_finish_one_ordered() already has error handling in-place so btrfs_zone_finish_endio() can return an error if the block group lookup fails. Also as btrfs_zone_finish_endio() already checks for zoned filesystems and returns early, there's no need to do this in the caller. Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: zoned: directly call do_zone_finish() from btrfs_zone_finish_endio_workfn() | Johannes Thumshirn
When btrfs_zone_finish_endio_workfn() is calling btrfs_zone_finish_endio() it already has a pointer to the block group. Furthermore btrfs_zone_finish_endio() does additional checks whether the block group can be finished or not. But in the context of btrfs_zone_finish_endio_workfn() only the actual call to do_zone_finish() is of interest, as the skipping condition when there is still room to allocate from the block group cannot be checked. Directly call do_zone_finish() on the block group. Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: collapse unaccount_log_buffer() into clean_log_buffer() | Filipe Manana
There's only one caller of unaccount_log_buffer() and both this function and the caller are short, so move its code into the caller. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: use local key variable to pass arguments in replay_one_extent() | Filipe Manana
Instead of extracting again the disk_bytenr and disk_num_bytes values from the file extent item to pass to btrfs_qgroup_trace_extent(), use the key local variable 'ins' which already has those values, reducing the size of the source code. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: process inline extent earlier in replay_one_extent() | Filipe Manana
Instead of checking for regular and prealloc extents first, processing them in a block, and then checking for an inline extent in an else statement, check for an inline extent first, process it and jump to the 'update_inode' label. This allows us to avoid having the code for processing regular and prealloc extents inside a block, reducing the high indentation level by one and making the code easier to read, avoiding some line splitting too. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: exit early when replaying hole file extent item from a log tree | Filipe Manana
At replay_one_extent(), we can jump to the code that updates the file extent range and updates the inode when processing a file extent item that represents a hole and we don't have the NO_HOLES feature enabled. This helps reduce the high indentation level by one in replay_one_extent() and avoid splitting some lines to make the code easier to read. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: abort transaction where errors happen during log tree replay | Filipe Manana
In the replay_one_buffer() log tree walk callback we return errors to the log tree walk caller and then the caller aborts the transaction, if we have one, or turns the fs into error state if we don't have one. While this reduces code it makes it harder to figure out where exactly an error came from. So add the transaction aborts after every failure inside the replay_one_buffer() callback and the functions it calls, making it as fine grained as possible, so that it helps figuring out why failures happen. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
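The pattern being applied is the usual btrfs one; a hedged sketch (the replay step and its helper are hypothetical):

    static int replay_step(struct btrfs_trans_handle *trans)
    {
            int ret;

            ret = do_replay_work(trans); /* hypothetical helper */
            if (ret) {
                    /* Abort right where the error happens so the log
                     * points at the failing step, not at the caller. */
                    btrfs_abort_transaction(trans, ret);
                    return ret;
            }
            return 0;
    }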
9 days | btrfs: return real error from read_alloc_one_name() in drop_one_dir_item() | Filipe Manana
If read_alloc_one_name() fails we explicitly return -ENOMEM and currently that is fine since it's the only error read_alloc_one_name() can return for now. However this is fragile and not future proof, so return instead what read_alloc_one_name() returned. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: use local variable for the transaction handle in replay_one_buffer() | Filipe Manana
Instead of repeatedly dereferencing the walk_control structure to extract the transaction handle whenever it is needed, do it once by storing it in a local variable and then use the variable everywhere. This reduces code verbosity and eliminates the need for some split lines. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: abort transaction in the process_one_buffer() log tree walk callback | Filipe Manana
In the process_one_buffer() log tree walk callback we return errors to the log tree walk caller and then the caller aborts the transaction, if we have one, or turns the fs into error state if we don't have one. While this reduces code it makes it harder to figure out where exactly an error came from. So add the transaction aborts after every failure inside the process_one_buffer() callback, so that it helps figuring out why failures happen. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: abort transaction on specific error places when walking log tree | Filipe Manana
We do several things while walking a log tree (for replaying and for freeing a log tree) like reading extent buffers and cleaning them up, but we don't immediately abort the transaction, or turn the fs into an error state, when one of these things fails. Instead we do the transaction abort or turn the fs into an error state in the caller of the entry point function that walks a log tree - walk_log_tree() - which means we don't get to know exactly where an error came from. Improve on this by doing a transaction abort / turning the fs into an error state after each such failure so that when it happens we have a better understanding of where the failure comes from. This deliberately leaves the transaction abort / turn fs into error state in the callers of walk_log_tree() so as to ensure we don't get into an inconsistent state in case we forget to do it deeper in the call chain. It also deliberately does not do it after errors from the calls to the callback defined in struct walk_control::process_func(), as we will do that later in another patch. Reviewed-by: Boris Burkov <boris@bur.io> Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
9 days | btrfs: replace double boolean parameters of cow_file_range() | Qu Wenruo
The function cow_file_range() has two boolean parameters. Replace them with a single @flags parameter, with two flags:

- COW_FILE_RANGE_NO_INLINE
- COW_FILE_RANGE_KEEP_LOCKED

And since we're here, also update the comments of cow_file_range() to replace the old "page" usage with "folio". Reviewed-by: Boris Burkov <boris@bur.io> Signed-off-by: Qu Wenruo <wqu@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
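A sketch of the resulting calling convention (the exact prototype is assumed; the flag names are the ones listed above):

    enum {
            COW_FILE_RANGE_NO_INLINE   = (1U << 0),
            COW_FILE_RANGE_KEEP_LOCKED = (1U << 1),
    };

    /* before: cow_file_range(inode, locked_folio, start, end, true, false) */
    ret = cow_file_range(inode, locked_folio, start, end,
                         COW_FILE_RANGE_NO_INLINE);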
9 days | smb: client: handle unlink(2) of files open by different clients | Paulo Alcantara
In order to identify whether a certain file is open by a different client, start the unlink process by sending a compound request of CREATE(DELETE_ON_CLOSE) + CLOSE with only the FILE_SHARE_DELETE bit set in smb2_create_req::ShareAccess. If the file is currently open, then the server will fail the request with STATUS_SHARING_VIOLATION, in which case we'll map it to -EBUSY, so __cifs_unlink() will fall back to silly-renaming the file. This fixes the following case where open(O_CREAT) fails with -ENOENT (STATUS_DELETE_PENDING) due to the file still being open by a different client.

* Before patch

    $ mount.cifs //srv/share /mnt/1 -o ...,nosharesock
    $ mount.cifs //srv/share /mnt/2 -o ...,nosharesock
    $ cd /mnt/1
    $ touch foo
    $ exec 3<>foo
    $ cd /mnt/2
    $ rm foo
    $ touch foo
    touch: cannot touch 'foo': No such file or directory
    $ exec 3>&-

* After patch

    $ mount.cifs //srv/share /mnt/1 -o ...,nosharesock
    $ mount.cifs //srv/share /mnt/2 -o ...,nosharesock
    $ cd /mnt/1
    $ touch foo
    $ exec 3<>foo
    $ cd /mnt/2
    $ rm foo
    $ touch foo
    $ exec 3>&-

Signed-off-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
Reviewed-by: David Howells <dhowells@redhat.com>
Cc: Frank Sorenson <sorenson@redhat.com>
Cc: linux-cifs@vger.kernel.org
Signed-off-by: Steve French <stfrench@microsoft.com>
9 days | smb: server: use disable_work_sync in transport_rdma.c | Stefan Metzmacher
This makes it safer during the disconnect and avoids requeueing. It's ok to call disable_work[_sync]() more than once. Cc: Namjae Jeon <linkinjeon@kernel.org> Cc: Steve French <smfrench@gmail.com> Cc: Tom Talpey <tom@talpey.com> Cc: linux-cifs@vger.kernel.org Cc: samba-technical@lists.samba.org Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") Signed-off-by: Stefan Metzmacher <metze@samba.org> Acked-by: Namjae Jeon <linkinjeon@kernel.org> Signed-off-by: Steve French <stfrench@microsoft.com>
9 days | smb: server: don't use delayed_work for post_recv_credits_work | Stefan Metzmacher
If we are using a hardcoded delay of 0 there's no point in using delayed_work; it only adds confusion. The client also uses a normal work_struct, and now it is easier to move it to the common smbdirect_socket. Cc: Namjae Jeon <linkinjeon@kernel.org> Cc: Steve French <smfrench@gmail.com> Cc: Tom Talpey <tom@talpey.com> Cc: linux-cifs@vger.kernel.org Cc: samba-technical@lists.samba.org Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") Signed-off-by: Stefan Metzmacher <metze@samba.org> Acked-by: Namjae Jeon <linkinjeon@kernel.org> Signed-off-by: Steve French <stfrench@microsoft.com>
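The conversion is the standard delayed_work-to-work_struct one; a sketch with assumed field and handler names:

    /* before: a delayed work that was only ever queued with a delay of 0 */
    INIT_DELAYED_WORK(&sc->post_recv_credits_work, post_recv_credits_handler);
    queue_delayed_work(wq, &sc->post_recv_credits_work, 0);

    /* after: a plain work item expresses the same thing without the timer */
    INIT_WORK(&sc->post_recv_credits_work, post_recv_credits_handler);
    queue_work(wq, &sc->post_recv_credits_work);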
10 days | Merge tag 'for-6.17-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux | Linus Torvalds
Pull a few more btrfs fixes from David Sterba:

- in tree-checker, fix wrong size of check for inode ref item
- in ref-verify, handle combination of mount options that allow partially damaged extent tree (reported by syzbot)
- additional validation of compression mount option to catch invalid string as level

* tag 'for-6.17-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: reject invalid compression level
  btrfs: ref-verify: handle damaged extent root tree
  btrfs: tree-checker: fix the incorrect inode ref size check
12 days | Merge tag '6.17-rc6-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6 | Linus Torvalds
Pull smb client fixes from Steve French:

- Two unlink fixes: one for rename and one for deferred close
- Four smbdirect/RDMA fixes: fix buffer leak in negotiate, two fixes for races in smbd_destroy, fix offset and length checks in recv_done

* tag '6.17-rc6-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  smb: client: fix smbdirect_recv_io leak in smbd_negotiate() error path
  smb: client: fix file open check in __cifs_unlink()
  smb: client: let smbd_destroy() call disable_work_sync(&info->post_send_credits_work)
  smb: client: use disable[_delayed]_work_sync in smbdirect.c
  smb: client: fix filename matching of deferred files
  smb: client: let recv_done verify data_offset, data_length and remaining_data_length
12 days | xfs: constify xfs_errortag_random_default | Christoph Hellwig
This table is never modified, so mark it const. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Carlos Maiolino <cem@kernel.org>
12 days | ns: use inode initializer for initial namespaces | Christian Brauner
Just use the common helper we have. Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | ns: rename to __ns_ref | Christian Brauner
Make it easier to grep and rename to ns_count. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | nsfs: port to ns_ref_*() helpers | Christian Brauner
Stop accessing ns.count directly. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | mnt: port to ns_ref_*() helpers | Christian Brauner
Stop accessing ns.count directly. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | ns: add ns_common_free() | Christian Brauner
And drop ns_free_inum(). Anything common that can be freed centrally should be freed in the new common helper. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | fs: WQ_PERCPU added to alloc_workqueue users | Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (a per-CPU wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. alloc_workqueue() treats all queues as per-CPU by default, while unbound workqueues must opt-in via WQ_UNBOUND. This default is suboptimal: most workloads benefit from unbound queues, allowing the scheduler to place worker threads where they're needed and reducing noise when CPUs are isolated. This patch adds the new WQ_PERCPU flag to all the fs subsystem users to explicitly request the use of the per-CPU behavior. Both flags coexist for one release cycle to allow callers to transition their calls. Once migration is complete, WQ_UNBOUND can be removed and unbound will become the implicit default. With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND), any alloc_workqueue() caller that doesn't explicitly specify WQ_UNBOUND must now use WQ_PERCPU. All existing users have been updated accordingly. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://lore.kernel.org/20250916082906.77439-4-marco.crivellari@suse.com Signed-off-by: Christian Brauner <brauner@kernel.org>
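The change at each call site is mechanical; a sketch (the queue name is made up):

    /* before: per-CPU behavior was the unstated default */
    wq = alloc_workqueue("fs_example", WQ_MEM_RECLAIM, 0);

    /* after: per-CPU behavior is requested explicitly */
    wq = alloc_workqueue("fs_example", WQ_MEM_RECLAIM | WQ_PERCPU, 0);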
12 days | fs: replace use of system_wq with system_percpu_wq | Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (a per-CPU wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. system_wq is a per-CPU workqueue, yet nothing in its name tells about that CPU affinity constraint, which is very often not required by users. Make it clear by adding a system_percpu_wq to the fs subsystem. The old wq will be kept for a few release cycles. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://lore.kernel.org/20250916082906.77439-3-marco.crivellari@suse.com Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | fs: replace use of system_unbound_wq with system_dfl_wq | Marco Crivellari
Currently if a user enqueues a work item using schedule_delayed_work() the used wq is "system_wq" (a per-CPU wq) while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. system_unbound_wq should be the default workqueue so as not to enforce locality constraints for random work whenever it's not required. Add system_dfl_wq to encourage its use when unbound work should be used. The old system_unbound_wq will be kept for a few release cycles. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com> Link: https://lore.kernel.org/20250916082906.77439-2-marco.crivellari@suse.com Signed-off-by: Christian Brauner <brauner@kernel.org>
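Again a mechanical substitution; a sketch assuming an already initialized work item:

    /* before: unbound queue under its old name */
    queue_work(system_unbound_wq, &my_work);

    /* after: same semantics, clearer name */
    queue_work(system_dfl_wq, &my_work);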
12 days | nscommon: simplify initialization | Christian Brauner
There's a lot of information that namespace implementers don't need to know about at all. Encapsulate this all in the initialization helper. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | mnt: simplify ns_common_init() handling | Christian Brauner
Assign the reserved MNT_NS_ANON_INO sentinel to anonymous mount namespaces and cleanup the initial mount ns allocation. This is just a preparatory patch and the ns->inum check in ns_common_init() will be dropped in the next patch. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | mnt: expose pointer to init_mnt_ns | Christian Brauner
There are various scenarios where we need to know whether we are in the initial set of namespaces or not, e.g., to shortcut permission checking. All namespaces expose that information. Let's do that too. Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | nsfs: add missing id retrieval support | Christian Brauner
The mount namespace has supported id retrieval for a while already. Add support for the other types as well. Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | nsfs: support exhaustive file handles | Christian Brauner
Pidfd file handles are exhaustive, meaning they don't require a handle on another pidfd to pass to open_by_handle_at() so it can derive the filesystem to decode in. Instead it can be derived from the file handle itself. The same is possible for namespace file handles. Reviewed-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
12 days | nsfs: support file handles | Christian Brauner
A while ago we added support for file handles to pidfs so pidfds can be encoded and decoded as file handles. Userspace has adopted this quickly and it's proven very useful. Implement file handles for namespaces as well. A process is not always able to open /proc/self/ns/. That requires procfs to be mounted and for /proc/self/ or /proc/self/ns/ to not be overmounted. However, userspace can always derive a namespace fd from a pidfd. And that always works for a task's own namespace. There's no need to introduce unnecessary behavioral differences between /proc/self/ns/ fds, pidfd-derived namespace fds, and file-handle-derived namespace fds. So namespace file handles are always decodable if the caller is located in the namespace the file handle refers to. This also allows a task to e.g., store a set of file handles to its namespaces in a file on-disk so it can verify when it gets rexeced that they're still valid and so on. This is akin to the pidfd use-case. Or just plainly for namespace comparison reasons where a file handle to the task's own namespace can be easily compared against others. Reviewed-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
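For illustration, a hedged userspace sketch of encoding and decoding such a handle (whether an nsfs fd is an acceptable anchor for open_by_handle_at() is an assumption here; per the message above, the caller must be in the namespace the handle refers to):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            struct file_handle *fh;
            int mount_id, nsfd, fd;

            fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
            if (!fh)
                    return 1;
            fh->handle_bytes = MAX_HANDLE_SZ;

            /* Encode a handle for the caller's own network namespace. */
            if (name_to_handle_at(AT_FDCWD, "/proc/self/ns/net", fh,
                                  &mount_id, 0) < 0) {
                    perror("name_to_handle_at");
                    return 1;
            }

            /* Decode it again from within the same namespace; an nsfs fd
             * is assumed to serve as the anchor descriptor here. */
            nsfd = open("/proc/self/ns/net", O_RDONLY);
            fd = open_by_handle_at(nsfd, fh, O_RDONLY);
            if (fd < 0)
                    perror("open_by_handle_at");
            return 0;
    }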
12 days | mnt: support ns lookup | Christian Brauner
Move the mount namespace to the generic ns lookup infrastructure. This allows us to drop a bunch of members from struct mnt_namespace. Signed-off-by: Christian Brauner <brauner@kernel.org>