path: root/fs
2020-12-09nfsd: minor nfsd4_change_attribute cleanupJ. Bruce Fields
Minor cleanup, no change in behavior. Also pull out a common helper that'll be useful elsewhere. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-12-09nfsd: simplify nfsd4_change_infoJ. Bruce Fields
It doesn't make sense to carry all these extra fields around. Just make everything into change attribute from the start. This is just cleanup, there should be no change in behavior. Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-12-09nfsd: only call inode_query_iversion in the I_VERSION caseJ. Bruce Fields
inode_query_iversion() can modify i_version. Depending on the exported filesystem, that may not be safe. For example, if you're re-exporting NFS, NFS stores the server's change attribute in i_version and does not expect it to be modified locally. This has been observed causing unnecessary cache invalidations. The way a filesystem indicates that it's OK to call inode_query_iversion() is by setting SB_I_VERSION. So, move the I_VERSION check out of encode_change(), where it's used only in GETATTR responses, to nfsd4_change_attribute(), which is also called for pre- and post-operation attributes. (Note we could also pull the NFSEXP_V4ROOT case into nfsd4_change_attribute() as well. That would actually be a no-op, since pre/post attrs are only used for metadata-modifying operations, and V4ROOT exports are read-only. But we might make the change in the future just for simplicity.) Reported-by: Daire Byrne <daire@dneg.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
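A minimal sketch of the reworked helper, assuming the nfsd4_change_attribute() shape described above; time_to_chattr() stands in for the ctime-only fallback and is an assumption, not a quote from the patch:

	static u64 nfsd4_change_attribute(struct kstat *stat, struct inode *inode)
	{
		if (IS_I_VERSION(inode)) {
			u64 chattr;

			chattr = stat->ctime.tv_sec;
			chattr <<= 30;
			chattr += stat->ctime.tv_nsec;
			chattr += inode_query_iversion(inode);
			return chattr;
		}
		/* SB_I_VERSION not set: never touch i_version, derive the
		 * change attribute from ctime alone (assumed helper). */
		return time_to_chattr(&stat->ctime);
	}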
2020-12-09nfs_common: need lock during iterate through the listCheng Lin
If an element is deleted from the list while it is being iterated over, the iteration falls into an endless loop.

kernel: NMI watchdog: BUG: soft lockup - CPU#4 stuck for 22s! [nfsd:17137]

PID: 17137  TASK: ffff8818d93c0000  CPU: 4   COMMAND: "nfsd"
    [exception RIP: __state_in_grace+76]
    RIP: ffffffffc00e817c  RSP: ffff8818d3aefc98  RFLAGS: 00000246
    RAX: ffff881dc0c38298  RBX: ffffffff81b03580  RCX: ffff881dc02c9f50
    RDX: ffff881e3fce8500  RSI: 0000000000000001  RDI: ffffffff81b03580
    RBP: ffff8818d3aefca0   R8: 0000000000000020   R9: ffff8818d3aefd40
    R10: ffff88017fc03800  R11: ffff8818e83933c0  R12: ffff8818d3aefd40
    R13: 0000000000000000  R14: ffff8818e8391068  R15: ffff8818fa6e4000
    CS: 0010  SS: 0018
 #0 [ffff8818d3aefc98] opens_in_grace at ffffffffc00e81e3 [grace]
 #1 [ffff8818d3aefca8] nfs4_preprocess_stateid_op at ffffffffc02a3e6c [nfsd]
 #2 [ffff8818d3aefd18] nfsd4_write at ffffffffc028ed5b [nfsd]
 #3 [ffff8818d3aefd80] nfsd4_proc_compound at ffffffffc0290a0d [nfsd]
 #4 [ffff8818d3aefdd0] nfsd_dispatch at ffffffffc027b800 [nfsd]
 #5 [ffff8818d3aefe08] svc_process_common at ffffffffc02017f3 [sunrpc]
 #6 [ffff8818d3aefe70] svc_process at ffffffffc0201ce3 [sunrpc]
 #7 [ffff8818d3aefe98] nfsd at ffffffffc027b117 [nfsd]
 #8 [ffff8818d3aefec8] kthread at ffffffff810b88c1
 #9 [ffff8818d3aeff50] ret_from_fork at ffffffff816d1607

The troublemaking element:

crash> lock_manager ffff881dc0c38298
struct lock_manager {
  list = {
    next = 0xffff881dc0c38298,
    prev = 0xffff881dc0c38298
  },
  block_opens = false
}

Fixes: c87fb4a378f9 ("lockd: NLM grace period shouldn't block NFSv4 opens") Signed-off-by: Cheng Lin <cheng.lin130@zte.com.cn> Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
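A sketch of the fix, assuming a grace_lock spinlock guarding the per-net grace list; the identifiers follow the backtrace above but are not guaranteed to match the patch line for line:

	static bool __state_in_grace(struct net *net, bool open)
	{
		struct list_head *grace_list = net_generic(net, grace_net_id);
		struct lock_manager *lm;
		bool in_grace = false;

		spin_lock(&grace_lock);		/* hold the lock across the walk */
		list_for_each_entry(lm, grace_list, list) {
			if (!open || lm->block_opens) {
				in_grace = true;
				break;
			}
		}
		spin_unlock(&grace_lock);	/* concurrent deletions are now safe */

		return in_grace;
	}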
2020-12-09NFSD: Fix 5 seconds delay when doing inter server copyDai Ngo
Since commit b4868b44c5628 ("NFSv4: Wait for stateid updates after CLOSE/OPEN_DOWNGRADE"), every inter-server copy operation suffers a 5-second delay regardless of the size of the copy. The delay comes from nfs_set_open_stateid_locked when the check by nfs_stateid_is_sequential fails because the seqid in both nfs4_state and nfs4_stateid is 0. Fix by modifying nfs4_init_cp_state to return the stateid with seqid 1 instead of 0. This also conforms with section 4.8 of RFC 7862, the relevant paragraph of which reads:

    A copy offload stateid's seqid MUST NOT be zero.  In the context of a
    copy offload operation, it is inappropriate to indicate "the most
    recent copy offload operation" using a stateid with a seqid of zero
    (see Section 8.2.2 of [RFC5661]).  It is inappropriate because the
    stateid refers to internal state in the server and there may be
    several asynchronous COPY operations being performed in parallel on
    the same file by the server.  Therefore, a copy offload stateid with
    a seqid of zero MUST be considered invalid.

Fixes: ce0887ac96d3 ("NFSD add nfs4 inter ssc to nfsd4_copy") Signed-off-by: Dai Ngo <dai.ngo@oracle.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
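The fix boils down to starting the generation at 1; the field name below is an assumption about the copy stateid structure, not a quote from the patch:

	/* nfs4_init_cp_state(): a copy offload stateid's seqid MUST NOT be
	 * zero (RFC 7862, section 4.8), so start the generation at 1. */
	stid->si_generation = 1;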
2020-12-09NFSD: Fix sparse warning in nfs4proc.cChuck Lever
linux/fs/nfsd/nfs4proc.c:1542:24: warning: incorrect type in assignment (different base types)
linux/fs/nfsd/nfs4proc.c:1542:24:    expected restricted __be32 [assigned] [usertype] status
linux/fs/nfsd/nfs4proc.c:1542:24:    got int

Clean-up: The dup_copy_fields() function returns only zero, so make it return void for now, and get rid of the return code check. Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2020-12-09nfsd: Fix message level for normal terminationkazuo ito
The warning message from nfsd terminating normally can confuse system administrators or monitoring software. Though it's not exactly fair to pin-point a commit where it originated, the current form in the current place started to appear in: Fixes: e096bbc6488d ("knfsd: remove special handling for SIGHUP") Signed-off-by: kazuo ito <kzpn200@gmail.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
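A sketch of the kind of change involved, lowering the printk level of the shutdown message; the message text here is assumed from nfsd's shutdown path, not quoted from the patch:

	/* before: normal termination looked like a problem */
	printk(KERN_WARNING "nfsd: last server has exited, flushing export cache\n");

	/* after: plain informational message */
	printk(KERN_INFO "nfsd: last server has exited, flushing export cache\n");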
2020-12-09erofs: force inplace I/O under low memory scenarioGao Xiang
Try to forcibly switch to in-place I/O under a low-memory scenario in order to avoid direct memory reclaim due to cached page allocation. Link: https://lore.kernel.org/r/20201209123717.12430-1-hsiangkao@aol.com Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Gao Xiang <hsiangkao@redhat.com>
2020-12-08fs: quota: fix array-index-out-of-bounds bug by passing correct argument to vfs_cleanup_quota_inode()Anant Thazhemadam
When dquot_resume() was last updated, the argument that got passed to vfs_cleanup_quota_inode() was incorrectly set. If type = -1 and dquot_load_quota_sb() returns a negative value, then vfs_cleanup_quota_inode() gets called with -1 passed as an argument, and this leads to an array-index-out-of-bounds bug. Fix this issue by correctly passing the arguments. Fixes: ae45f07d47cc ("quota: Simplify dquot_resume()") Link: https://lore.kernel.org/r/20201208194338.7064-1-anant.thazhemadam@gmail.com Reported-by: syzbot+2643e825238d7aabb37f@syzkaller.appspotmail.com Tested-by: syzbot+2643e825238d7aabb37f@syzkaller.appspotmail.com CC: stable@vger.kernel.org Signed-off-by: Anant Thazhemadam <anant.thazhemadam@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz>
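A sketch of the corrected call site; the loop body is paraphrased from the description above and the helper arguments are assumptions:

	/* dquot_resume() iterates over all quota types when type == -1; on
	 * failure, clean up the type just processed (cnt), never the
	 * caller's type, which may be the wildcard -1. */
	for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
		if (type != -1 && cnt != type)
			continue;
		if (!sb_has_quota_suspended(sb, cnt))
			continue;
		ret = dquot_load_quota_sb(sb, cnt, dqopt->info[cnt].dqi_fmt_id,
					  flags);
		if (ret < 0)
			vfs_cleanup_quota_inode(sb, cnt);	/* was: type */
	}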
2020-12-08xfs: move kernel-specific superblock validation out of libxfsDarrick J. Wong
A couple of the superblock validation checks apply only to the kernel, so move them to xfs_fc_fill_super before we add the needsrepair "feature", which will prevent the kernel (but not xfsprogs) from mounting the filesystem. This also reduces the diff between kernel and userspace libxfs. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com>
2020-12-08afs: Fix memory leak when mounting with multiple source parametersDavid Howells
There's a memory leak in afs_parse_source() whereby multiple source= parameters overwrite fc->source in the fs_context struct without freeing the previously recorded source. Fix this by only permitting a single source parameter and rejecting with an error all subsequent ones. This was caught by syzbot with the kernel memory leak detector, showing something like the following trace:

unreferenced object 0xffff888114375440 (size 32):
  comm "repro", pid 5168, jiffies 4294923723 (age 569.948s)
  backtrace:
    slab_post_alloc_hook+0x42/0x79
    __kmalloc_track_caller+0x125/0x16a
    kmemdup_nul+0x24/0x3c
    vfs_parse_fs_string+0x5a/0xa1
    generic_parse_monolithic+0x9d/0xc5
    do_new_mount+0x10d/0x15a
    do_mount+0x5f/0x8e
    __do_sys_mount+0xff/0x127
    do_syscall_64+0x2d/0x3a
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: 13fcc6837049 ("afs: Add fs_context support") Reported-by: syzbot+86dc6632faaca40133ab@syzkaller.appspotmail.com Signed-off-by: David Howells <dhowells@redhat.com> cc: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
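A sketch of the guard, using the fs_context invalf() helper; the exact error string is an assumption:

	/* at the top of afs_parse_source(): a second source= parameter
	 * would overwrite and leak the first, so reject it outright */
	if (fc->source)
		return invalf(fc, "kAFS: Multiple sources not supported");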
2020-12-08f2fs: fix race of pending_pages in decompressionDaeho Jeong
f2fs_free_dic() was invoked at the wrong time while f2fs_verify_bio() still needed the dic info, which triggered the kernel panic below. It is caused by a race on the pending_pages value between the decompression and verity logic when the same compression cluster is split across different bios. With split bios, f2fs_verify_bio() ended up decreasing the pending_pages value before it was reset to nr_cpages by f2fs_decompress_pages(), causing the kernel panic.

[ 4416.564763] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
...
[ 4416.896016] Workqueue: fsverity_read_queue f2fs_verity_work
[ 4416.908515] pc : fsverity_verify_page+0x20/0x78
[ 4416.913721] lr : f2fs_verify_bio+0x11c/0x29c
[ 4416.913722] sp : ffffffc019533cd0
[ 4416.913723] x29: ffffffc019533cd0 x28: 0000000000000402
[ 4416.913724] x27: 0000000000000001 x26: 0000000000000100
[ 4416.913726] x25: 0000000000000001 x24: 0000000000000004
[ 4416.913727] x23: 0000000000001000 x22: 0000000000000000
[ 4416.913728] x21: 0000000000000000 x20: ffffffff2076f9c0
[ 4416.913729] x19: ffffffff2076f9c0 x18: ffffff8a32380c30
[ 4416.913731] x17: ffffffc01f966d97 x16: 0000000000000298
[ 4416.913732] x15: 0000000000000000 x14: 0000000000000000
[ 4416.913733] x13: f074faec89ffffff x12: 0000000000000000
[ 4416.913734] x11: 0000000000001000 x10: 0000000000001000
[ 4416.929176] x9 : ffffffff20d1f5c7 x8 : 0000000000000000
[ 4416.929178] x7 : 626d7464ff286b6b x6 : ffffffc019533ade
[ 4416.929179] x5 : 000000008049000e x4 : ffffffff2793e9e0
[ 4416.929180] x3 : 000000008049000e x2 : ffffff89ecfa74d0
[ 4416.929181] x1 : 0000000000000c40 x0 : ffffffff2076f9c0
[ 4416.929184] Call trace:
[ 4416.929187]  fsverity_verify_page+0x20/0x78
[ 4416.929189]  f2fs_verify_bio+0x11c/0x29c
[ 4416.929192]  f2fs_verity_work+0x58/0x84
[ 4417.050667]  process_one_work+0x270/0x47c
[ 4417.055354]  worker_thread+0x27c/0x4d8
[ 4417.059784]  kthread+0x13c/0x320
[ 4417.063693]  ret_from_fork+0x10/0x18

Chao pointed out this can happen by the below race condition.

Thread A            f2fs_post_read_wq                 fsverity_wq
- f2fs_read_multi_pages()
- f2fs_alloc_dic
- dic->pending_pages = 2
- submit_bio()
- submit_bio()
                    - f2fs_post_read_work() handle first bio
                    - f2fs_decompress_work()
                    - __read_end_io()
                    - f2fs_decompress_pages()
                    - dic->pending_pages--
                    - enqueue f2fs_verity_work()
                                                      - f2fs_verity_work() handle first bio
                                                      - f2fs_verify_bio()
                                                      - dic->pending_pages--
                    - f2fs_post_read_work() handle second bio
                    - f2fs_decompress_work()
                    - enqueue f2fs_verity_work()
                    - f2fs_verify_pages()
                    - f2fs_free_dic()
                                                      - f2fs_verity_work() handle second bio
                                                      - f2fs_verify_bio()
                                                      - use-after-free on dic

Signed-off-by: Daeho Jeong <daehojeong@google.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-12-08f2fs: fix to account inline xattr correctly during recoveryChao Yu
During recovery, we may fail to update the inline xattr count correctly. Fix it. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-12-08f2fs: inline: fix wrong inline inode statJack Qiu
The inline inode was not counted in the inline inode statistics in f2fs_recover_inline_data; account for it. Signed-off-by: Jack Qiu <jack.qiu@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-12-08f2fs: inline: correct comment in f2fs_recover_inline_dataJack Qiu
In the third scenario, it should remove data blocks instead of inline_data. Signed-off-by: Jack Qiu <jack.qiu@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-12-08f2fs: don't check PAGE_SIZE again in sanity_check_raw_super()Yangtao Li
Many flash devices read and write a single IO based on a multiple of 4KB, and we support only a 4KB page cache size now. Since we already check the page size in init_f2fs_fs(), remove the page size check from sanity_check_raw_super(). Signed-off-by: Yangtao Li <tiny.windzz@gmail.com> Signed-off-by: Shaohua Liu <liush@allwinnertech.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-12-08f2fs: convert to F2FS_*_INO macroYangtao Li
Use the F2FS_ROOT_INO, F2FS_NODE_INO and F2FS_META_INO macros for better code readability. Signed-off-by: Yangtao Li <tiny.windzz@gmail.com> Signed-off-by: Shaohua Liu <liush@allwinnertech.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2020-12-08Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfsLinus Torvalds
Pull seq_file fix from Al Viro:
 "This fixes a regression introduced in this cycle wrt iov_iter based
  variant for reading a seq_file"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  fix return values of seq_read_iter()
2020-12-08io_uring: fix file leak on error path of io ctx creationHillf Danton
Put file as part of error handling when setting up io ctx to fix memory leaks like the following one.

BUG: memory leak
unreferenced object 0xffff888101ea2200 (size 256):
  comm "syz-executor355", pid 8470, jiffies 4294953658 (age 32.400s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    20 59 03 01 81 88 ff ff 80 87 a8 10 81 88 ff ff   Y..............
  backtrace:
    [<000000002e0a7c5f>] kmem_cache_zalloc include/linux/slab.h:654 [inline]
    [<000000002e0a7c5f>] __alloc_file+0x1f/0x130 fs/file_table.c:101
    [<000000001a55b73a>] alloc_empty_file+0x69/0x120 fs/file_table.c:151
    [<00000000fb22349e>] alloc_file+0x33/0x1b0 fs/file_table.c:193
    [<000000006e1465bb>] alloc_file_pseudo+0xb2/0x140 fs/file_table.c:233
    [<000000007118092a>] anon_inode_getfile fs/anon_inodes.c:91 [inline]
    [<000000007118092a>] anon_inode_getfile+0xaa/0x120 fs/anon_inodes.c:74
    [<000000002ae99012>] io_uring_get_fd fs/io_uring.c:9198 [inline]
    [<000000002ae99012>] io_uring_create fs/io_uring.c:9377 [inline]
    [<000000002ae99012>] io_uring_setup+0x1125/0x1630 fs/io_uring.c:9411
    [<000000008280baad>] do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
    [<00000000685d8cf0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported-by: syzbot+71c4697e27c99fddcf17@syzkaller.appspotmail.com Fixes: 0f2122045b94 ("io_uring: don't rely on weak ->files references") Cc: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Hillf Danton <hdanton@sina.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-12-08btrfs: introduce ZONED feature flagNaohiro Aota
This patch introduces the ZONED incompat flag. The flag indicates that the volume management will satisfy the constraints imposed by host-managed zoned block devices (aligned chunk allocation, append-only updates, reset zone after filled). As zoned support will land incrementally while core infrastructure like super block writes, tree-log, and raid support is enhanced, the feature will appear in sysfs only on debug builds. It will be enabled once the support is feature complete and applications can reliably check whether zoned support is present or not. Reviewed-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: return bool from btrfs_should_end_transactionNikolay Borisov
Results in slightly smaller code.

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-11 (-11)
Function                                     old     new   delta
btrfs_should_end_transaction                  96      85     -11
Total: Before=20070, After=20059, chg -0.05%

Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: return bool from should_end_transactionNikolay Borisov
Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: remove err variable from do_relocationNikolay Borisov
It simply gets assigned to 'ret' in case of errors. The flow of the while loop is not changed by this commit since the few call sites that 'goto next' will simply break from the loop. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: eliminate err variable from merge_reloc_rootNikolay Borisov
In most cases when an error is returned from a function 'ret' is simply assigned to 'err'. There is only one case where walk_up_reloc_tree can return a positive value - in this case the code breaks from the loop and ret is going to get its return value from btrfs_cow_block - either 0 or negative. This retains the old logic of how 'err' used to be set at this call site. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: remove err variable from btrfs_delete_subvolumeNikolay Borisov
Use only a single 'ret' to control whether we should abort the transaction or not. That's fine, because if we abort a transaction then btrfs_end_transaction will return the same value as passed to btrfs_abort_transaction. No semantic changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: unlock path before checking if extent is shared during nocow writebackFilipe Manana
When we are attempting to start writeback for an existing extent in NOCOW mode, at run_delalloc_nocow(), we must check if the extent is shared, and if it is, fall back to a COW write. However we do such a check while still holding a read lock on the leaf that contains the file extent item, and that check, the call to btrfs_cross_ref_exist(), can take some time because:

1) It needs to do a search on the extent tree, which obviously takes some time, specially if delayed references are being run at the moment, as we can block when trying to lock currently write locked btree nodes;

2) It needs to check the delayed references for any existing reference for our data extent; this requires acquiring the delayed references' spinlock and maybe blocking on the mutex of a delayed reference head in the case where there is a delayed reference for our data extent. In the worst case it makes us release the path on the extent tree and retry the whole process again (going back to step 1).

There are other operations we do while holding the leaf locked that can take some significant time as well (specially all together):

* btrfs_extent_readonly() - to check if the block group containing the extent is currently in RO mode. This requires taking a spinlock and searching for the block group in a rbtree that can be big on large filesystems;

* csum_exist_in_range() - to search if there are any checksums in the csum tree for the extent. Like before, this can take some time if we are in a filesystem that has both COW and NOCOW files, in which case the csum tree is not empty;

* btrfs_inc_nocow_writers() - increment the number of nocow writers in the block group that contains the data extent. Needs to acquire a spinlock and search for the block group in a rbtree that can be big on large filesystems.

So just unlock the leaf (release the path) before doing all those checks, since we do not need it anymore. In case we can not do a NOCOW write for the extent, due to any of those checks failing, and the writeback range goes beyond that extent's length, we will do another btree search for the next file extent item.
The following script that calls dbench was used to measure the impact of this change on a VM with 8 CPUs, 16Gb of ram, using a raw NVMe device directly (no intermediary filesystem on the host) and using a non-debug kernel (default configuration on Debian):

$ cat test-dbench.sh
#!/bin/bash

DEV=/dev/sdk
MNT=/mnt/sdk
MOUNT_OPTIONS="-o ssd -o nodatacow"
MKFS_OPTIONS="-m single -d single"

mkfs.btrfs -f $MKFS_OPTIONS $DEV
mount $MOUNT_OPTIONS $DEV $MNT

dbench -D $MNT -t 300 64

umount $MNT

Before this change:

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX    9326331     0.317   399.957
 Close        6851198     0.002     6.402
 Rename        394894     2.621   402.819
 Unlink       1883131     0.931   398.082
 Deltree          256    19.160   303.580
 Mkdir            128     0.003     0.016
 Qpathinfo    8452314     0.068   116.133
 Qfileinfo    1481921     0.001     5.081
 Qfsinfo      1549963     0.002     4.444
 Sfileinfo     759679     0.084    17.079
 Find         3268168     0.396   118.196
 WriteX       4653310     0.056   110.993
 ReadX       14618818     0.005    23.314
 LockX          30364     0.003     0.497
 UnlockX        30364     0.002     1.720
 Flush         653619    16.954   569.299

Throughput 966.651 MB/sec  64 clients  64 procs  max_latency=569.377 ms

After this change:

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX    9710433     0.302   232.449
 Close        7132948     0.002    11.496
 Rename        411144     2.452   131.805
 Unlink       1960961     0.893   230.383
 Deltree          256    14.858   198.646
 Mkdir            128     0.002     0.005
 Qpathinfo    8800890     0.066   111.588
 Qfileinfo    1542556     0.001     3.852
 Qfsinfo      1613835     0.002     5.483
 Sfileinfo     790871     0.081    19.492
 Find         3402743     0.386   120.185
 WriteX       4842918     0.054   179.312
 ReadX       15220407     0.005    32.435
 LockX          31612     0.003     1.533
 UnlockX        31612     0.002     1.047
 Flush         680567    16.320   463.323

Throughput 1016.59 MB/sec  64 clients  64 procs  max_latency=463.327 ms

+5.0% throughput, -20.5% max latency

Also, the following test using fio was run:

$ cat test-fio.sh
#!/bin/bash

DEV=/dev/sdk
MNT=/mnt/sdk
MOUNT_OPTIONS="-o ssd -o nodatacow"
MKFS_OPTIONS="-d single -m single"

if [ $# -ne 4 ]; then
    echo "Use $0 NUM_JOBS FILE_SIZE FSYNC_FREQ BLOCK_SIZE"
    exit 1
fi

NUM_JOBS=$1
FILE_SIZE=$2
FSYNC_FREQ=$3
BLOCK_SIZE=$4

cat <<EOF > /tmp/fio-job.ini
[writers]
rw=randwrite
fsync=$FSYNC_FREQ
fallocate=none
group_reporting=1
direct=0
bs=$BLOCK_SIZE
ioengine=sync
size=$FILE_SIZE
directory=$MNT
numjobs=$NUM_JOBS
EOF

echo
echo "Using fio config:"
echo
cat /tmp/fio-job.ini
echo
echo "mount options: $MOUNT_OPTIONS"
echo

mkfs.btrfs -f $MKFS_OPTIONS $DEV > /dev/null
mount $MOUNT_OPTIONS $DEV $MNT

echo "Creating nodatacow files before fio runs..."
for ((i = 0; i < $NUM_JOBS; i++)); do
    xfs_io -f -c "pwrite -b 128M 0 $FILE_SIZE" "$MNT/writers.$i.0"
done
sync

fio /tmp/fio-job.ini

umount $MNT

Before this change:

$ ./test-fio.sh 16 512M 2 4K
(...)
WRITE: bw=28.3MiB/s (29.6MB/s), 28.3MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=8192MiB (8590MB), run=289800-289800msec

After this change:

$ ./test-fio.sh 16 512M 2 4K
(...)
WRITE: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=8192MiB (8590MB), run=262845-262845msec

+9.7% throughput, -9.8% runtime

Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: tree-checker: annotate all error branches as unlikelyDavid Sterba
The tree checker is called many times as it verifies metadata at read/write time. The checks follow a simple pattern:

	if (error_condition) {
		report_error();
		return -EUCLEAN;
	}

All the error reporting functions are annotated as __cold, which is supposed to hint the compiler to move the statement block out of the hot path. This does not seem to happen that often. As the error condition is expected to be false almost always, we can annotate it with 'unlikely' as this satisfies one of the few use cases for the annotation. The expected outcome is a stronger hint to the compiler to reorder the checks:

	test
	jump to exit
	test
	jump to exit
	...

which can be observed in the asm of e.g. check_dir_item, btrfs_check_chunk_valid, check_root_item or check_leaf. There's a measurable run time improvement reported by Josef: the testing workload went from 655 MiB/s to 677 MiB/s, which is about +3%. There should be no functional changes, but some of the conditions have been rewritten to produce a more readable result; some lines are longer than 80 characters for the sake of readability. Signed-off-by: David Sterba <dsterba@suse.com>
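The before/after pattern as a sketch; the specific check and error message are representative, not lifted from a particular helper:

	/* before */
	if (item_size < sizeof(*di)) {
		dir_item_err(leaf, slot, "dir item size too small");
		return -EUCLEAN;
	}

	/* after: explicitly mark the error branch as the cold path */
	if (unlikely(item_size < sizeof(*di))) {
		dir_item_err(leaf, slot, "dir item size too small");
		return -EUCLEAN;
	}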
2020-12-08btrfs: remove stub device info from messages when we have no fs_infoDavid Sterba
With a NULL fs_info the helpers will print something like:

    BTRFS error (device <unknown>): ...

This can happen in contexts where fs_info is not available at all or it's potentially unsafe due to object lifetime. The <unknown> stub does not bring much information and with the prefix makes the message unnecessarily longer. Remove it for the NULL fs_info case.

    BTRFS error: ...

Callers can add the device information to the message itself if needed. Reviewed-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: use detach_page_private() in alloc_extent_buffer()Qu Wenruo
In alloc_extent_buffer(), after we get a page from the btree inode, we check if that page has a private pointer attached. If attached, we check if the existing extent buffer has proper refs. If not (the eb is being freed), we will detach that private eb pointer. The point here is, we are detaching that eb pointer by calling:

- ClearPagePrivate()
- put_page()

The put_page() here is especially confusing, as it's decreasing the ref taken by attach_page_private(). Without knowing that, it looks like the put_page() is for the find_or_create_page() call, confusing the reader. Since we're always modifying page private with attach_page_private() and detach_page_private(), the only open-coded detach here is really confusing. Fix it by calling detach_page_private(). Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
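What the conversion amounts to, sketched from the helpers' documented behavior:

	/* before: open-coded detach, where put_page() pairs with the
	 * reference taken by attach_page_private(), not with
	 * find_or_create_page() */
	ClearPagePrivate(page);
	put_page(page);

	/* after: one helper clears the flag, resets page->private and
	 * drops the attach-time reference */
	detach_page_private(page);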
2020-12-08btrfs: use nodesize to determine if we need readahead in btrfs_lookup_bio_sumsQu Wenruo
In btrfs_lookup_bio_sums(), if the bio is pretty large, we want to start readahead in the csum tree. However the threshold is a magic constant, (PAGE_SIZE * 8), dating from the initial btrfs merge. The meaning of the value is pretty hard to guess, especially since it comes from the times when 4K sectorsize was the default and only CRC32C was supported. For the most common btrfs setup, CRC32C csum and 4K sectorsize, it means just a 32K read would kick readahead, while the csum itself is only 32 bytes in size. Now let's be more reasonable by taking both csum size and node size into consideration. If the csum size for the bio is larger than one leaf, then we kick the readahead. This means for the current default btrfs, the threshold will be 16M. This change should not change performance observably, thus this is mostly a readability enhancement. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
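A sketch of the new threshold; the variable names (nblocks, csum_size) are assumptions about what is in scope in btrfs_lookup_bio_sums():

	/* before: magic constant from the 4K-sectorsize, CRC32C-only days */
	if (bio->bi_iter.bi_size > PAGE_SIZE * 8)
		path->reada = READA_FORWARD;

	/* after: kick readahead only when the checksums for this bio
	 * would overflow a single csum tree leaf */
	if (nblocks * csum_size > fs_info->nodesize)
		path->reada = READA_FORWARD;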
2020-12-08btrfs: only clear EXTENT_LOCK bit in extent_invalidatepageQu Wenruo
extent_invalidatepage() will try to clear all possible bits since it calls clear_extent_bit() with delete == 1. This is currently fine, since for the btree io tree it only utilizes the EXTENT_LOCKED bit. But this could be a problem for later subpage support, which will utilize extra io tree bits to represent additional info. This patch just converts that clear_extent_bit() to unlock_extent_cached(). For the current code, since only the EXTENT_LOCKED bit is utilized, this doesn't change the behavior, but it provides a much cleaner basis for incoming subpage support. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
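The conversion, sketched with the argument lists abbreviated to the relevant parts:

	/* before: delete == 1 asks clear_extent_bit() to clear every bit
	 * set in the range, whatever it may be */
	clear_extent_bit(tree, start, end, EXTENT_LOCKED, 1, 1, &cached_state);

	/* after: release only the lock bit, leaving any future
	 * subpage-related bits untouched */
	unlock_extent_cached(tree, start, end, &cached_state);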
2020-12-08btrfs: remove unused parameter phy_offset from btrfs_validate_metadata_bufferQu Wenruo
Parameter @phy_offset is the offset against bio->bi_iter.bi_sector. @phy_offset is mostly for data IO to look up the csum in btrfs_io_bio. But for metadata it's completely useless, as metadata stores its own csum in its header, so we can remove it. Note: parameters @start and @end are not utilized at all for the current sectorsize == PAGE_SIZE case, as we can grab the eb directly from the page. But those two parameters are very important for later subpage support, thus @start/@end are not touched here. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: scrub: remove the anonymous structure from scrub_pageQu Wenruo
That anonymous structure serves no special purpose; just replace it with regular members. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: use fixed width int type for extent_state::stateQu Wenruo
Currently the type is unsigned int which could change its width depending on the architecture. We need up to 32 bits so make it explicit. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: introduce helper to handle page status update in end_bio_extent_readpage()Qu Wenruo
Introduce a new helper to handle page status updates in end_bio_extent_readpage(). This will later be used for subpage support, where the page status update can be more complex than now. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: add structure to keep track of extent range in end_bio_extent_readpageQu Wenruo
In end_bio_extent_readpage() we had a strange dance around extent_start/extent_len. Hidden behind the strange dance is, it's just calling endio_readpage_release_extent() on each bvec range. Here is an example to explain the original work flow. Bio is for inode 257, containing 2 pages, for range [1M, 1M+8K):

end_bio_extent_readpage() entered
|- extent_start = 0;
|- extent_end = 0;
|- bio_for_each_segment_all() {
|  |- /* Got the 1st bvec */
|  |- start = SZ_1M;
|  |- end = SZ_1M + SZ_4K - 1;
|  |- update = 1;
|  |- if (extent_len == 0) {
|  |  |- extent_start = start;          /* SZ_1M */
|  |  |- extent_len = end + 1 - start;  /* SZ_4K */
|  |  }
|  |
|  |- /* Got the 2nd bvec */
|  |- start = SZ_1M + SZ_4K;
|  |- end = SZ_1M + SZ_8K - 1;
|  |- update = 1;
|  |- if (extent_start + extent_len == start) {
|  |  |- extent_len += end + 1 - start; /* SZ_8K */
|  |  }
|  } /* All bio vec iterated */
|
|- if (extent_len) {
   |- endio_readpage_release_extent(tree, extent_start, extent_len, update);
      /* extent_start == SZ_1M, extent_len == SZ_8K, uptodate = 1 */

As the above flow shows, the existing code in end_bio_extent_readpage() accumulates extent_start/extent_len, and when the contiguous range stops, calls endio_readpage_release_extent() for the whole range.

However the current behavior leaves some things not really considered:

- The inode can change. For a bio, its pages don't need to have contiguous page_offset. This means even pages from different inodes can be packed into one bio.

- A bvec can cross the page boundary. There is a feature called multi-page bvec, where bvec->bv_len can go beyond the bvec->bv_page boundary.

- Poor readability.

This patch addresses the problems by:

- Introducing a proper structure, processed_extent, to record the processed extent range

- Integrating the inode/start/end/uptodate checks into endio_readpage_release_extent()

- Adding more comments on each step. This should greatly improve readability; now in end_bio_extent_readpage() there are only two endio_readpage_release_extent() calls.

- Adding an inode check for contiguity. Now we also ensure the inode is the same one before checking if the range is contiguous.

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
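The tracking structure as described above; this mirrors the commit's own naming, though the exact field layout is a sketch:

	struct processed_extent {
		struct btrfs_inode *inode; /* inode the current range belongs to */
		u64 start;                 /* start of the contiguous range */
		u64 end;                   /* end of the range, inclusive */
		bool uptodate;             /* whether the whole range is uptodate */
	};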
2020-12-08btrfs: tests: remove invalid extent-io testQu Wenruo
In extent-io-test, there are two invalid tests:

- Invalid nodesize for test_eb_bitmaps(). Instead of the sectorsize and nodesize combination passed in, we're always using a hand-crafted nodesize, e.g.:

	len = (sectorsize < BTRFS_MAX_METADATA_BLOCKSIZE)
		? sectorsize * 4 : sectorsize;

  In the above case, if we have a 32K page size, then we will get a length of 128K, which is beyond the max node size, and obviously invalid. The common page size goes up to 64K, so we haven't hit that in practice.

- Invalid extent buffer bytenr. For a 64K page size, the only combination we're going to test is sectorsize = nodesize = 64K. However, in that case we will try to test an eb whose bytenr is not sectorsize-aligned:

	/* Do it over again with an extent buffer which isn't page-aligned. */
	eb = __alloc_dummy_extent_buffer(fs_info, nodesize / 2, len);

  Sector alignment is a hard requirement for any sector size. The only exception is the superblock, but anything else should follow sector size alignment. This is definitely an invalid test case.

This patch will fix both problems by:

- Honoring the sectorsize/nodesize combination. Now we won't bother to hand-craft the length and use it as nodesize.

- Using sectorsize as the 2nd run extent buffer start. This tests the case where the extent buffer is aligned to sectorsize but not always aligned to nodesize.

Please note that later subpage-related cleanup will reduce extent_buffer::pages[] to exactly what we need, making the sector-unaligned extent buffer operations cause problems. Since only the extent_io self tests utilize this, this patch is required for all later cleanup/refactoring. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: sysfs: remove unneeded semicolonTom Rix
A semicolon is not needed after a switch statement. Signed-off-by: Tom Rix <trix@redhat.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: simplify return values in setup_nodes_for_searchNikolay Borisov
The function is needlessly convoluted. Fix that by:

* removing the redundant sret variable definition in both if arms

* replacing the again/done labels with direct return statements; the function is short enough and doesn't do anything special upon exit

* removing the BUG_ON on split_node returning a positive number - it can't happen, as split_node returns either 0 or a negative error code.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: remove useless return value statement in split_nodeNikolay Borisov
At the point when we set 'ret = 0' it's guaranteed that the function is going to return 0 so directly return 0. No functional changes. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: remove unnecessary attempt to drop extent maps after adding inline extentFilipe Manana
At inode.c:cow_file_range_inline(), after we insert the inline extent in the fs/subvolume btree, we call btrfs_drop_extent_cache() to drop all extent maps in the file range, however that is not necessary because we have already done it in the call to btrfs_drop_extents(), which calls btrfs_drop_extent_cache() for us, and since at this point we have the file range locked in the inode's iotree (we are in the writeback path), we know no other task can come in and read stale file extent items or find none and therefore create either stale extent maps or an extent map that represents a hole. So just remove that unnecessary call to btrfs_drop_extent_cache(), as it's doing nothing and only wasting time. This call has been around since 2008, introduced in commit c8b978188c9a ("Btrfs: Add zlib compression support"), but even back then it seems it was not necessary, since we had the range locked in the inode's iotree and the call to btrfs_drop_extents() already used to always call btrfs_drop_extent_cache(). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: stop incrementing log batch when joining log transactionFilipe Manana
When joining a log transaction we acquire the root's log mutex, then increment the root's log batch and log writers counters while holding the mutex. However we don't need to increment the log batch there, because we are holding the mutex and incremented the log writers counter as well, so any other task trying to sync log will wait for the current task to finish its logging and still achieve the desired log batching. Since the log batch counter is an atomic counter and is incremented twice at the very beginning of the fsync callback (btrfs_sync_file()), once before flushing delalloc and once again after waiting for writeback to complete, eliminating its increment when joining the log transaction may provide some performance gains in case we have multiple concurrent tasks doing fsyncs against different files in the same subvolume, as it reduces contention on the atomic (locking the cacheline and bouncing it). When testing fio with 32 jobs, on a 8 cores VM, doing fsyncs against different files of the same subvolume, on top of a zram device, I could consistently see gains (higher throughput) between 1% to 2%, which is a very low value and possibly hard to be observed with a real device (I couldn't observe consistent gains with my low/mid end NVMe device). So this change is mostly motivated to just simplify the logic, as updating the log batch counter is only relevant when an fsync starts and while not holding the root's log mutex. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: skip unnecessary searches for xattrs when logging an inodeFilipe Manana
Every time we log an inode we look up in the fs/subvol tree for xattrs, and if we have any, log them into the log tree. However it is very common to have inodes without any xattrs, so doing the search wastes time, but more importantly it adds contention on the fs/subvol tree locks, either making the logging code block and wait for tree locks or making other concurrent operations block and wait. The most typical use cases where xattrs are used are when capabilities or ACLs are defined for an inode, or when SELinux is enabled. This change makes the logging code detect when an inode does not have xattrs and skip the xattrs search the next time the inode is logged, unless the inode is evicted and loaded again or a xattr is added to the inode. Therefore we skip the search for xattrs on inodes that don't ever have xattrs and are fsynced with some frequency.

The following script that calls dbench was used to measure the impact of this change on a VM with 8 CPUs, 16Gb of ram, using a raw NVMe device directly (no intermediary filesystem on the host) and using a non-debug kernel (default configuration on Debian distributions):

$ cat test.sh
#!/bin/bash

DEV=/dev/sdk
MNT=/mnt/sdk
MOUNT_OPTIONS="-o ssd"

mkfs.btrfs -f -m single -d single $DEV
mount $MOUNT_OPTIONS $DEV $MNT

dbench -D $MNT -t 200 40

umount $MNT

The results before this change:

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX    5761605     0.172   312.057
 Close        4232452     0.002    10.927
 Rename        243937     1.406   277.344
 Unlink       1163456     0.631   298.402
 Deltree          160    11.581   221.107
 Mkdir             80     0.003     0.005
 Qpathinfo    5221410     0.065   122.309
 Qfileinfo     915432     0.001     3.333
 Qfsinfo       957555     0.003     3.992
 Sfileinfo     469244     0.023    20.494
 Find         2018865     0.448   123.659
 WriteX       2874851     0.049   118.529
 ReadX        9030579     0.004    21.654
 LockX          18754     0.003     4.423
 UnlockX        18754     0.002     0.331
 Flush         403792    10.944   359.494

Throughput 908.444 MB/sec  40 clients  40 procs  max_latency=359.500 ms

The results after this change:

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX    6442521     0.159   230.693
 Close        4732357     0.002    10.972
 Rename        272809     1.293   227.398
 Unlink       1301059     0.563   218.500
 Deltree          160     7.796    54.887
 Mkdir             80     0.008     0.478
 Qpathinfo    5839452     0.047   124.330
 Qfileinfo    1023199     0.001     4.996
 Qfsinfo      1070760     0.003     5.709
 Sfileinfo     524790     0.033    21.765
 Find         2257658     0.314   125.611
 WriteX       3211520     0.040   232.135
 ReadX       10098969     0.004    25.340
 LockX          20974     0.003     1.569
 UnlockX        20974     0.002     3.475
 Flush         451553    10.287   331.037

Throughput 1011.77 MB/sec  40 clients  40 procs  max_latency=331.045 ms

+10.8% throughput, -8.2% max latency

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
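A sketch of the caching idea, assuming a runtime flag bit on the in-memory btrfs inode; the flag name and helper below are illustrative, not quoted from the patch:

	/* in the xattr-logging path, sketched */
	if (test_bit(BTRFS_INODE_NO_XATTRS, &inode->runtime_flags))
		return 0;	/* known to have no xattrs, skip the tree search */

	nr_found = copy_inode_xattrs_to_log(trans, inode, path); /* hypothetical helper */
	if (nr_found == 0)
		set_bit(BTRFS_INODE_NO_XATTRS, &inode->runtime_flags);
	/* the bit is cleared when a xattr is set and lost on inode eviction */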
2020-12-08btrfs: merge __set_extent_bit and set_extent_bitNikolay Borisov
There are only 2 direct calls to set_extent_bit outside of extent-io - in btrfs_find_new_delalloc_bytes and btrfs_truncate_block, the rest are thin wrappers around __set_extent_bit. This adds unnecessary indirection and just makes it more annoying when looking at the various extent bit manipulation functions. This patch renames __set_extent_bit to set_extent_bit effectively removing a level of indirection. No functional changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> [ reformat and remove __must_check ] Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: make btrfs_update_inode_fallback take btrfs_inodeNikolay Borisov
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: make btrfs_cont_expand take btrfs_inodeNikolay Borisov
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: make btrfs_truncate_block take btrfs_inodeNikolay Borisov
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: make btrfs_insert_replace_extent take btrfs_inodeNikolay Borisov
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: make find_first_non_hole take btrfs_inodeNikolay Borisov
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-12-08btrfs: make maybe_insert_hole take btrfs_inodeNikolay Borisov
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>