path: root/fs
2017-08-22xfs: Add infrastructure needed for error propagation during buffer IO failureCarlos Maiolino
With the current code, XFS never re-submits a failed buffer for IO, because the failed item in the buffer is kept in the flush-locked state forever. To be able to resubmit a log item for IO, we need a way to mark an item as failed if, for any reason, the buffer it belongs to fails during writeback. Add a new log item callback to be used after an IO completion failure and make the needed cleanups. Reviewed-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
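A rough sketch of the kind of per-item failure hook this describes; the member and field names below are illustrative assumptions, not the exact upstream API:

    /*
     * Illustrative sketch only: a callback run from buffer I/O completion
     * when writeback fails, so each log item attached to the buffer can be
     * marked failed (and later resubmitted) instead of staying flush-locked.
     */
    struct xfs_item_ops {
            /* ... existing callbacks (iop_format, iop_pin, iop_push, ...) */
            void    (*iop_error)(struct xfs_log_item *lip, struct xfs_buf *bp);
    };

    static void
    xfs_buf_do_callbacks_fail(struct xfs_buf *bp)
    {
            struct xfs_log_item     *lip;

            /* walk every item attached to the failed buffer and flag it */
            for (lip = bp->b_fspriv; lip; lip = lip->li_bio_list)
                    if (lip->li_ops->iop_error)
                            lip->li_ops->iop_error(lip, bp);
    }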
2017-08-22xfs: toggle readonly state around xfs_log_mount_finishEric Sandeen
When we do log recovery on a readonly mount, unlinked inode processing does not happen due to the readonly checks in xfs_inactive(), which are trying to prevent any I/O on a readonly mount. This is misguided - we do I/O on readonly mounts all the time, for consistency; for example, log recovery. So do the same RDONLY flag twiddling around xfs_log_mount_finish() as we do around xfs_log_mount(), for the same reason. This all cries out for a big rework but for now this is a simple fix to an obvious problem. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
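The flag twiddling referred to follows the same save/clear/restore shape already used around xfs_log_mount(); a hedged sketch, not the literal hunk:

    /*
     * Hedged sketch of the described pattern: temporarily drop the
     * read-only flag so recovery-time I/O (e.g. unlinked inode
     * processing) can proceed, then put it back.
     */
    bool    readonly = (mp->m_flags & XFS_MOUNT_RDONLY);

    if (readonly)
            mp->m_flags &= ~XFS_MOUNT_RDONLY;
    error = xfs_log_mount_finish(mp);
    if (readonly)
            mp->m_flags |= XFS_MOUNT_RDONLY;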
2017-08-22xfs: write unmount record for ro mountsEric Sandeen
There are dueling comments in the xfs code about intent for log writes when unmounting a readonly filesystem. In xfs_mountfs, we see the intent: /* * Now the log is fully replayed, we can transition to full read-only * mode for read-only mounts. This will sync all the metadata and clean * the log so that the recovery we just performed does not have to be * replayed again on the next mount. */ and it calls xfs_quiesce_attr(), but by the time we get to xfs_log_unmount_write(), it returns early for a RDONLY mount: * Don't write out unmount record on read-only mounts. Because of this, sequential ro mounts of a filesystem with a dirty log will replay the log each time, which seems odd. Fix this by writing an unmount record even for RO mounts, as long as norecovery wasn't specified (don't write a clean log record if a dirty log may still be there!) and the log device is writable. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
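In other words, the early return in xfs_log_unmount_write() becomes conditional on norecovery and on the log device being writable; roughly (helper usage shown as an assumption):

    /*
     * Hedged sketch of the described condition, not the exact code: only
     * skip the unmount record when recovery was disabled (the log may
     * still be dirty) or the log device itself cannot be written.
     */
    if ((mp->m_flags & XFS_MOUNT_NORECOVERY) ||
        xfs_readonly_buftarg(log->l_targ))
            return 0;
    /* otherwise write the unmount record, even on read-only mounts */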
2017-08-22btrfs: submit superblock io with REQ_META and REQ_PRIODavid Sterba
The superblock is also metadata of the filesystem so the relevant IO should be tagged as such. We also tag it as high priority, as it's the last block committed for metadata from a given transaction. Any delays would effectively block the whole transaction, also blocking any other operation holding the device_list_mutex. Reviewed-by: Josef Bacik <jbacik@fb.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
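The gist is OR-ing the two flags into the op flags used for the superblock write; an illustrative fragment (the exact submission helper used for superblocks may differ):

    /* Illustrative only: tag the superblock write as metadata and as
     * high priority; surrounding submission code elided. */
    bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_META | REQ_PRIO;
    submit_bio(bio);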
2017-08-21Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/netDavid S. Miller
2017-08-21f2fs: introduce discard_granularity sysfs entryChao Yu
Commit d618ebaf0aa8 ("f2fs: enable small discard by default") enables f2fs to issue 4K discards in real-time discard mode. However, issuing smaller discards may cost more device lifetime while releasing less free space in the flash device. Since f2fs has the ability to separate hot/cold data and perform garbage collection, we can expect that small invalid regions will soon grow through OPU, deletion or garbage collection of valid data, so it's better to delay or skip issuing small discards; this helps reduce excessive consumption of IO bandwidth and flash storage lifetime. This patch makes f2fs select 64K as its default minimal granularity, and issue discards only at sizes not smaller than that granularity. It also exposes the discard granularity as a sysfs entry for configuration in different scenarios. Jaegeuk Kim: We must issue all the accumulated discard commands when fstrim is called. So, I've added pend_list_tag[] to indicate whether we should issue the commands or not. If the tag is set to P_ACTIVE or P_TRIM, we have to issue them. P_TRIM is set once at a time, on an fstrim trigger. In addition, issue_discard_thread was being called too often due to the number of discard commands remaining in the pending list, so I added a timer to control it, like the gc_thread. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
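The resulting policy boils down to a size filter that an fstrim run can override; a hedged sketch of that decision, with names that are illustrative rather than the f2fs internals:

    /*
     * Hedged sketch, not the actual f2fs code: small discards are held
     * back until they reach the configured granularity, unless fstrim
     * forces all pending commands out (the P_TRIM case described above).
     */
    static bool should_issue_discard(unsigned int nr_blks,
                                     unsigned int granularity_blks,
                                     bool fstrim_forced)
    {
            if (fstrim_forced)
                    return true;
            return nr_blks >= granularity_blks;     /* default: 64K worth */
    }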
2017-08-21f2fs: remove unused function overprovision_sectionsYunlong Song
Signed-off-by: Yunlong Song <yunlong.song@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-08-21f2fs: check hot_data for roll-forward recoveryJaegeuk Kim
We need to check HOT_DATA to truncate any previous data block when doing roll-forward recovery. Cc: <stable@vger.kernel.org> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-08-21f2fs: add tracepoint for f2fs_gcChao Yu
This patch adds tracepoint for f2fs_gc. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-08-21f2fs: retry to revoke atomic commit in -ENOMEM caseChao Yu
During atomic committing, if we encounter -ENOMEM in the revoke path, it's better to give a chance to retry revoking. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-08-21f2fs: let fill_super handle roll-forward errorsJaegeuk Kim
If we set CP_ERROR_FLAG on a roll-forward error, f2fs can no longer proceed with any IO due to f2fs_cp_error(). But, for example, if some stale data is involved in the roll-forward process, we can get -ENOENT, leaving the filesystem stuck. If we get any error, let fill_super set SBI_NEED_FSCK and try to recover back to a stable point. Cc: <stable@vger.kernel.org> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-08-21f2fs: merge equivalent flags F2FS_GET_BLOCK_[READ|DIO]Qiuyang Sun
Currently, the two flags F2FS_GET_BLOCK_[READ|DIO] are totally equivalent and can be used interchangeably in all scenarios they are involved in. Neither of the flags is referenced in f2fs_map_blocks(), making them both the default case. To remove the ambiguity, this patch merges both flags into F2FS_GET_BLOCK_DEFAULT, and introduces an enum for all distinct flags. Signed-off-by: Qiuyang Sun <sunqiuyang@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
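A sketch of the flag set after the merge; only F2FS_GET_BLOCK_DEFAULT is named by the changelog, the remaining members are assumptions about the other existing flags:

    /* Sketch only: enum of the distinct mapping flags after the merge. */
    enum {
            F2FS_GET_BLOCK_DEFAULT,         /* was F2FS_GET_BLOCK_[READ|DIO] */
            F2FS_GET_BLOCK_FIEMAP,          /* assumed existing flag */
            F2FS_GET_BLOCK_BMAP,            /* assumed existing flag */
            F2FS_GET_BLOCK_PRE_DIO,         /* assumed existing flag */
            F2FS_GET_BLOCK_PRE_AIO,         /* assumed existing flag */
    };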
2017-08-21f2fs: support journalled quotaChao Yu
This patch enables f2fs to accept quota configuration through mount options: - {usr,grp,prj}jquota=<quota file path> - jqfmt=<quota type> Then, in the ->mount flow, we can recover the quota files during log replay; with this, journalled quota is supported. Signed-off-by: Chao Yu <yuchao0@huawei.com> [Jaegeuk Kim: Fix wrong return values.] Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-08-21btrfs: remove unnecessary memory barrier in btrfs_direct_IONikolay Borisov
Commit 38851cc19adb ("Btrfs: implement unlocked dio write") implemented unlocked dio writes, allowing multiple dio writers to write to non-overlapping, non-eof-extending regions. In doing so it also introduced a broken memory barrier. It is broken for two reasons: 1. Memory barriers _MUST_ always be paired, which is clearly not the case here. 2. Checkpatch actually produces a warning if a memory barrier is introduced without a comment explaining how it is paired. Specifically, inode::i_dio_count as wrapped by inode_dio_begin has no explicit barrier semantics attached, so removing the barrier is fine since the atomic is used in the common waiter/wakeup pattern. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> [ enhance changelog ] Signed-off-by: David Sterba <dsterba@suse.com>
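Concretely, the change amounts to dropping an unpaired barrier after the i_dio_count increment; an illustrative before/after:

    /* Before (illustrative): a barrier with no documented pairing */
    inode_dio_begin(inode);
    smp_mb__after_atomic();

    /*
     * After: the plain atomic increment is enough; the waiter side
     * (inode_dio_wait()) already provides the ordering needed by the
     * common waiter/wakeup pattern.
     */
    inode_dio_begin(inode);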
2017-08-21btrfs: remove superfluous chunk_tree argument from btrfs_alloc_dev_extentNikolay Borisov
Currently this function is always called with the object id of the root key of the chunk_tree, which is always BTRFS_CHUNK_TREE_OBJECTID. So let's subsume it straight into the function itself. No functional change. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: Remove chunk_objectid parameter of btrfs_alloc_dev_extentNikolay Borisov
The function is always called with chunk_objectid set to BTRFS_FIRST_CHUNK_TREE_OBJECTID. Let's collapse the parameter into the function itself. No functional changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: pass fs_info to btrfs_del_root instead of tree_rootJeff Mahoney
btrfs_del_root always uses the tree_root. Let's pass fs_info instead. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: add one more sanity check for shared ref typeLiu Bo
Every shared ref has a parent tree block, which can be obtained from btrfs_extent_inline_ref_offset(). The tree block must be aligned to the nodesize, so we know this inline ref is not valid if the block's bytenr is not aligned to the nodesize, in which case the ref type has most likely been misused. This adds the above-mentioned check and also updates print_extent_item(), called by btrfs_print_leaf(), to point out the invalid ref while printing the tree structure. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
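The check described is essentially an alignment test on the parent bytenr carried in the inline ref's offset field; a hedged sketch (the BTRFS_REF_TYPE_INVALID return value follows the helper added elsewhere in this series):

    /*
     * Hedged sketch of the added sanity check, not the exact code: for a
     * shared ref, the offset field holds the parent tree block bytenr,
     * which must be nodesize-aligned.
     */
    u64 parent = btrfs_extent_inline_ref_offset(eb, iref);

    if (!IS_ALIGNED(parent, eb->fs_info->nodesize))
            return BTRFS_REF_TYPE_INVALID;  /* ref type likely misused */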
2017-08-21Btrfs: remove BUG_ON in __add_tree_blockLiu Bo
The BUG_ON() can be triggered when the caller is processing an invalid extent inline ref, e.g. a shared data ref is offered instead of an extent data ref, such that it tries to find a non-existent tree block and then btrfs_search_slot returns 1 for no such item. This replaces the BUG_ON() with a WARN() followed by calling btrfs_print_leaf() to show more details about what's going on and returning -EINVAL to upper callers. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: remove BUG() in add_data_referenceLiu Bo
Now that we have a helper to report invalid value of extent inline ref type, we need to quit gracefully instead of throwing out a kernel panic. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: remove BUG() in print_extent_itemLiu Bo
btrfs_print_leaf() is used in btrfs_get_extent_inline_ref_type, so here we really want to print the invalid value of ref type instead of causing a kernel panic. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: remove BUG() in btrfs_extent_inline_ref_sizeLiu Bo
Now that btrfs_get_extent_inline_ref_type() can report if type is a valid one and all callers can gracefully deal with that, we don't need to crash here. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: convert to use btrfs_get_extent_inline_ref_typeLiu Bo
Since we have a helper which can do the sanity check, this converts all users of btrfs_extent_inline_ref_type to the new helper. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: add a helper to retrieve extent inline ref typeLiu Bo
An invalid value of extent inline ref type may be read from a malicious image, which may force btrfs to crash. This adds a helper which does a sanity check for the ref type: if it's sane, return the type, otherwise return an error. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> [ minimal tweak const types, causing warnings due to other cleanup patches ] Signed-off-by: David Sterba <dsterba@suse.com>
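Caller-side usage then looks roughly like this (argument details and surrounding error handling are assumptions):

    /*
     * Hedged sketch of a typical caller: ask for the class of ref
     * expected at this position and bail out gracefully on an invalid
     * type instead of BUG()ing.
     */
    type = btrfs_get_extent_inline_ref_type(leaf, iref, BTRFS_REF_TYPE_DATA);
    if (type == BTRFS_REF_TYPE_INVALID)
            return -EINVAL;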
2017-08-21btrfs: scrub: simplify scrub worker initializationDavid Sterba
Minor simplification, merge the calls into one. Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: scrub: clean up division in scrub_find_csumDavid Sterba
Use proper helpers for 64bit division. Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: scrub: clean up division in __scrub_mark_bitmapDavid Sterba
Use proper helpers for 64bit division and then cast to narrower type. Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: scrub: use bool for flush_all_writesDavid Sterba
flush_all_writes is an atomic but does not use the atomic semantics at all; it's just an on/off indicator, so we can use a bool. Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: preserve i_mode if __btrfs_set_acl() failsErnesto A. Fernández
When changing a file's acl mask, btrfs_set_acl() will first set the group bits of i_mode to the value of the mask, and only then set the actual extended attribute representing the new acl. If the second part fails (due to lack of space, for example) and the file had no acl attribute to begin with, the system will from now on assume that the mask permission bits are actual group permission bits, potentially granting access to the wrong users. Prevent this by restoring the original mode bits if __btrfs_set_acl fails. Signed-off-by: Ernesto A. Fernández <ernesto.mnd.fernandez@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
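The shape of the fix is a save/restore of i_mode around the extended attribute update; a hedged sketch that is close to, but not literally, the patch:

    /*
     * Hedged sketch: remember the old mode, let the ACL mask update
     * i_mode, and roll the mode back if writing the ACL xattr fails, so
     * mask bits never end up masquerading as group permission bits.
     */
    umode_t old_mode = inode->i_mode;
    int ret = 0;

    if (type == ACL_TYPE_ACCESS && acl) {
            ret = posix_acl_update_mode(inode, &inode->i_mode, &acl);
            if (ret)
                    return ret;
    }
    ret = __btrfs_set_acl(NULL, inode, acl, type);
    if (ret)
            inode->i_mode = old_mode;
    return ret;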
2017-08-21btrfs: Remove extraneous chunk_objectid variableNikolay Borisov
BTRFS_FIRST_CHUNK_TREE_OBJECTID is the only objectid used in the chunk_tree. So remove a variable which is always set to that value and collapse its usage in callees which are passed this variable. No functional changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: Remove chunk_objectid argument from btrfs_make_block_groupNikolay Borisov
btrfs_make_block_group is always called with chunk_objectid set to BTRFS_FIRST_CHUNK_TREE_OBJECTID. There's no reason why this behavior will change anytime soon, so let's remove the argument and decrease the cognitive load when reading the code path. No functional change Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: Remove extra parentheses from condition in copy_items()Matthias Kaehlcke
There is no need for the extra pair of parentheses, remove it. This fixes the following warning when building with clang: fs/btrfs/tree-log.c:3694:10: warning: equality comparison with extraneous parentheses [-Wparentheses-equality] if ((i == (nr - 1))) ~~^~~~~~~~~~~ Also remove the unnecessary parentheses around the subtraction. Signed-off-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: Remove redundant setting of uuid in btrfs_block_headerNikolay Borisov
btrfs_alloc_dev_extent currently unconditionally sets the uuid in the leaf block header the function is working with. This is unnecessary since this operation is performed by the core btree handling code (splitting a node, allocating a new btree block etc). So let's remove it. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: Do not use data_alloc_cluster in ssd modeHans van Kranenburg
This patch provides a band-aid to improve the 'out of the box' behaviour of btrfs for disks that are detected as being an ssd. In a general purpose mixed workload scenario, the current ssd mode causes overallocation of available raw disk space for data, while leaving behind increasing amounts of unused fragmented free space. This situation leads to early ENOSPC problems which harm user experience and adoption of btrfs as a general purpose filesystem. This patch modifies the data extent allocation behaviour of the ssd mode to make it behave identically to nossd mode. The metadata behaviour and additional ssd_spread option stay untouched so far. Recommendations for future development are to reconsider the current oversimplified nossd / ssd distinction and the broken detection mechanism based on the rotational attribute in sysfs and provide experienced users with a more flexible way to choose allocator behaviour for data and metadata, optimized for certain use cases, while keeping sane 'out of the box' default settings. The internals of the current btrfs code have more potential than what currently gets exposed to the user to choose from. The SSD story... In the first year of btrfs development, around early 2008, btrfs gained a mount option which enables specific functionality for filesystems on solid state devices. The first occurrence of this functionality is in commit e18e4809, labeled "Add mount -o ssd, which includes optimizations for seek free storage". The effect on allocating free space for doing (data) writes is to 'cluster' writes together, writing them out in contiguous space, as opposed to a 'tetris' way of putting all separate writes into any free space fragment that fits (which is what the -o nossd behaviour does). A somewhat simplified explanation of what happens is that when, for example, the 'cluster' size is set to 2MiB and we do some writes, the data allocator will search for a free space block that is 2MiB big, and put the writes in there. The ssd mode itself might allow a 2MiB cluster to be composed of multiple free space extents with some existing data in between, while the additional ssd_spread mount option kills off this option and requires fully free space. The idea behind this is (commit 536ac8ae): "The [...] clusters make it more likely a given IO will completely overwrite the ssd block, so it doesn't have to do an internal rwm cycle."; ssd block meaning nand erase block. So, effectively this means applying a "locality based algorithm" and trying to outsmart the actual ssd. Since then, various changes have been made to the involved code, but the basic idea is still present, and gets activated whenever the ssd mount option is active. This also happens by default, when the rotational flag as seen at /sys/block/<device>/queue/rotational is set to 0. However, there are a number of problems with this approach. First, what the optimization is trying to do is outsmart the ssd by assuming there is a relation between the physical address space of the block device as seen by btrfs and the actual physical storage of the ssd, and then adjusting data placement. However, since the introduction of the Flash Translation Layer (FTL) which is a part of the internal controller of an ssd, these attempts are futile. The use of good quality FTL in consumer ssd products might have been limited in 2008, but this situation changed drastically soon after that time. Today, even the flash memory in your automatic cat feeding machine or your grandma's wheelchair has a full-featured one.
Second, the behaviour as described above results in the filesystem being filled up with badly fragmented free space extents because of relatively small pieces of space that are freed up by deletes, but not selected again as part of a 'cluster'. Since the algorithm prefers allocating a new chunk over going back to tetris mode, the end result is a filesystem in which all raw space is allocated, but which is composed of underutilized chunks with a 'shotgun blast' pattern of fragmented free space. Usually, the next problematic thing that happens is the filesystem wanting to allocate new space for metadata, which causes the filesystem to fail in spectacular ways. Third, the default mount options you get for an ssd ('ssd' mode enabled, 'discard' not enabled), in combination with spreading out writes over the full address space and ignoring freed up space leads to worst case behaviour in providing information to the ssd itself, since it will never learn that all the free space left behind is actually free. There are two ways to let an ssd know previously written data does not have to be preserved, which are sending explicit signals using discard or fstrim, or by simply overwriting the space with new data. The worst case behaviour is the btrfs ssd_spread mount option in combination with not having discard enabled. It has a side effect of minimizing the reuse of free space previously written in. Fourth, the rotational flag in /sys/ does not reliably indicate if the device is a locally attached ssd. For example, iSCSI or NBD displays as non-rotational, while a loop device on an ssd shows up as rotational. The combination of the second and third problem effectively means that despite all the good intentions, the btrfs ssd mode reliably causes the ssd hardware and the filesystem structures and performance to be choked to death. The clickbait version of the title of this story would have been "Btrfs ssd optimizations considered harmful for ssds". The current nossd 'tetris' mode (even still without discard) allows a pattern of overwriting much more previously used space, causing many more implicit discards to happen because of the overwrite information the ssd gets. The actual location in the physical address space, as seen from the point of view of btrfs is irrelevant, because the actual writes to the low level flash are reordered anyway thanks to the FTL. Changes made in the code 1. Make ssd mode data allocation identical to tetris mode, like nossd. 2. Adjust and clean up filesystem mount messages so that we can easily identify if a kernel has this patch applied or not, when providing support to end users. Also, make better use of the *_and_info helpers to only trigger messages on actual state changes. Backporting notes Notes for whoever wants to backport this patch to their 4.9 LTS kernel: * First apply commit 951e7966 "btrfs: drop the nossd flag when remounting with -o ssd", or fixup the differences manually. * The rest of the conflicts are because of the fs_info refactoring. So, for example, instead of using fs_info, it's root->fs_info in extent-tree.c Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21btrfs: use btrfsic_submit_bio instead of submit_bio in write_dev_flushLu Fengqi
Although this bio has no data attached, it will reach this condition (bio->bi_opf & REQ_PREFLUSH) and then update the flush_gen of dev_state in __btrfsic_submit_bio. So we should still submit it through the integrity checker. Otherwise, the integrity checker will throw the following warning when I mount a newly created btrfs filesystem. [10264.755497] btrfs: attempt to write superblock which references block M @29523968 (sdb1/1111654400/0) which is not flushed out of disk's write cache (block flush_gen=1, dev->flush_gen=0)! [10264.755498] btrfs: attempt to write superblock which references block M @29523968 (sdb1/37912576/0) which is not flushed out of disk's write cache (block flush_gen=1, dev->flush_gen=0)! Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: incremental send, fix emission of invalid clone operationsFilipe Manana
When doing an incremental send it's possible that the computed send stream contains clone operations that will fail on the receiver if the receiver has compression enabled and the clone operations target a sector sized extent that starts at a zero file offset, is not compressed on the source filesystem but ends up being compressed and inlined at the destination filesystem. Example scenario: $ mkfs.btrfs -f /dev/sdb $ mount -o compress /dev/sdb /mnt # By doing a direct IO write, the data is not compressed. $ xfs_io -f -d -c "pwrite -S 0xab 0 4K" /mnt/foobar $ btrfs subvolume snapshot -r /mnt /mnt/mysnap1 $ xfs_io -c "reflink /mnt/foobar 0 8K 4K" /mnt/foobar $ btrfs subvolume snapshot -r /mnt /mnt/mysnap2 $ btrfs send -f /tmp/1.snap /mnt/mysnap1 $ btrfs send -f /tmp/2.snap -p /mnt/mysnap1 /mnt/mysnap2 $ umount /mnt $ mkfs.btrfs -f /dev/sdc $ mount -o compress /dev/sdc /mnt $ btrfs receive -f /tmp/1.snap /mnt $ btrfs receive -f /tmp/2.snap /mnt ERROR: failed to clone extents to foobar Operation not supported The same could be achieved by mounting the source filesystem without compression and doing a buffered IO write instead of a direct IO one, and mounting the destination filesystem with compression enabled. So fix this by issuing regular write operations in the send stream instead of clone operations when the source offset is zero and the range has a length matching the sector size. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-21Btrfs: fix out of bounds array access while reading extent bufferLiu Bo
There is a corner case that slips through the checkers in the functions reading an extent buffer, i.e. if (start < eb->len) and (start + len > eb->len), then a) map_private_extent_buffer() returns immediately because it thinks the range spans two pages, b) and the checkers in read_extent_buffer(), WARN_ON(start > eb->len) and WARN_ON(start + len > eb->start + eb->len), are both OK in this corner case, but it would actually try to access eb->pages out of bounds because of (start + len > eb->len). The case is found by switching the extent inline ref type from a shared data ref to a non-shared data ref, which is a kind of metadata corruption. It would use the wrong helper to access the eb, e.g. btrfs_extent_data_ref_root(eb, ref) is used but the %ref passed here is a "struct btrfs_shared_data_ref". And if the extent item happens to be the first item in the eb, then offset/length will go past eb->len, which ends up as an invalid memory access. This adds proper checks in order to avoid the invalid memory access, i.e. a 'general protection fault', before it's too late. Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
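A sketch of the kind of bounds check being added (illustrative; the actual patch adjusts the extent buffer reader helpers):

    /*
     * Illustrative bounds check for the corner case described above:
     * start < eb->len but start + len > eb->len must be rejected before
     * eb->pages is dereferenced.
     */
    if (start + len > eb->len) {
            WARN(1, "btrfs: bad eb read: start %lu len %lu eb->len %lu\n",
                 start, len, eb->len);
            return;
    }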
2017-08-21isofs: Delete an error message for a failed memory allocation in isofs_read_inode()Markus Elfring
Omit an extra message for a memory allocation failure in this function. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Jan Kara <jack@suse.cz>
2017-08-21quota_v2: Delete an error message for a failed memory allocation in v2_read_file_info()Markus Elfring
Omit an extra message for a memory allocation failure in this function. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Jan Kara <jack@suse.cz>
2017-08-20Merge branch 'bugfixes'Trond Myklebust
2017-08-20NFS: Fix NFSv2 security settingsChuck Lever
For a while now any NFSv2 mount where sec= is specified uses AUTH_NULL. If sec= is not specified, the mount uses AUTH_UNIX. Commit e68fd7c8071d ("mount: use sec= that was specified on the command line") attempted to address a very similar problem with NFSv3, and should have fixed this too, but it has a bug. The MNTv1 MNT procedure does not return a list of security flavors, so our client makes up a list containing just AUTH_NULL. This should enable nfs_verify_authflavors() to assign the sec= specified flavor, but instead, it incorrectly sets it to AUTH_NULL. I expect this would also be a problem for any NFSv3 server whose MNTv3 MNT procedure returned a security flavor list containing only AUTH_NULL. Fixes: e68fd7c8071d ("mount: use sec= that was specified on ... ") BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=310 Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-08-20NFSv4.1: don't use machine credentials for CLOSE when using 'sec=sys'NeilBrown
An NFSv4.1 client might close a file after the user who opened it has logged off. In this case the user's credentials may no longer be valid, if they are e.g. kerberos credentials that have expired. NFSv4.1 has a mechanism to allow the client to use machine credentials to close a file. However due to a shortcoming in the RFC, a CLOSE with those credentials may not be possible if the file in question isn't exported to the same security flavor - the required PUTFH must be rejected when this is the case. Specifically, if a server and client support kerberos in general and have used it to form a machine credential, but the file is only exported to "sec=sys", a PUTFH with the machine credentials will fail, so CLOSE is not possible. As RPC_AUTH_UNIX (used by sec=sys) credentials can never expire, there is no value in using the machine credential in place of them. So in that case, just use the user's credentials for CLOSE etc., as you would in NFSv4.0. Signed-off-by: Neil Brown <neilb@suse.com> Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-08-20Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull x86 fixes from Thomas Gleixner: "Another pile of small fixes and updates for x86: - Plug a hole in the SMAP implementation which misses to clear AC on NMI entry - Fix the norandmaps/ADDR_NO_RANDOMIZE logic so the command line parameter works correctly again - Use the proper accessor in the startup64 code for next_early_pgt to prevent accessing of invalid addresses and faulting in the early boot code. - Prevent CPU hotplug lock recursion in the MTRR code - Unbreak CPU0 hotplugging - Rename overly long CPUID bits which got introduced in this cycle - Two commits which mark data 'const' and restrict the scope of data and functions to file scope by making them 'static'" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86: Constify attribute_group structures x86/boot/64/clang: Use fixup_pointer() to access 'next_early_pgt' x86/elf: Remove the unnecessary ADDR_NO_RANDOMIZE checks x86: Fix norandmaps/ADDR_NO_RANDOMIZE x86/mtrr: Prevent CPU hotplug lock recursion x86: Mark various structures and functions as 'static' x86/cpufeature, kvm/svm: Rename (shorten) the new "virtualized VMSAVE/VMLOAD" CPUID flag x86/smpboot: Unbreak CPU0 hotplug x86/asm/64: Clear AC on NMI entries
2017-08-20NFS: Remove unused parameter gfp_flags from nfs_pageio_init()Trond Myklebust
Now that the mirror allocation has been moved, the parameter can go. Also remove the redundant symbol export. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-08-20NFSv4: Fix up mirror allocationTrond Myklebust
There are a number of callers of nfs_pageio_complete() that want to continue using the nfs_pageio_descriptor without needing to call nfs_pageio_init() again. Examples include nfs_pageio_resend() and nfs_pageio_cond_complete(). The problem is that nfs_pageio_complete() also calls nfs_pageio_cleanup_mirroring(), which frees up the array of mirrors. This can lead to writeback errors, in the next call to nfs_pageio_setup_mirroring(). Fix by simply moving the allocation of the mirrors to nfs_pageio_setup_mirroring(). Link: https://bugzilla.kernel.org/show_bug.cgi?id=196709 Reported-by: JianhongYin <yin-jianhong@163.com> Cc: stable@vger.kernel.org # 4.0+ Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-08-18Merge tag 'xfs-4.13-fixes-5' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linuxLinus Torvalds
Pull xfs fixes from Darrick Wong: "A handful more bug fixes for you today. Changes since last time: - Don't leak resources when mount fails - Don't accidentally clobber variables when looking for free inodes" * tag 'xfs-4.13-fixes-5' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux: xfs: don't leak quotacheck dquots when cow recovery xfs: clear MS_ACTIVE after finishing log recovery iomap: fix integer truncation issues in the zeroing and dirtying helpers xfs: fix inobt inode allocation search optimization
2017-08-18Merge branch 'writeback'Trond Myklebust
2017-08-18btrfs: Fix -EOVERFLOW handling in btrfs_ioctl_tree_search_v2Nikolay Borisov
The buffer passed to the btrfs_ioctl_tree_search* functions has to be at least sizeof(struct btrfs_ioctl_search_header). If this is not the case then the ioctl should return -EOVERFLOW and set uarg->buf_size to the minimum required size. Currently btrfs_ioctl_tree_search_v2 would return an -EOVERFLOW error with ->buf_size set to the value passed by user space. Fix this by removing the size check and relying on search_ioctl, which already includes it and correctly sets buf_size. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
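The minimum-size check that search_ioctl already performs, and which the v2 ioctl now relies on, looks roughly like this:

    /*
     * Hedged sketch of the existing check in search_ioctl: report the
     * minimum usable buffer size back to the caller with -EOVERFLOW.
     */
    if (*buf_size < sizeof(struct btrfs_ioctl_search_header)) {
            *buf_size = sizeof(struct btrfs_ioctl_search_header);
            return -EOVERFLOW;
    }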
2017-08-18btrfs: Move skip checksum check from btrfs_submit_direct to __btrfs_submit_dio_bioNikolay Borisov
Currently the code checks whether we should do data checksumming in btrfs_submit_direct and the boolean result of this check is passed to btrfs_submit_direct_hook, which in turn passes it to __btrfs_submit_dio_bio, which actually consumes it. The last function has all the necessary context to figure out whether to skip the check or not, so let's move the check closer to where it's being consumed. No functional changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Chris Mason <clm@fb.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-08-18Btrfs: fix assertion failure during fsync in no-holes modeFilipe Manana
When logging an inode in full mode that has an inline compressed extent that represents a range with a size matching the sector size (currently the same as the page size), has a trailing hole and the no-holes feature is enabled, we end up failing an assertion leading to a trace like the following: [141812.031528] assertion failed: len == i_size, file: fs/btrfs/tree-log.c, line: 4453 [141812.033069] ------------[ cut here ]------------ [141812.034330] kernel BUG at fs/btrfs/ctree.h:3452! [141812.035137] invalid opcode: 0000 [#1] PREEMPT SMP [141812.035932] Modules linked in: btrfs dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio dm_flakey dm_mod dax ppdev evdev ghash_clmulni_intel pcbc aesni_intel aes_x86_64 tpm_tis psmouse crypto_simd parport_pc sg pcspkr tpm_tis_core cryptd parport serio_raw glue_helper tpm i2c_piix4 i2c_core button sunrpc loop autofs4 ext4 crc16 jbd2 mbcache raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c crc32c_generic raid1 raid0 multipath linear md_mod sd_mod ata_generic virtio_scsi ata_piix floppy crc32c_intel libata scsi_mod virtio_pci virtio_ring e1000 virtio [last unloaded: btrfs] [141812.036790] CPU: 3 PID: 845 Comm: fdm-stress Tainted: G B W 4.12.3-btrfs-next-52+ #1 [141812.036790] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org 04/01/2014 [141812.036790] task: ffff8801e6694180 task.stack: ffffc90009004000 [141812.036790] RIP: 0010:assfail.constprop.18+0x1c/0x1e [btrfs] [141812.036790] RSP: 0018:ffffc90009007bc0 EFLAGS: 00010282 [141812.036790] RAX: 0000000000000046 RBX: ffff88017512c008 RCX: 0000000000000001 [141812.036790] RDX: ffff88023fd95201 RSI: ffffffff8182264c RDI: 00000000ffffffff [141812.036790] RBP: ffffc90009007bc0 R08: 0000000000000001 R09: 0000000000000001 [141812.036790] R10: 0000000000001000 R11: ffffffff82f5a0c9 R12: ffff88014e5947e8 [141812.036790] R13: 00000000000b4000 R14: ffff8801b234d008 R15: 0000000000000000 [141812.036790] FS: 00007fdba6ffd700(0000) GS:ffff88023fd80000(0000) knlGS:0000000000000000 [141812.036790] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [141812.036790] CR2: 00007fdb9c000010 CR3: 000000016efa2000 CR4: 00000000001406e0 [141812.036790] Call Trace: [141812.036790] btrfs_log_inode+0x9f0/0xd3d [btrfs] [141812.036790] ? __mutex_lock+0x120/0x3ce [141812.036790] btrfs_log_inode_parent+0x224/0x685 [btrfs] [141812.036790] ? lock_acquire+0x16b/0x1af [141812.036790] btrfs_log_dentry_safe+0x60/0x7b [btrfs] [141812.036790] btrfs_sync_file+0x32e/0x3f8 [btrfs] [141812.036790] vfs_fsync_range+0x8a/0x9d [141812.036790] vfs_fsync+0x1c/0x1e [141812.036790] do_fsync+0x31/0x4a [141812.036790] SyS_fdatasync+0x13/0x17 [141812.036790] entry_SYSCALL_64_fastpath+0x18/0xad [141812.036790] RIP: 0033:0x7fdbac41a47d [141812.036790] RSP: 002b:00007fdba6ffce30 EFLAGS: 00000293 ORIG_RAX: 000000000000004b [141812.036790] RAX: ffffffffffffffda RBX: ffffffff81092c9f RCX: 00007fdbac41a47d [141812.036790] RDX: 0000004cf0160a40 RSI: 0000000000000000 RDI: 0000000000000006 [141812.036790] RBP: ffffc90009007f98 R08: 0000000000000000 R09: 0000000000000010 [141812.036790] R10: 00000000000002e8 R11: 0000000000000293 R12: ffffffff8110cd90 [141812.036790] R13: ffffc90009007f78 R14: 0000000000000000 R15: 0000000000000000 [141812.036790] ? time_hardirqs_off+0x9/0x14 [141812.036790] ? 
trace_hardirqs_off_caller+0x1f/0xa3 [141812.036790] Code: c7 d6 61 6b a0 48 89 e5 e8 ba ef a8 e0 0f 0b 55 89 f1 48 c7 c2 6d 65 6b a0 48 89 fe 48 c7 c7 81 65 6b a0 48 89 e5 e8 9c ef a8 e0 <0f> 0b 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 41 55 41 54 49 89 [141812.036790] RIP: assfail.constprop.18+0x1c/0x1e [btrfs] RSP: ffffc90009007bc0 [141812.084448] ---[ end trace 44e472684c7a32cc ]--- Which happens because the code that logs a trailing hole when the no-holes feature is enabled, did not consider that a compressed inline extent can represent a range with a size matching the sector size, in which case expanding the inode's i_size, through a truncate operation, won't lead to padding with zeroes the page that represents the inline extent, and therefore the inline extent remains after the truncation. Fix this by adapting the assertion to accept inline extents representing data with a sector size length if, and only if, the inline extents are compressed. A sample and trivial reproducer (for systems with a 4K page size) for this issue: mkfs.btrfs -O no-holes -f /dev/sdc mount -o compress /dev/sdc /mnt xfs_io -f -c "pwrite -S 0xab 0 4K" /mnt/foobar sync xfs_io -c "truncate 32K" /mnt/foobar xfs_io -c "fsync" /mnt/foobar Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>