path: root/fs
2016-06-05  autofs braino fix for do_last()  (Al Viro)
It's an analogue of commit 7500c38a (fix the braino in "namei: massage lookup_slow() to be usable by lookup_one_len_unlocked()"). The same problem (a ->lookup()-returned unhashed negative dentry just might be an autofs one with a ->d_manage() that would wait until the daemon makes it positive) applies in do_last() - we need to do follow_managed() first. Fortunately, the remaining callers of follow_managed() are OK - only autofs has that weirdness (a negative dentry that does not mean an instant -ENOENT), and autofs never has its negative dentries hashed, so we can't pick one up from a dcache lookup.

->d_manage() is a bloody mess ;-/

Cc: stable@vger.kernel.org # v4.6
Spotted-by: Ian Kent <raven@themaw.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
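For reference, a minimal sketch of the ordering the fix wants in do_last(), using the v4.6-era namei helpers; the surrounding open/error handling is simplified and this is not the literal patch:

  /* process the managed dentry first - for autofs this may block in
   * ->d_manage() until the daemon makes the dentry positive */
  error = follow_managed(&path, nd);
  if (unlikely(error < 0))
          return error;

  /* only now does a negative dentry really mean an instant -ENOENT */
  if (unlikely(d_is_negative(path.dentry))) {
          path_to_nameidata(&path, nd);
          return -ENOENT;
  }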
2016-06-04  Merge branch 'for-linus-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs  (Linus Torvalds)
Pull btrfs fixes from Chris Mason:
 "The important part of this pull is Filipe's set of fixes for btrfs device replacement. Filipe fixed a few issues seen on the list and a number he found on his own"

* 'for-linus-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
  Btrfs: deal with duplicates during extent_map insertion in btrfs_get_extent
  Btrfs: fix race between device replace and read repair
  Btrfs: fix race between device replace and discard
  Btrfs: fix race between device replace and chunk allocation
  Btrfs: fix race setting block group back to RW mode during device replace
  Btrfs: fix unprotected assignment of the left cursor for device replace
  Btrfs: fix race setting block group readonly during device replace
  Btrfs: fix race between device replace and block group removal
  Btrfs: fix race between readahead and device replace/removal
2016-06-04  fix EOPENSTALE bug in do_last()  (Al Viro)
EOPENSTALE occurring at the last component of a trailing symlink ends up with do_last() retrying its lookup after the symlink body has been discarded. The thing is, all of this retry_lookup logic in there is not needed at all - the upper layers will do the right thing if we simply return that -EOPENSTALE as we would with any other error. Trying to micro-optimize in do_last() is a lot of headache for no good reason.

Cc: stable@vger.kernel.org # v4.2+
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Reviewed-and-Tested-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-06-03  Btrfs: deal with duplicates during extent_map insertion in btrfs_get_extent  (Chris Mason)
When dealing with inline extents, btrfs_get_extent will incorrectly try to insert a duplicate extent_map. The dup hits -EEXIST from add_extent_map, but then we try to merge with the existing one and end up trying to insert a zero length extent_map. This actually works most of the time, except when there are extent maps past the end of the inline extent. rocksdb will trigger this sometimes because it preallocates an extent and then truncates down.

Josef made a script to trigger it with xfs_io:

  #!/bin/bash
  xfs_io -f -c "pwrite 0 1000" inline
  xfs_io -c "falloc -k 4k 1M" inline
  xfs_io -c "pread 0 1000" -c "fadvise -d 0 1000" -c "pread 0 1000" inline
  xfs_io -c "fadvise -d 0 1000" inline
  cat inline

You'll get EIOs trying to read inline after this because add_extent_map is returning -EEXIST.

Signed-off-by: Chris Mason <clm@fb.com>
2016-06-02  f2fs: handle writepage correctly  (Jaegeuk Kim)
Previously, f2fs_write_data_pages() calls __f2fs_writepage(), which calls f2fs_write_data_page(). If f2fs_write_data_page() returns AOP_WRITEPAGE_ACTIVATE, __f2fs_writepage() calls mapping_set_error(). But this should not happen every time, since sometimes f2fs_write_data_page() tries to skip writing pages without an error. For example, volatile_write() gives EIO all the time, as Shuoran Liu pointed out.

Reported-by: Shuoran Liu <liushuoran@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: return error of f2fs_lookup  (Jaegeuk Kim)
Now we can report to f2fs_lookup an error given by f2fs_find_entry.

Suggested-by: He YunLei <heyunlei@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: return the errno to the caller to avoid using a wrong page  (Yunlong Song)
Commit aaf9607516ed38825268515ef4d773289a44f429 ("f2fs: check node page contents all the time") pointed out that "sometimes it was reported that its contents was missing", so it checks the page's mapping and contents. When "nid != nid_of_node(page)", ERR_PTR(-EIO) will be returned to the caller. However, commit e1c51b9f1df2f9efc2ec11488717e40cd12015f9 ("f2fs: clean up node page updating flow") moved the "nid != nid_of_node(page)" test into "f2fs_bug_on(sbi, nid != nid_of_node(page))", which returns a wrong page to the caller when F2FS_CHECK_FS is off and the "contents was missing" case happens again.

This patch restores checking the node page contents all the time, and returns the errno to let the caller know something is wrong and avoid using the page. This patch also moves f2fs_bug_on to its proper location.

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: remove two steps to flush dirty data pages  (Jaegeuk Kim)
If there is no cold page, we don't need to do a loop to flush dirty data pages.

On /dev/pmem0,

1. dd if=/dev/zero of=/mnt/test/testfile bs=1M count=2048 conv=fsync
   Before : 1.1 GB/s
   After  : 1.2 GB/s

2. dd if=/dev/zero of=/mnt/test/testfile bs=1M count=2048
   Before : 2.2 GB/s
   After  : 2.3 GB/s

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: do not skip writing data pages  (Jaegeuk Kim)
For data pages, let's try to flush as much as possible in background.

On /dev/pmem0,

1. dd if=/dev/zero of=/mnt/test/testfile bs=1M count=2048 conv=fsync
   Before : 800 MB/s
   After  : 1.1 GB/s

2. dd if=/dev/zero of=/mnt/test/testfile bs=1M count=2048
   Before : 1.3 GB/s
   After  : 2.2 GB/s

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: inject to produce some orphan inodes  (Jaegeuk Kim)
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: propagate error given by f2fs_find_entry  (Jaegeuk Kim)
If we get ENOMEM or EIO in f2fs_find_entry, we should stop right away. Otherwise, for example, we can get a duplicate directory entry by ->chash and ->clevel.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: remove writepages lock  (Jaegeuk Kim)
This patch removes the writepages lock. We can improve multi-threading performance.

tiobench, 32 threads, 4KB write per fsync on SSD
Before: 25.88 MB/s
After:  28.03 MB/s

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: set flush_merge by default  (Jaegeuk Kim)
This patch sets flush_merge by default. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: detect congestion of flush command issues  (Jaegeuk Kim)
If flush commands do not incur any congestion, we don't need to throw them to the dispatching queue, which causes unnecessary latency.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: add lazytime mount option  (Jaegeuk Kim)
This patch adds lazytime support. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: avoid unnecessary updating inode during fsync  (Jaegeuk Kim)
If roll-forward recovery can recover i_size, we don't need to update inode's metadata during fsync. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: remove syncing inode page in all the cases  (Jaegeuk Kim)
This patch reduces calls to the following functions across the whole tree:
 - sync_inode_page()
 - update_inode_page()
 - update_inode()
 - f2fs_write_inode()

Instead, checkpoint will flush all the dirty inode metadata before syncing node pages. Note that this is doable, since we call mark_inode_dirty_sync() for every inode field change that needs to be reflected in the on-disk inode as well.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: flush inode metadata when checkpoint is doing  (Jaegeuk Kim)
This patch registers all the inodes which have dirty metadata so that they are synced while a checkpoint is being done.

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: call mark_inode_dirty_sync for i_field changes  (Jaegeuk Kim)
This patch calls mark_inode_dirty_sync() for the following on-disk inode changes:
 -> largest
 -> ctime/mtime/atime
 -> i_current_depth
 -> i_xattr_nid
 -> i_pino
 -> i_advise
 -> i_flags
 -> i_mode

Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: introduce f2fs_i_links_write with mark_inode_dirty_sync  (Jaegeuk Kim)
This patch introduces f2fs_i_links_write() to call mark_inode_dirty_sync() when changing inode->i_links. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: introduce f2fs_i_blocks_write with mark_inode_dirty_sync  (Jaegeuk Kim)
This patch introduces f2fs_i_blocks_write() to call mark_inode_dirty_sync() when changing inode->i_blocks. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2016-06-02  f2fs: introduce f2fs_i_size_write with mark_inode_dirty_sync  (Jaegeuk Kim)
This patch introduces f2fs_i_size_write() to call mark_inode_dirty_sync() with i_size_write(). Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
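Taken with the two patches above, the pattern is a family of tiny setters; a minimal sketch of such a helper, assuming it lives in f2fs.h (simplified - the real helpers may carry extra flag handling):

  static inline void f2fs_i_size_write(struct inode *inode, loff_t i_size)
  {
          i_size_write(inode, i_size);
          /* the on-disk inode changed, so queue it for writeback */
          mark_inode_dirty_sync(inode);
  }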
2016-06-02  f2fs: use inode pointer for {set, clear}_inode_flag  (Jaegeuk Kim)
This patch refactors to use inode pointer for set_inode_flag and clear_inode_flag. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
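A sketch of the refactored helpers, assuming the flag bits sit in an unsigned long inside struct f2fs_inode_info (illustrative, not the exact diff):

  /* before: callers passed struct f2fs_inode_info *fi; now they pass the inode */
  static inline void set_inode_flag(struct inode *inode, int flag)
  {
          set_bit(flag, &F2FS_I(inode)->flags);
  }

  static inline void clear_inode_flag(struct inode *inode, int flag)
  {
          clear_bit(flag, &F2FS_I(inode)->flags);
  }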
2016-06-02  Revert "f2fs: no need inc dirty pages under inode lock"  (Jaegeuk Kim)
This reverts commit b951a4ec165af4973b2bd9c80fb5845fbd840435.

Conflicts:
    fs/f2fs/checkpoint.c
2016-06-02  pstore: drop file opened reference count  (Geliang Tang)
In ee1d267423a1 ("pstore: add pstore unregister") I added:

  .owner = THIS_MODULE,

in both pstore_fs_type and pstore_file_operations to increase a reference count when the pstore filesystem is mounted and a pstore file is opened. But it's repetitive: there is no need to increase the opened reference count, only the mounted one. When a file is opened, the filesystem can't be unmounted, hence the pstore module can't be unloaded either. So drop the opened reference count in this patch.

Fixes: ee1d267423a1 ("pstore: add pstore unregister")
Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
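The resulting setup looks roughly like this (a sketch - only the .owner lines are the point, the remaining initializers are elided):

  static struct file_system_type pstore_fs_type = {
          .owner          = THIS_MODULE,  /* mounting pins the module */
          .name           = "pstore",
          /* ... mount/kill_sb as before ... */
  };

  static const struct file_operations pstore_file_operations = {
          /* no .owner: an open file pins the mount, which pins the module */
          /* ... open/read/llseek/release as before ... */
  };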
2016-06-02  pstore: add lzo/lz4 compression support  (Geliang Tang)
Like the zlib compression in pstore, this patch adds lzo and lz4 compression support so that users have more options and a better compression ratio.

The original code treats the compressed data together with the uncompressed ECC correction notice by using zlib decompress. The ECC correction notice is missing in the decompression process, and the treatment also keeps lzo and lz4 from working. So treat them separately: use pstore_decompress() for the compressed data, and memcpy() for the uncompressed ECC correction notice.

Signed-off-by: Geliang Tang <geliangtang@163.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2016-06-02  Btrfs: self-tests: Support non-4k page size  (Feifei Xu)
The self-tests code assumes 4k as the sectorsize and nodesize. This commit fixes the hardcoded 4K and enables the self-tests code to be executed on non-4k page sized systems (e.g. ppc64).

Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Feifei Xu <xufeifei@linux.vnet.ibm.com>
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-06-02  Btrfs: Fix integer overflow when calculating bytes_per_bitmap  (Feifei Xu)
On ppc64, bytes_per_bitmap will be (65536*8*65536). Hence append UL to fix integer overflow.

Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Feifei Xu <xufeifei@linux.vnet.ibm.com>
Signed-off-by: David Sterba <dsterba@suse.com>
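The shape of the fix, assuming the constant is built from PAGE_SIZE as in free-space-cache.c (the exact identifier may differ; treat this as a sketch):

  /* on ppc64 with 64K pages, 65536 * 8 * 65536 overflows 32-bit arithmetic */
  #define BITS_PER_BITMAP		(PAGE_SIZE * 8UL)	/* UL promotes the whole product to unsigned long */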
2016-06-02  Btrfs: test_check_exists: Fix infinite loop when searching for free space entries  (Feifei Xu)
On a ppc64 machine using 64K as the block size, assume that the RB tree at btrfs_free_space_ctl->free_space_offset contains the following two entries:

1. A bitmap entry having an offset value of 0 and having the bits corresponding to the address range [128M+512K, 128M+768K] set.
2. An extent entry corresponding to the address range [128M-256K, 128M-128K].

In such a scenario, test_check_exists() invoked for checking the existence of address range [128M+768K, 256M] can lead to an infinite loop as explained below:

- Checking for the extent entry fails.
- Checking for a bitmap entry results in the free space info in range [128M+512K, 128M+768K] being returned.
- rb_prev(info) returns NULL because the bitmap entry starting from offset 0 comes first in the RB tree.
- current_node = bitmap node.
- while (current_node)
      tmp = rb_next(bitmap_node); /* tmp is the extent based free space entry */

Since the extent based free space entry's last address is smaller than the address being searched for (i.e. 128M+768K), we incorrectly obtain the extent node again as the "next right node" of the RB tree and thus end up looping infinitely.

This patch fixes the issue by checking the "tmp" variable which points to the most recently searched free space node.

Reviewed-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Feifei Xu <xufeifei@linux.vnet.ibm.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2016-06-01  ceph: use i_version to check validity of fscache  (Yan, Zheng)
Signed-off-by: Yan, Zheng <zyan@redhat.com>
2016-06-01  ceph: improve fscache revalidation  (Yan, Zheng)
There are several issues in the fscache revalidation code:

- In ceph_revalidate_work(), fscache_invalidate() is called when fscache_check_consistency() returns 0. This is completely wrong because 0 means the cache is valid.
- handle_cap_grant() calls ceph_queue_revalidate() if the client already has CAP_FILE_CACHE. This code is confusing; the client should revalidate the cache each time it gets CAP_FILE_CACHE anew.
- In handle_cap_grant(), fscache_invalidate() is called if the MDS revokes CAP_FILE_CACHE. This is inconsistent with the case where the inode gets evicted: in the latter case the cache is not discarded, and the client may use the cache when the inode is reloaded.

This patch moves the fscache revalidation into ceph_get_caps(). The client revalidates the cache after it gets CAP_FILE_CACHE. i_rdcache_gen should stay constant while CAP_FILE_CACHE is used. If i_fscache_gen is not equal to i_rdcache_gen, the client needs to check the cache's consistency.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
2016-06-01  ceph: disable fscache when inode is opened for write  (Yan, Zheng)
All other filesystems do not add dirty pages to fscache; they all disable fscache when the inode is opened for write. Only ceph adds dirty pages to fscache, and the code is buggy.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
2016-06-01  ceph: avoid unnecessary fscache invalidation/revalidation  (Yan, Zheng)
ceph_fill_file_size() has already called ceph_fscache_invalidate() if it returns true.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
2016-06-01  ceph: call __fscache_uncache_page() if readpages fails  (Yan, Zheng)
If readpages fails, fscache needs to clean up its internal state.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
2016-06-01  FS-Cache: make check_consistency callback return int  (Yan, Zheng)
__fscache_check_consistency() calls the check_consistency() callback and returns the callback's return value. But the return type of check_consistency() is bool, so __fscache_check_consistency() returns 1 if the cache is inconsistent. This is inconsistent with the documentation.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
Acked-by: David Howells <dhowells@redhat.com>
2016-06-01  FS-Cache: wake write waiter after invalidating writes  (Yan, Zheng)
Signed-off-by: Yan, Zheng <zyan@redhat.com> Acked-by: David Howells <dhowells@redhat.com>
2016-06-01  xfs: reduce lock hold times in buffer writeback  (Dave Chinner)
When we have a lot of metadata to flush from the AIL, the buffer list can get very long. The current submission code tries to batch submission to optimise the IO order of the metadata (i.e. ascending block order) to maximise block layer merging of IO to adjacent metadata blocks.

Unfortunately, the method used can result in long lock hold times, as buffers locked early in the buffer list might not be dispatched until the end of the IO list processing. This is because sorting does not occur until after the buffer list has been processed and the buffers that are going to be submitted are locked. Hence when the buffer list is several thousand buffers long, the lock hold times before IO dispatch can be significant.

To fix this, sort the buffer list before we start trying to lock and submit buffers. This means we can now submit buffers immediately after they are locked, allowing merging to occur immediately on the plug and dispatch to occur as quickly as possible. There is minimal delay between locking the buffer and IO submission occurring, hence reducing the worst case lock hold times seen during delayed write buffer IO submission significantly.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
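A simplified sketch of the reordering described above (not the exact xfs_buf_delwri_submit_buffers() code - delwri queue bookkeeping and the wait/nowait cases are omitted):

  /* sort by block number first, while no buffer is locked ... */
  list_sort(NULL, buffer_list, xfs_buf_cmp);

  list_for_each_entry_safe(bp, n, buffer_list, b_list) {
          if (!xfs_buf_trylock(bp))
                  continue;
          /* ... so each buffer is submitted right after it is locked,
           * keeping the lock hold time before IO dispatch minimal */
          xfs_buf_submit(bp);
  }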
2016-06-01  xfs: define XFS_IOC_FREEZE even if FIFREEZE is defined  (Christoph Hellwig)
And the same for XFS_IOC_THAW. Even though we now have a common version of the ioctl, we still need to provide the old names for anyone using them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
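The intent is simply that the old XFS names stay visible even when the generic ones exist; a sketch (the values shown match the generic FIFREEZE/FITHAW definitions, but treat the exact encoding here as an assumption):

  /* historical XFS spellings of the now-generic freeze/thaw ioctls */
  #define XFS_IOC_FREEZE	_IOWR('X', 119, int)	/* same value as FIFREEZE */
  #define XFS_IOC_THAW		_IOWR('X', 120, int)	/* same value as FITHAW */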
2016-06-01  xfs: make several functions static  (Eric Sandeen)
Al Viro noticed that xfs_lock_inodes should be static, and that led to ... a few more. These are just the easy ones, others require moving functions higher in source files, so that's not done here to keep this review simple. Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2016-06-01  xfs: remove spurious shutdown type check from xfs_bmap_finish()  (Brian Foster)
The static checker reports that after commit 8d99fe92fed0 ("xfs: fix efi/efd error handling to avoid fs shutdown hangs"), the code has been reworked such that error == -EFSCORRUPTED is not possible in this codepath. Remove the spurious error check and just use SHUTDOWN_META_IO_ERROR unconditionally. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
2016-06-01  xfs: fix broken multi-fsb buffer logging  (Brian Foster)
Multi-block buffers are logged based on buffer offset in xfs_trans_log_buf(). xfs_buf_item_log() ultimately walks each mapping in the buffer and marks the associated range to be logged in the xfs_buf_log_format bitmap for that mapping. This code is broken, however, in that it marks the actual buffer offsets of the associated range in each bitmap rather than shifting to the byte range for that particular mapping.

For example, on a 4k fsb fs, buffer offset 4096 refers to the first byte of the second mapping in the buffer. This means byte 0 of the second log format bitmap should be tagged as dirty. Instead, the current code marks byte offset 4096 of the second log format bitmap, which is invalid and potentially out of range of the mapping.

As a result of this, the log item format code invoked at transaction commit time is not able to correctly identify what parts of the buffer to copy into log vectors. This can lead to NULL log vector pointer dereferences in CIL push context if the item format code was not able to locate any dirty ranges at all. This crash has been reproduced on a 4k FSB filesystem using 16k directory blocks where an unlink operation happened not to log anything in the first block of the mapping. The logged offsets were all over 4k, marked as such in the subsequent log format mappings, and thus left the transaction with an xfs_log_item that is marked DIRTY but without any logged regions.

Further, even when the logged regions are marked correctly in the buffer log format bitmaps, the format code doesn't copy the correct ranges of the buffer into the log. This means that any logged region beyond the first block of a multi-block buffer is subject to corruption after a crash and log recovery sequence. This is due to a failure to convert the mapping bm_len field from basic blocks to bytes in the buffer offset tracking code in xfs_buf_item_format().

Update xfs_buf_item_log() to convert buffer offsets to segment relative offsets when logging multi-block buffers. This ensures that the modified regions of a buffer are logged correctly and avoids the aforementioned crash. Also update xfs_buf_item_format() to correctly track the source offset into the buffer for the log vector formatting code. This ensures that the correct data is copied into the log.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
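A worked example of the offset conversion described above, for a 4k fsb buffer made of two mappings (pure illustration, not the patched code):

  /*
   * Buffer layout:       map 0 covers buffer bytes [0, 4096)
   *                      map 1 covers buffer bytes [4096, 8192)
   * Dirty buffer range:  [4096, 4200)
   *
   * Broken: mark byte 4096 in map 1's bitmap (out of that mapping's range).
   * Fixed:  make the range relative to the start of the mapping first.
   */
  first_in_map = 4096 - 4096;	/* = 0   -> first dirty byte within map 1 */
  last_in_map  = 4199 - 4096;	/* = 103 -> last dirty byte within map 1  */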
2016-06-01  freevxfs: update documentation and credits for HP-UX support  (Krzysztof Błaszkowski)
Signed-off-by: Krzysztof Błaszkowski <kb@sysmikro.com.pl> [hch: cosmetic updates] Signed-off-by: Christoph Hellwig <hch@lst.de>
2016-06-01  freevxfs: implement ->alloc_inode and ->destroy_inode  (Christoph Hellwig)
This driver predates those methods and was trying to be clever allocating its own private data. Switch to the generic scheme used by other file systems.

Based on an earlier patch from Krzysztof Błaszkowski <kb@sysmikro.com.pl>.

Signed-off-by: Christoph Hellwig <hch@lst.de>
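A sketch of the generic scheme being switched to, assuming a vxfs_inode_info that embeds the VFS inode and a VXFS_INO() container_of helper (names are illustrative, not copied from the driver):

  static struct kmem_cache *vxfs_inode_cachep;

  static struct inode *vxfs_alloc_inode(struct super_block *sb)
  {
          struct vxfs_inode_info *vi;

          vi = kmem_cache_alloc(vxfs_inode_cachep, GFP_KERNEL);
          if (!vi)
                  return NULL;
          return &vi->vfs_inode;
  }

  static void vxfs_i_callback(struct rcu_head *head)
  {
          struct inode *inode = container_of(head, struct inode, i_rcu);

          kmem_cache_free(vxfs_inode_cachep, VXFS_INO(inode));
  }

  static void vxfs_destroy_inode(struct inode *inode)
  {
          call_rcu(&inode->i_rcu, vxfs_i_callback);
  }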
2016-06-01  freevxfs: avoid the need for forward declaring the super operations  (Christoph Hellwig)
Signed-off-by: Christoph Hellwig <hch@lst.de>
2016-06-01  freevxfs: move VFS inode allocation into vxfs_blkiget and vxfs_stiget  (Krzysztof Błaszkowski)
Signed-off-by: Krzysztof Błaszkowski <kb@sysmikro.com.pl> [hch: split from a larger patch] Signed-off-by: Christoph Hellwig <hch@lst.de>
2016-06-01  freevxfs: remove vxfs_put_fake_inode  (Krzysztof Błaszkowski)
Signed-off-by: Krzysztof Błaszkowski <kb@sysmikro.com.pl>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
2016-06-01  freevxfs: handle big endian HP-UX file systems  (Krzysztof Błaszkowski)
To support VxFS filesystems from HP-UX on x86 systems we need to implement byte swapping, and to keep support for Unixware filesystems it needs to be the complicated dual-endian kind ala sysvfs.

To do this properly we have to split the on-disk and in-core inode so that we can keep the in-core one in native endianness. All other structures are byteswapped on demand.

Signed-off-by: Krzysztof Błaszkowski <kb@sysmikro.com.pl>
[hch: make sparse happy]
Signed-off-by: Christoph Hellwig <hch@lst.de>
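A sketch of the kind of dual-endian accessor this implies, ala sysvfs (the enum, constant, and helper names are assumptions for illustration; sparse annotations are omitted):

  /* the superblock records which byte order the filesystem was written with */
  static inline u32 fs32_to_cpu(enum vxfs_byte_order bo, u32 raw)
  {
          return bo == VXFS_BO_BE ? be32_to_cpu(raw) : le32_to_cpu(raw);
  }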
2016-06-01  Btrfs: end transaction if we abort when creating uuid root  (Josef Bacik)
We still need to call btrfs_end_transaction if we call btrfs_abort_transaction, otherwise we hang and make me super grumpy. Thanks, Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
2016-05-31  pstore: Cleanup pstore_dump()  (Namhyung Kim)
The code is duplicated between the cases where compression is enabled and where it is not.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
2016-05-31  pstore: Enable compression on normal path (again)  (Namhyung Kim)
The commit f0e2efcfd2717 ("pstore: do not use message compression without lock") added a check of 'is_locked' to avoid breakage under concurrent accesses. But it has the side effect of disabling compression on the normal path, since the 'is_locked' variable is not set there. As the normal path always takes the lock, it should be initialized to 1. This also makes the unlock code a bit simpler.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
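A condensed sketch of the control flow after the change (abbreviated from the shape of pstore_dump(); the error message and surrounding record handling are simplified):

  unsigned long flags;
  unsigned int is_locked = 1;	/* the normal path below always takes the lock */

  if (pstore_cannot_block_path(reason)) {
          is_locked = spin_trylock_irqsave(&psinfo->buf_lock, flags);
          if (!is_locked)
                  pr_err("pstore dump routine blocked in %s path, may corrupt error record\n",
                         in_nmi() ? "NMI" : "error");
  } else {
          spin_lock_irqsave(&psinfo->buf_lock, flags);
  }

  /* ... compression can now be used whenever is_locked is non-zero ... */

  if (is_locked)
          spin_unlock_irqrestore(&psinfo->buf_lock, flags);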