path: root/fs
2017-11-06  xfs: remove the nr_extents argument to xfs_iext_insert  (Christoph Hellwig)
We only have two places that insert 2 extents at the same time, so unroll the loop there. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: use a b+tree for the in-core extent list  (Christoph Hellwig)
Replace the current linear list and the indirection array for the in-core extent list with a b+tree to avoid the need for larger memory allocations for the indirection array when lots of extents are present. The current extent list implementation leads to heavy pressure on the memory allocator when modifying files with a high extent count, and can lead to high latencies because of that.

The replacement is a b+tree with a few quirks. The leaf nodes directly store the extent record in two u64 values. The encoding is a little bit different from the existing in-core extent records so that the start offset and length, which are required for lookups, can be retrieved with simple mask operations. The inner nodes store a 64-bit key containing the start offset in the first half of the node, and the pointers to the next lower level in the second half. In either case we walk the node from the beginning to the end and do a linear search, as that is more efficient for the low number of cache lines touched during a search (2 for the inner nodes, 4 for the leaf nodes) than a binary search. We store termination markers (zero length for the leaf nodes, an otherwise impossible high bit for the inner nodes) to terminate the key list / records instead of storing a count, to use the available cache lines as efficiently as possible.

One quirk of the algorithm is that while we normally split a node half and half like usual btree implementations, we just spill over entries added at the very end of the list to a new node on its own. This means we get a 100% fill grade for the common cases of bulk insertion when reading an inode into memory, and when only sequentially appending to a file. The downside is a slightly higher chance of splits on the first random insertions.

Both insert and removal manually recurse into the lower levels, but the bulk deletion of the whole tree is still implemented as a recursive function call, although one limited by the overall depth and with very little stack usage in every iteration.

For the first few extents we dynamically grow the list from a single extent to the next powers of two until we have a first full leaf block, and only then build the actual tree.

The code started out based on the generic lib/btree.c code from Joern Engel based on earlier work from Peter Zijlstra, but has since been rewritten beyond recognition.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
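A minimal sketch of the two-u64 leaf record packing described above; the field widths, names, and helpers here are illustrative assumptions, not the exact XFS encoding:

    #include <stdint.h>
    #include <stdbool.h>

    /*
     * Hypothetical packing: start offset and length sit in fixed bit ranges
     * so lookups only need mask operations; a zero length marks the end of
     * the records in a leaf node.
     */
    struct ext_rec {
        uint64_t lo;    /* start offset in the low 54 bits */
        uint64_t hi;    /* length in the low 21 bits, start block above it */
    };

    #define EXT_OFF_MASK    ((1ULL << 54) - 1)
    #define EXT_LEN_MASK    ((1ULL << 21) - 1)

    static inline uint64_t ext_rec_offset(const struct ext_rec *r)
    {
        return r->lo & EXT_OFF_MASK;
    }

    static inline uint64_t ext_rec_len(const struct ext_rec *r)
    {
        return r->hi & EXT_LEN_MASK;
    }

    static inline bool ext_rec_is_terminator(const struct ext_rec *r)
    {
        return ext_rec_len(r) == 0;
    }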
2017-11-06  xfs: allow unaligned extent records in xfs_bmbt_disk_set_all  (Christoph Hellwig)
To make life a little simpler make xfs_bmbt_set_all unaligned access aware so that we can use it directly on the destination buffer. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: remove support for inlining data/extents into the inode fork  (Christoph Hellwig)
Supporting a small bit of data inside the inode fork blows up the fork size a lot: removing the 32 bytes of inline data halves the effective size of the inode fork (and it still has a lot of unused padding left), and the cost of a single kmalloc doesn't show up compared to the cost of reading or creating an inode. It also simplifies the fork management code a lot. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: simplify xfs_reflink_convert_cow  (Christoph Hellwig)
Instead of looking up extents to convert and calling xfs_bmapi_write on each of them, just let xfs_bmapi_write handle the full range. To make this robust, add a new XFS_BMAPI_CONVERT_ONLY flag that only converts ranges and never allocates blocks. [darrick: shorten the stringified CONVERT_ONLY trace flag] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: iterate backwards in xfs_reflink_cancel_cow_blocks  (Christoph Hellwig)
Match the iteration order for extent deletion in the truncate and reflink I/O completion path. This also happens to make implementing the new incore extent list a lot easier. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: introduce the xfs_iext_cursor abstraction  (Christoph Hellwig)
Add a new xfs_iext_cursor structure to hide the direct extent map index manipulations. In addition to the existing lookup/get/insert/remove and update routines, new primitives to get the first and last extent cursor, as well as to move up and down by one extent, are provided. Also new are convenience helpers to increment/decrement the cursor and retrieve the new extent, as well as to peek into the previous/next extent without updating the cursor, and last but not least a macro to iterate over all extents in a fork. [darrick: rename for_each_iext to for_each_xfs_iext] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
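A rough usage sketch of the cursor API; the macro and helper names follow the commit text, but the exact signatures here are assumptions:

    struct xfs_iext_cursor icur;
    struct xfs_bmbt_irec   got;

    /* iterate over every extent in the fork without touching a raw index */
    for_each_xfs_iext(ifp, &icur, &got) {
        /* 'got' holds the current extent */
    }

    /* or position the cursor explicitly and step one extent forward */
    if (xfs_iext_lookup_extent(ip, ifp, offset_fsb, &icur, &got)) {
        xfs_iext_next(ifp, &icur);
        if (xfs_iext_get_extent(ifp, &icur, &got)) {
            /* 'got' now holds the following extent */
        }
    }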
2017-11-06  xfs: iterate over extents in xfs_bmap_extents_to_btree  (Christoph Hellwig)
This actually makes the function very slightly less efficient for now, as we detour through the expanded irec format between the in-core extent format and the on-disk one instead of just endian swapping them. But with the incore extent btree the in-core one will use a different format and the representation will be entirely hidden. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: iterate over extents in xfs_iextents_copy  (Christoph Hellwig)
This actually makes the function very slightly less efficient for now, as we detour through the expanded irec format between the in-core extent format and the on-disk one instead of just endian swapping them. But with the incore extent btree the in-core one will use a different format and the representation will be entirely hidden. It also happens to make the function a whole lot more readable. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: pass an on-disk extent to xfs_bmbt_validate_extent  (Christoph Hellwig)
This prepares for getting rid of the current in-memory extent format. At the end of the series we will change the calling convention again to pass the xfs_bmbt_irec structure once it is available everywhere. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: treat idx as a cursor in xfs_bmap_collapse_extents  (Christoph Hellwig)
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: treat idx as a cursor in xfs_bmap_del_extent_*  (Christoph Hellwig)
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: treat idx as a cursor in xfs_bmap_add_extent_unwritten_real  (Christoph Hellwig)
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: treat idx as a cursor in xfs_bmap_add_extent_hole_real  (Christoph Hellwig)
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: treat idx as a cursor in xfs_bmap_add_extent_hole_delay  (Christoph Hellwig)
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: treat idx as a cursor in xfs_bmap_add_extent_delay_real  (Christoph Hellwig)
Stop poking before and after the index and just increment or decrement it while doing our operations on it to prepare for a new extent list implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: remove a duplicate assignment in xfs_bmap_add_extent_delay_real  (Christoph Hellwig)
Reported-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  xfs: don't create overlapping extents in xfs_bmap_add_extent_delay_real  (Christoph Hellwig)
Two cases in xfs_bmap_add_extent_delay_real currently insert a new extent before updating the existing one that is being split. While this works fine with a simple extent list, a more complex tree can't easily cope with overlapping extents. Reshuffle the code a bit to update the slot of the existing delalloc extent to the new real extent before inserting the shortened delalloc extent before or after it. This avoids the overlapping extents while still allowing us to update the br_startblock field of the delalloc extent with the updated indirect block reservation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2017-11-06  ecryptfs: remove unnecessary i_version bump  (Jeff Layton)
There is no need to bump the i_version counter here, as ecryptfs does not set the SB_I_VERSION flag, and doesn't use it internally. It also only bumps it when the inode is instantiated, which doesn't make much sense. Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
2017-11-06  ecryptfs: use ARRAY_SIZE  (Jérémy Lefaure)
Using the ARRAY_SIZE macro improves the readability of the code.

Found with Coccinelle with the following semantic patch:

@r depends on (org || report)@
type T;
T[] E;
position p;
@@
(
  (sizeof(E)@p /sizeof(*E))
|
  (sizeof(E)@p /sizeof(E[...]))
|
  (sizeof(E)@p /sizeof(T))
)

Signed-off-by: Jérémy Lefaure <jeremy.lefaure@lse.epita.fr>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
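For illustration, the kind of conversion this produces (a generic example, not the actual ecryptfs hunk):

    #include <linux/kernel.h>    /* ARRAY_SIZE(), pr_info() */

    static const char * const names[] = { "one", "two", "three" };

    static void print_names(void)
    {
        size_t i;

        /* before: open-coded element count */
        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
            pr_info("%s\n", names[i]);

        /* after: intent is obvious and survives element type changes */
        for (i = 0; i < ARRAY_SIZE(names); i++)
            pr_info("%s\n", names[i]);
    }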
2017-11-06  ecryptfs: Adjust four checks for null pointers  (Markus Elfring)
The script “checkpatch.pl” pointed information out like the following. Comparison to NULL could be written … Thus fix the affected source code places. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
2017-11-06  ecryptfs: Return an error code only as a constant in ecryptfs_add_global_auth_tok()  (Markus Elfring)
* Return an error code without storing it in an intermediate variable.

* Delete the jump target "out" and the local variable "rc" which became unnecessary with this refactoring.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
2017-11-06  ecryptfs: Delete 21 error messages for a failed memory allocation  (Markus Elfring)
Omit extra messages for a memory allocation failure in these functions. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
2017-11-06  eCryptfs: use after free in ecryptfs_release_messaging()  (Dan Carpenter)
We're freeing the list iterator so we should be using the _safe() version of hlist_for_each_entry(). Fixes: 88b4a07e6610 ("[PATCH] eCryptfs: Public key transport mechanism") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
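The general shape of the fix, shown with a made-up struct for illustration (the actual patch is in ecryptfs_release_messaging()):

    #include <linux/list.h>
    #include <linux/slab.h>

    struct msg_ctx {
        struct hlist_node node;
        /* ... */
    };

    static void release_all(struct hlist_head *head)
    {
        struct msg_ctx *ctx;
        struct hlist_node *tmp;

        /*
         * hlist_for_each_entry(ctx, head, node) would dereference
         * ctx->node.next after kfree(ctx); the _safe variant caches the
         * next pointer in 'tmp' before the loop body runs.
         */
        hlist_for_each_entry_safe(ctx, tmp, head, node) {
            hlist_del(&ctx->node);
            kfree(ctx);
        }
    }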
2017-11-05  f2fs: avoid race in between GC and block exchange  (Chao Yu)
During block exchange in {insert,collapse,move}_range, the page-block mapping is unstable due to mapping movement or recovery, so there should be no concurrent cache read operation relying on such a mapping, nor any cache write operation that could mess up the block exchange. So this patch lets background GC be aware of that. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: save a multiplication for last_nid calculation  (Fan Li)
Use a slightly easier way to calculate last_nid. Signed-off-by: Fan li <fanofcode.li@samsung.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: fix summary info corruption  (Chao Yu)
Sometimes, after running generic/270 of fstest, fsck reports that the summary info and the actual position of a block address in a direct node become inconsistent. The root cause is a race between __f2fs_replace_block and change_curseg, as below:

Thread A                               Thread B
- __clone_blkaddrs
 - f2fs_replace_block
  - __f2fs_replace_block
   - segnoA = GET_SEGNO(sbi, blkaddrA);
   - type = se->type := CURSEG_HOT_DATA
   - if (!IS_CURSEG(sbi, segnoA))
       type = CURSEG_WARM_DATA
                                       - allocate_data_block
                                        - allocate_segment
                                         - get_ssr_segment
                                          - change_curseg(segnoA,
                                                          CURSEG_HOT_DATA)
   - change_curseg(segnoA, CURSEG_WARM_DATA)
    - reset_curseg
     - __set_sit_entry_type
      - change se->type from
        CURSEG_HOT_DATA to CURSEG_WARM_DATA

So finally, the hot curseg is located in segnoA, but the type of segnoA has become CURSEG_WARM_DATA. Then if we invoke __f2fs_replace_block(blkaddrB, blkaddrA, true, false), since blkaddrA is located in segnoA, we will move the warm type curseg to segnoA, then change its summary cache and write it back to the summary block. But segnoA is used by the hot type curseg too; once that curseg moves or persists, it will overwrite the summary block content with its own old summary cache, resulting in an inconsistent status.

This patch tries to fix this issue by introducing a global curseg lock to avoid the race between __f2fs_replace_block and change_curseg.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
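A sketch of the locking scheme described, assuming a hypothetical curseg_lock rw_semaphore in the sb info; the lock placement here is illustrative, not the verbatim patch:

    /* block replacement path: exclusive, cursegs cannot move underneath us */
    down_write(&sbi->curseg_lock);
    /* ... __f2fs_replace_block(): pick type, change_curseg(), update summary ... */
    up_write(&sbi->curseg_lock);

    /* normal allocation path: shared, allocators may still run concurrently */
    down_read(&sbi->curseg_lock);
    /* ... allocate_data_block() -> allocate_segment() -> change_curseg() ... */
    up_read(&sbi->curseg_lock);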
2017-11-05  f2fs: remove dead code in update_meta_page  (Chao Yu)
After commit a468f0ef516f ("f2fs: use crc and cp version to determine roll-forward recovery"), last caller of update_meta_page passing @src with NULL is gone, so remove related dead code there. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: remove unneeded semicolon  (Chao Yu)
Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: don't bother with inode->i_version  (Jeff Layton)
f2fs does not set the SB_I_VERSION flag, so the i_version will never be incremented on write. It was recently changed to increment the i_version on a quota write, which isn't necessary here. Signed-off-by: Jeff Layton <jlayton@redhat.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: check curseg space before foreground GC  (Chao Yu)
When we are close to triggering foreground GC, if there are only a few dirty metas, we can log these dirty metas in the space left in the opened segments instead of triggering foreground GC. With this patch, the total count of foreground GC triggered by test/generic/* of the fstest suite is reduced from 254 to 184. So let's do the check before foreground GC anyway. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: use rw_semaphore to protect SIT cache  (Chao Yu)
There are some cases where the caller does not update the SIT cache under this lock, so let's use an rw_semaphore instead of a mutex to improve concurrent access. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
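The generic mutex-to-rwsem conversion pattern being applied; the lock name below is a placeholder for the SIT cache lock:

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(sit_cache_lock);

    /* lookups that leave the SIT cache untouched can now run concurrently */
    down_read(&sit_cache_lock);
    /* ... read SIT entries ... */
    up_read(&sit_cache_lock);

    /* updates still serialize against all readers and other writers */
    down_write(&sit_cache_lock);
    /* ... modify SIT entries ... */
    up_write(&sit_cache_lock);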
2017-11-05  f2fs: support quota sys files  (Jaegeuk Kim)
This patch supports hidden quota files in the system, which will be used for Android. It requires up-to-date f2fs-tools later than v1.9.0. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: add quota_ino feature infra  (Jaegeuk Kim)
This patch adds quota_ino feature infra to be used for quota files. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: optimize __update_nat_bits  (Fan Li)
Make three modifications to __update_nat_bits:

1. Take the code dealing with the nat with nid 0 out of the loop. Such a nat only needs to be dealt with once at the beginning.

2. Use "nat_index == 0" instead of "start_nid == 0" to decide if it's the first nat block. It's better that we don't assume @start_nid is the first nid of the nat block it's in.

3. Use "if (nat_blk->entries[i].block_addr != NULL_ADDR)" to explicitly confirm the value of block_addr; using the constant makes sure the code is right even if the value of NULL_ADDR changes.

Signed-off-by: Fan li <fanofcode.li@samsung.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: modify for accurate fggc node io stat  (Yunlei He)
modify for accurate fggc node io stat Signed-off-by: Yunlei He <heyunlei@huawei.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  Revert "f2fs: handle dirty segments inside refresh_sit_entry"  (Yunlong Song)
This reverts commit 5e443818fa0b2a2845561ee25bec181424fb2889.

The commit should be reverted because the call sequence of the two parts of code below must be kept:

a. update sit information; it needs to be updated before segment allocation, since the later allocation may trigger SSR, and SSR allocation needs the latest valid block information of all segments.

b. update segment status; it needs to be updated after segment allocation, since we can skip updating the status of the currently opened segment.

Fixes: 5e443818fa0b ("f2fs: handle dirty segments inside refresh_sit_entry")
Suggested-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
[Jaegeuk Kim: remove refresh_sit_entry function]
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: add a function to move nid  (Fan Li)
This patch adds a new function to move a nid from one state to another. The move operation is heavily used; by adding a new function for it we can cut some branches out of several flows. Signed-off-by: Fan li <fanofcode.li@samsung.com> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: export SSR allocation threshold  (Chao Yu)
This patch exports min_ssr_segments threshold in sysfs to let user control triggering SSR allocation flexibly. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: give correct trimmed blocks in fstrim  (Chao Yu)
We now support issuing discard in a specified range during fstrim, so we need to return to the caller the bytes successfully trimmed in that range instead of the bytes of invalid blocks scanned during checkpoint. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: support bio allocation error injection  (Chao Yu)
This patch adds support for bio allocation error injection to simulate an out-of-memory test scenario. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: support get_page error injection  (Chao Yu)
This patch adds support for get_page error injection to simulate an out-of-memory test scenario. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: support soft block reservation  (Yunlong Song)
This extends the reserved_blocks sysfs interface to be a soft threshold, which allows the user to configure it to exceed the currently available user space. This patch also introduces a new sysfs interface called current_reserved_blocks, which shows the blocks that have already been reserved. Signed-off-by: Yunlong Song <yunlong.song@huawei.com> Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: handle error case when adding xattr entry  (Jaegeuk Kim)
This patch fixes recovering incomplete xattr entries remaining in inline xattr and xattr block, caused by any kind of errors. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: support flexible inline xattr size  (Chao Yu)
Now, in products, more and more features based on file encryption are being introduced, and their demand for xattr space is increasing. However, inline xattr has a fixed size of 200 bytes; once the inline xattr space is full, newly added xattr data occupies an additional xattr block, which may bring us more space usage and a performance regression during persisting.

In order to resolve the above issue, it's better to expand the inline xattr size flexibly according to the user's requirement.

So this patch introduces a new filesystem feature 'flexible inline xattr' and a new mount option 'inline_xattr_size=%u'. Once mkfs enables the feature, we can use the option to make f2fs support a flexible inline xattr size.

To support this feature, we add an extra attribute i_inline_xattr_size in the inode layout, indicating how much space inline xattr borrows from the block address mapping space in the inode layout. With this, we can easily locate and store flexible-sized inline xattr data in the inode.

Inode disk layout:

  +----------------------+
  | .i_mode              |
  | ...                  |
  | .i_ext               |
  +----------------------+
  | .i_extra_isize       |
  | .i_inline_xattr_size |-----------+
  | ...                  |           |
  +----------------------+           |
  | .i_addr              |           |
  |  - block address or  |           |
  |  - inline data       |           |
  +----------------------+<---+      v
  |    inline xattr      |    +---inline xattr range
  +----------------------+<---+
  | .i_nid               |
  +----------------------+
  |   node_footer        |
  | (nid, ino, offset)   |
  +----------------------+

Note that we have to consider backward compatibility, which reserved the inline_data space, 200 bytes, all the time, as reported by Sheng Yong. Previous inline data or directory always reserved 200 bytes in the inode layout, even if inline_xattr is disabled. In order to keep inline_dentry's structure for backward compatibility, we get the space back only from inline_data.

Signed-off-by: Chao Yu <yuchao0@huawei.com>
Reported-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
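A sketch of how a per-inode inline xattr size shifts the layout above: the xattr area is carved from the tail of i_addr, so its start depends on i_inline_xattr_size. The slot count and helper below are illustrative assumptions, not the exact f2fs definitions:

    #include <linux/f2fs_fs.h>    /* struct f2fs_inode (assumed layout) */

    #define ADDR_SLOTS_PER_INODE    923    /* i_addr slots, example value */

    static inline __le32 *inline_xattr_start(struct f2fs_inode *ri)
    {
        unsigned int xattr_slots = le16_to_cpu(ri->i_inline_xattr_size);

        /* the last 'xattr_slots' entries of i_addr hold the inline xattrs */
        return &ri->i_addr[ADDR_SLOTS_PER_INODE - xattr_slots];
    }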
2017-11-05  f2fs: show current cp state  (Jaegeuk Kim)
This patch shows whether checkpoint met any error case. Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: add missing quota_initialize  (Jaegeuk Kim)
This patch adds calls to quota_initialize in f2fs_set_acl, f2fs_unlink, and f2fs_rename. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: show # of dirty segments via sysfs  (Jaegeuk Kim)
This patch adds one sysfs entry to show # of dirty segments which can be used for gc timing by user. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  f2fs: stop all the operations by cp_error flag  (Jaegeuk Kim)
This patch replaces to use cp_error flag instead of RDONLY for quota off. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2017-11-05  vfs: grab the lock instead of blocking in __fd_install during resizing  (Mateusz Guzik)
Explicit locking in the fallback case provides a safe state of the table. Getting rid of the blocking semantics makes __fd_install usable again in non-sleepable contexts, which eases backporting efforts. There is a side effect of slightly nicer assembly for the common case, as might_sleep can now be removed. Signed-off-by: Mateusz Guzik <mguzik@redhat.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
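A simplified sketch of the resulting flow, condensed from the description above (not the verbatim fs/file.c change):

    static void fd_install_sketch(struct files_struct *files, unsigned int fd,
                                  struct file *file)
    {
        struct fdtable *fdt;

        rcu_read_lock_sched();
        if (unlikely(files->resize_in_progress)) {
            /*
             * Slow path: a resize is running. Rather than sleeping until it
             * finishes, take the table lock for a consistent view; this also
             * works from non-sleepable contexts.
             */
            rcu_read_unlock_sched();
            spin_lock(&files->file_lock);
            fdt = files_fdtable(files);
            rcu_assign_pointer(fdt->fd[fd], file);
            spin_unlock(&files->file_lock);
            return;
        }
        /* fast path: the table is stable, publish the pointer under RCU */
        fdt = rcu_dereference_sched(files->fdt);
        rcu_assign_pointer(fdt->fd[fd], file);
        rcu_read_unlock_sched();
    }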