path: root/fs
2020-05-25  btrfs: backref: rename and move finish_upper_links()  (Qu Wenruo)
This is the 2nd major part of the generic backref cache. Move it to backref.c so we can reuse it.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move handle_one_tree_block()  (Qu Wenruo)
This function is the major part of the backref cache build process; move it to backref.c so we can reuse it later.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: open code read_fs_root() for handle_indirect_tree_backref()  (Qu Wenruo)
The backref code is going to be moved to backref.c, and read_fs_root() is just a simple wrapper, so open-code it to prepare for the incoming code move.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move should_ignore_root()  (Qu Wenruo)
This function is mostly specific to the relocation backref cache, but since we're moving the main part of the backref cache to backref.c, we need to export it. To avoid confusion, rename the function to btrfs_should_ignore_reloc_root() to make the name a little clearer.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move backref_tree_panic()  (Qu Wenruo)
Also change the parameter: since all callers can easily grab an fs_info, there is no need for all the pointer chasing.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move backref_cache_cleanup()  (Qu Wenruo)
Since we're releasing all existing nodes/edges, rather than cleaning up the mess after an error, "release" is a more proper name here.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move remove_backref_node()  (Qu Wenruo)
Also add a comment explaining the cleanup process, to distinguish it from btrfs_backref_drop_node().
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move drop_backref_node()  (Qu Wenruo)
Also add an extra comment for drop_backref_node(), since it has some similarity with remove_backref_node() and the difference between the two needs explaining.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move free_backref_(node|edge)  (Qu Wenruo)
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move link_backref_edge()  (Qu Wenruo)
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move alloc_backref_edge()  (Qu Wenruo)
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move alloc_backref_node()  (Qu Wenruo)
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: rename and move backref_cache_init()  (Qu Wenruo)
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: rename tree_entry to rb_simple_node and export it  (Qu Wenruo)
Structure tree_entry provides a very simple rb_tree which only uses bytenr as the search index. That tree_entry is used in 3 structures: backref_node, mapping_node and tree_block. Since we're going to make backref_node independent from relocation, it's a good time to extract the tree_entry into rb_simple_node, and export it into misc.h.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
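For readers unfamiliar with the pattern, the sketch below shows roughly what such a bytenr-keyed rb-tree node and lookup look like. This is an illustrative sketch only, not the actual misc.h code, and the lookup helper name is assumed.

    /* Illustrative sketch of a bytenr-keyed rb-tree node and lookup helper. */
    #include <linux/rbtree.h>
    #include <linux/types.h>

    struct rb_simple_node {
            struct rb_node rb_node;
            u64 bytenr;
    };

    /* Hypothetical lookup helper: return the matching rb_node or NULL. */
    static struct rb_node *rb_simple_search(struct rb_root *root, u64 bytenr)
    {
            struct rb_node *n = root->rb_node;
            struct rb_simple_node *entry;

            while (n) {
                    entry = rb_entry(n, struct rb_simple_node, rb_node);
                    if (bytenr < entry->bytenr)
                            n = n->rb_left;
                    else if (bytenr > entry->bytenr)
                            n = n->rb_right;
                    else
                            return n;
            }
            return NULL;
    }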
2020-05-25  btrfs: backref: move btrfs_backref_(node|edge|cache) structures to backref.h  (Qu Wenruo)
These 3 structures are the main part of btrfs backref cache, move them to backref.h to build the basis for later reuse.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: add btrfs_ prefix for backref_node/edge/cache  (Qu Wenruo)
Those three structures are the main elements of backref cache. Add the "btrfs_" prefix for later export.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: refactor useless nodes handling into its own function  (Qu Wenruo)
This patch will also add some comment for the cleanup.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: refactor finishing part of upper linkage into finish_upper_links()  (Qu Wenruo)
After handle_one_tree_backref(), all newly added (not cached) edges and nodes have the following features:
- Only backref_edge::list[LOWER] is linked. This means we can only iterate from bottom to top, not the other direction.
- Newly added nodes are not added to the cache rb_tree yet.
So to finish the backref cache, we still need to finish the links and add all nodes into the backref cache rb_tree. This patch will refactor the existing code into finish_upper_links(), add more comments for each branch, and explain why we need to do all the work.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: remove the open-coded goto loop for breadth-first search  (Qu Wenruo)
build_backref_tree() uses "goto again;" to implement a breadth-first search to build backref cache. This patch will extract most of its work into a wrapper, handle_one_tree_block(), and use a do {} while() loop to implement the same thing.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: pass essential members for alloc_backref_node()  (Qu Wenruo)
Bytenr and level are essential parameters for backref_node, thus it makes sense to initialize them at allocation time.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
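As a rough illustration of the idea, a minimal sketch of such an allocator follows. It is not the exact patch; the struct types come from relocation.c, and the field and counter names are assumptions based on the rest of this series.

    /* Sketch: allocate a backref node with its essential members set. */
    static struct backref_node *alloc_backref_node(struct backref_cache *cache,
                                                   u64 bytenr, int level)
    {
            struct backref_node *node = kzalloc(sizeof(*node), GFP_NOFS);

            if (!node)
                    return NULL;
            INIT_LIST_HEAD(&node->list);
            INIT_LIST_HEAD(&node->upper);
            INIT_LIST_HEAD(&node->lower);
            RB_CLEAR_NODE(&node->rb_node);
            cache->nr_nodes++;
            /* The essential members are now set at allocation time. */
            node->bytenr = bytenr;
            node->level = level;
            return node;
    }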
2020-05-25  btrfs: reloc: use wrapper to replace open-coded edge linking  (Qu Wenruo)
Since backref_edge is used to connect upper and lower backref nodes, and needs to access both nodes, some code can look pretty nasty:
    list_add_tail(&edge->list[LOWER], &cur->upper);
The above code will link @cur to the LOWER side of the edge, while both "LOWER" and "upper" words show up. This can sometimes be very confusing for readers to grasp.
This patch introduces a new wrapper, link_backref_edge(), to handle the linking behavior, which also has an extra ASSERT() to ensure the caller won't pass wrong nodes.
Also, this updates the comment of the related lists of backref_node and backref_edge, to make it clearer what each list points to.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
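A minimal sketch of what such a wrapper could look like, for illustration only: the types, the LOWER/UPPER enum and ASSERT() come from relocation.c/ctree.h, the exact sanity check is an assumption, and the real helper may take a flag deciding which side(s) to link.

    /* Sketch: link an edge to its lower and upper backref nodes in one place. */
    static void link_backref_edge(struct backref_edge *edge,
                                  struct backref_node *lower,
                                  struct backref_node *upper)
    {
            /* Assumed sanity check: upper must sit one level above lower. */
            ASSERT(upper && lower && upper->level == lower->level + 1);
            edge->node[LOWER] = lower;
            edge->node[UPPER] = upper;
            /* The LOWER end of the edge hangs off the lower node's upper list. */
            list_add_tail(&edge->list[LOWER], &lower->upper);
            /* The UPPER end of the edge hangs off the upper node's lower list. */
            list_add_tail(&edge->list[UPPER], &upper->lower);
    }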
2020-05-25  btrfs: reloc: refactor indirect tree backref processing into its own function  (Qu Wenruo)
The processing of indirect tree backref (TREE_BLOCK_REF) is the most complex work. We need to grab the fs root, do a tree search to locate all its parent nodes, link all needed edges, and put all uncached edges to pending edge list. This is definitely worth a helper function.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: refactor direct tree backref processing into its own function  (Qu Wenruo)
For BTRFS_SHARED_BLOCK_REF_KEY, its processing is straightforward, as we know the parent node bytenr directly. If the parent is already cached, or is a root, call it a day. If the parent is not cached, add it to the pending list. This patch will just refactor this part into its own function, handle_direct_tree_backref(), and add some comment explaining the @ref_key parameter.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: make reloc root search-specific for relocation backref cache  (Qu Wenruo)
find_reloc_root() searches reloc_control::reloc_root_tree to find the reloc root. This behavior is only useful for the relocation backref cache. For the incoming more generic purpose backref cache, we don't care about who owns the reloc root, but only care if it's a reloc root.
So this patch makes the following modifications to make the reloc root search more specific to relocation backref:
- Add backref_node::is_reloc_root
  This will be an extra indicator for the generic purpose backref cache. Users don't need to read the root key from backref_node::root to determine if it's a reloc root. Also, a reloc tree root is useless for the generic cache and will be queued to the useless list.
- Add backref_cache::is_reloc
  This will allow backref cache code to behave differently for the generic purpose backref cache and the relocation backref cache.
- Pass fs_info to find_reloc_root()
- Export find_reloc_root()
  So backref.c can utilize this function.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: add backref_cache::fs_info member  (Qu Wenruo)
Add this member so that we can grab fs_info without the help from reloc_control.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: add backref_cache::pending_edge and backref_cache::useless_node  (Qu Wenruo)
These two new members will act the same as the existing local lists, @useless and @list in build_backref_tree(). Currently build_backref_tree() is only executed serially, thus moving such local list into backref_cache is still safe. Also since we're here, use list_first_entry() to replace a lot of list_entry() calls after !list_empty().
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: reloc: rename mark_block_processed and __mark_block_processed  (Qu Wenruo)
These two functions are weirdly named: mark_block_processed() in fact just marks a range dirty unconditionally, while __mark_block_processed() does an extra check before doing the marking.
This patch will open code the old mark_block_processed(), and rename __mark_block_processed() to remove the "__" prefix. Since we're here, also kill the forward declaration, which also allows killing in_block_group() in favour of the in_range() macro.
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
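For context, in_range() is a plain open-coded range check. The sketch below shows the idea; the macro body and the block group field names are illustrative assumptions, not quoted from the patch.

    /* Sketch: a generic "is b within [first, first + len)" check. */
    #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))

    /*
     * With it, a dedicated in_block_group(bytenr, bg) helper is unnecessary;
     * the check can be written directly as (field names assumed):
     *     in_range(bytenr, bg->start, bg->length)
     */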
2020-05-25  btrfs: reloc: use btrfs_backref_iter infrastructure  (Qu Wenruo)
In the core function of relocation, build_backref_tree(), we need to iterate all backref items of one tree block. Use the btrfs_backref_iter infrastructure to do the loop and make the code more readable. The backref item loop becomes much easier to read:
    ret = btrfs_backref_iter_start(iter, cur->bytenr);
    for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
            /* The really important work */
    }
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: implement btrfs_backref_iter_next()  (Qu Wenruo)
This function will go to the next inline/keyed backref for btrfs_backref_iter infrastructure.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-25  btrfs: backref: introduce the skeleton of btrfs_backref_iter  (Qu Wenruo)
Due to the complex nature of the btrfs extent tree, when we want to iterate all backrefs of one extent, this involves quite a lot of work, like searching the EXTENT_ITEM/METADATA_ITEM and iterating through inline and keyed backrefs. Normally this would result in complex code, something like:
    btrfs_search_slot()
    /* Ensure we are at EXTENT_ITEM/METADATA_ITEM */
    while (1) {             /* Loop for extent tree items */
            while (ptr < end) {     /* Loop for inlined items */
                    /* Real work here */
            }
    next:
            ret = btrfs_next_item()
            /* Ensure we're still at keyed item for specified bytenr */
    }
The idea of btrfs_backref_iter is to avoid such complex and hard to read code structure, in favour of something like the following:
    iter = btrfs_backref_iter_alloc();
    ret = btrfs_backref_iter_start(iter, bytenr);
    if (ret < 0)
            goto out;
    for (; ; ret = btrfs_backref_iter_next(iter)) {
            /* Real work here */
    }
    out:
    btrfs_backref_iter_free(iter);
This patch is just the skeleton + btrfs_backref_iter_start() code.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
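To make the skeleton a little more concrete, here is a rough sketch of the state such an iterator would need to carry between the _start() and _next() calls. The field names are assumptions for illustration, not necessarily the actual backref.h layout.

    #include <linux/types.h>

    /* Sketch: per-extent backref iterator state (illustrative field names). */
    struct btrfs_backref_iter {
            u64 bytenr;                     /* extent being iterated */
            struct btrfs_path *path;        /* search path kept across calls */
            struct btrfs_fs_info *fs_info;
            struct btrfs_key cur_key;       /* key of the current item */
            u32 item_ptr;                   /* start offset of the extent item */
            u32 cur_ptr;                    /* current position inside the item */
            u32 end_ptr;                    /* end of the inline refs */
    };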
2020-05-25  btrfs: add missing annotation for btrfs_tree_lock()  (Jules Irenge)
Sparse reports a warning at btrfs_tree_lock():
    warning: context imbalance in btrfs_tree_lock() - wrong count at exit
The root cause is the missing annotation at btrfs_tree_lock(). Add the missing __acquires(&eb->lock) annotation.
Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
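For readers unfamiliar with sparse context annotations, the generic pattern looks like the sketch below. This is an illustrative example, not the btrfs function itself: the annotation tells sparse that the function exits holding the lock, which balances its context count.

    #include <linux/spinlock.h>

    /* Sketch: a lock-taking helper annotated so sparse sees the acquire. */
    static void example_lock(spinlock_t *lock)
            __acquires(lock)
    {
            spin_lock(lock);
    }

    /* The matching unlock side would carry __releases(lock). */
    static void example_unlock(spinlock_t *lock)
            __releases(lock)
    {
            spin_unlock(lock);
    }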
2020-05-25  btrfs: add missing annotation for btrfs_lock_cluster()  (Jules Irenge)
Sparse reports a warning at btrfs_lock_cluster():
    warning: context imbalance in btrfs_lock_cluster() - wrong count
The root cause is the missing annotation at btrfs_lock_cluster(). Add the missing __acquires(&cluster->refill_lock) annotation.
Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2020-05-24  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
The MSCC bug fix in 'net' had to be slightly adjusted because the register accesses are done slightly differently in net-next.
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from David Miller:
 1) Fix RCU warnings in ipv6 multicast router code, from Madhuparna Bhowmik.
 2) Nexthop attributes aren't being checked properly because of mis-initialized iterator, from David Ahern.
 3) Revert iop_idents_reserve() change as it caused performance regressions and was just working around what is really a UBSAN bug in the compiler. From Yuqi Jin.
 4) Read MAC address properly from ROM in bmac driver (double iteration proceeds past end of address array), from Jeremy Kerr.
 5) Add Microsoft Surface device IDs to r8152, from Marc Payne.
 6) Prevent reference to freed SKB in __netif_receive_skb_core(), from Boris Sukholitko.
 7) Fix ACK discard behavior in rxrpc, from David Howells.
 8) Preserve flow hash across packet scrubbing in wireguard, from Jason A. Donenfeld.
 9) Cap option length properly for SO_BINDTODEVICE in AX25, from Eric Dumazet.
10) Fix encryption error checking in kTLS code, from Vadim Fedorenko.
11) Missing BPF prog ref release in flow dissector, from Jakub Sitnicki.
12) dst_cache must be used with BH disabled in tipc, from Eric Dumazet.
13) Fix use after free in mlxsw driver, from Jiri Pirko.
14) Order kTLS key destruction properly in mlx5 driver, from Tariq Toukan.
15) Check devm_platform_ioremap_resource() return value properly in several drivers, from Tiezhu Yang.
* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (71 commits)
  net: smsc911x: Fix runtime PM imbalance on error
  net/mlx4_core: fix a memory leak bug.
  net: ethernet: ti: cpsw: fix ASSERT_RTNL() warning during suspend
  net: phy: mscc: fix initialization of the MACsec protocol mode
  net: stmmac: don't attach interface until resume finishes
  net: Fix return value about devm_platform_ioremap_resource()
  net/mlx5: Fix error flow in case of function_setup failure
  net/mlx5e: CT: Correctly get flow rule
  net/mlx5e: Update netdev txq on completions during closure
  net/mlx5: Annotate mutex destroy for root ns
  net/mlx5: Don't maintain a case of del_sw_func being null
  net/mlx5: Fix cleaning unmanaged flow tables
  net/mlx5: Fix memory leak in mlx5_events_init
  net/mlx5e: Fix inner tirs handling
  net/mlx5e: kTLS, Destroy key object after destroying the TIS
  net/mlx5e: Fix allowed tc redirect merged eswitch offload cases
  net/mlx5: Avoid processing commands before cmdif is ready
  net/mlx5: Fix a race when moving command interface to events mode
  net/mlx5: Add command entry handling completion
  rxrpc: Fix a memory leak in rxkad_verify_response()
  ...
2020-05-23  rxrpc: Fix a warning  (David Howells)
Fix a warning due to an uninitialised variable.
    In file included from ../fs/afs/fs_probe.c:11:
    ../fs/afs/fs_probe.c: In function 'afs_fileserver_probe_result':
    ../fs/afs/internal.h:1453:2: warning: 'rtt_us' may be used uninitialized in this function [-Wmaybe-uninitialized]
     1453 |  printk("[%-6.6s] "FMT"\n", current->comm ,##__VA_ARGS__)
          |  ^~~~~~
    ../fs/afs/fs_probe.c:35:15: note: 'rtt_us' was declared here
Signed-off-by: David Howells <dhowells@redhat.com>
2020-05-22  Merge tag 'rxrpc-fixes-20200520' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs  (David S. Miller)
David Howells says:
====================
rxrpc: Fix retransmission timeout and ACK discard
Here are a couple of fixes and an extra tracepoint for AF_RXRPC:
 (1) Calculate the RTO pretty much as TCP does, rather than making something up, including an initial 4s timeout (which causes return probes from the fileserver to fail if a packet goes missing), and add backoff.
 (2) Fix the discarding of out-of-order received ACKs. We mustn't let the hard-ACK point regress, nor do we want to do unnecessary retransmission because the soft-ACK list regresses. This is not trivial, however, due to some loose wording in various old protocol specs, the ACK field that should be used for this sometimes has the wrong information in it.
 (3) Add a tracepoint to log a discarded ACK.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-22  Merge tag 'io_uring-5.7-2020-05-22' of git://git.kernel.dk/linux-block  (Linus Torvalds)
Pull io_uring fixes from Jens Axboe:
 "A small collection of small fixes that should go into this release:
  - Two fixes for async request preparation (Pavel)
  - Busy clear fix for SQPOLL (Xiaoguang)
  - Don't use kiocb->private for O_DIRECT buf index, some file systems use it (Bijan)
  - Kill dead check in io_splice()
  - Ensure sqo_wait is initialized early
  - Cancel task_work if we fail adding to original process
  - Only add (IO)pollable requests to iopoll list, fixing a regression in this merge window"
* tag 'io_uring-5.7-2020-05-22' of git://git.kernel.dk/linux-block:
  io_uring: reset -EBUSY error when io sq thread is waken up
  io_uring: don't add non-IO requests to iopoll pending list
  io_uring: don't use kiocb.private to store buf_index
  io_uring: cancel work if task_work_add() fails
  io_uring: remove dead check in io_splice()
  io_uring: fix FORCE_ASYNC req preparation
  io_uring: don't prepare DRAIN reqs twice
  io_uring: initialize ctx->sqo_wait earlier
2020-05-22  block: remove the error_sector argument to blkdev_issue_flush  (Christoph Hellwig)
The argument isn't used by any caller, and drivers don't fill out bi_sector for flush requests either.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-05-21  exfat: add the dummy mount options to be backward compatible with staging/exfat  (Namjae Jeon)
As Ubuntu and Fedora have released new versions that use a kernel version equal to or higher than v5.4, they have started to support the kernel exfat filesystem. Linus reported a mount error with the new version of exfat on Fedora:
    exfat: Unknown parameter 'namecase'
This is because there is a difference in mount options between the old staging/exfat and the new exfat: the utf8, debug, and codepage options as well as namecase have been removed from the new exfat. This patch adds these as dummy, deprecated mount options to stay backward compatible with the old one.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Eric Sandeen <sandeen@sandeen.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-05-21  Merge tag 'fiemap-regression-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4  (Linus Torvalds)
Pull ext4 fixes from Ted Ts'o:
 "Fix regression in ext4's FIEMAP handling introduced in v5.7-rc1"
* tag 'fiemap-regression-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: fix fiemap size checks for bitmap files
  ext4: fix EXT4_MAX_LOGICAL_BLOCK macro
2020-05-21  block: remove ioctl_by_bdev  (Christoph Hellwig)
No callers left.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-05-20  Merge tag 'for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs  (Linus Torvalds)
Pull UBI and UBIFS fixes from Richard Weinberger:
 - Correctly set next cursor for detailed_erase_block_info debugfs file
 - Don't use crypto_shash_descsize() for digest size in UBIFS
 - Remove broken lazytime support from UBIFS
* tag 'for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
  ubi: Fix seq_file usage in detailed_erase_block_info debugfs file
  ubifs: fix wrong use of crypto_shash_descsize()
  ubifs: remove broken lazytime support
2020-05-20  Merge tag 'ovl-fixes-5.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs  (Linus Torvalds)
Pull overlayfs fixes from Miklos Szeredi:
 "Fix two bugs introduced in this cycle and one introduced in v5.5"
* tag 'ovl-fixes-5.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs:
  ovl: potential crash in ovl_fid_to_fh()
  ovl: clear ATTR_OPEN from attr->ia_valid
  ovl: clear ATTR_FILE from attr->ia_valid
2020-05-20  pipe: Fix pipe_full() test in opipe_prep().  (Tetsuo Handa)
syzbot is reporting that splice()ing from a non-empty read side to an already-full write side causes an unkillable task, because opipe_prep() erroneously fails to invert the pipe_full() test.
    CPU: 0 PID: 9460 Comm: syz-executor.5 Not tainted 5.6.0-rc3-next-20200228-syzkaller #0
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    RIP: 0010:rol32 include/linux/bitops.h:105 [inline]
    RIP: 0010:iterate_chain_key kernel/locking/lockdep.c:369 [inline]
    RIP: 0010:__lock_acquire+0x6a3/0x5270 kernel/locking/lockdep.c:4178
    Call Trace:
     lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4720
     __mutex_lock_common kernel/locking/mutex.c:956 [inline]
     __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
     pipe_lock_nested fs/pipe.c:66 [inline]
     pipe_double_lock+0x1a0/0x1e0 fs/pipe.c:104
     splice_pipe_to_pipe fs/splice.c:1562 [inline]
     do_splice+0x35f/0x1520 fs/splice.c:1141
     __do_sys_splice fs/splice.c:1447 [inline]
     __se_sys_splice fs/splice.c:1427 [inline]
     __x64_sys_splice+0x2b5/0x320 fs/splice.c:1427
     do_syscall_64+0xf6/0x790 arch/x86/entry/common.c:295
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
Reported-by: syzbot+b48daca8639150bc5e73@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?id=9386d051e11e09973d5a4cf79af5e8cedf79386d
Fixes: 8cefc107ca54c8b0 ("pipe: Use head and tail pointers for the ring, not cursor and length")
Cc: stable@vger.kernel.org # 5.5+
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
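To illustrate the semantics being fixed, here is a minimal sketch using the standard pipe ring helper, not the actual diff: pipe_full() is true when no slot is free, so opipe_prep() must wait while that condition holds, not while it is false. The helper name below is hypothetical.

    #include <linux/pipe_fs_i.h>

    /* Sketch: the condition the output-pipe preparation has to wait on. */
    static bool opipe_needs_to_wait(const struct pipe_inode_info *pipe)
    {
            /* True while every slot in the ring is occupied. */
            return pipe_full(pipe->head, pipe->tail, pipe->max_usage);
    }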
2020-05-20  fs: rename pipe_buf ->steal to ->try_steal  (Christoph Hellwig)
And replace the arcane return value convention with a simple bool where true means success and false means failure.
[AV: braino fix folded in]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
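A sketch of the resulting calling convention, assuming the usual pipe_buf_* wrapper shape rather than quoting the patch: the caller simply gets true on success and false otherwise.

    #include <linux/pipe_fs_i.h>

    /* Sketch: try to steal the page backing a pipe buffer. */
    static inline bool pipe_buf_try_steal(struct pipe_inode_info *pipe,
                                          struct pipe_buffer *buf)
    {
            /* Buffers whose ops don't support stealing simply report failure. */
            if (!buf->ops->try_steal)
                    return false;
            return buf->ops->try_steal(pipe, buf);
    }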
2020-05-20  fs: make the pipe_buf_operations ->confirm operation optional  (Christoph Hellwig)
Just return 0 for success if it is not present.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
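In wrapper form that means roughly the following (a sketch, assuming the usual pipe_buf_confirm() helper; the next entry applies the same pattern to ->steal):

    #include <linux/pipe_fs_i.h>

    /* Sketch: confirm a pipe buffer, treating a missing callback as success. */
    static inline int pipe_buf_confirm(struct pipe_inode_info *pipe,
                                       struct pipe_buffer *buf)
    {
            if (!buf->ops->confirm)
                    return 0;       /* no callback: nothing to confirm */
            return buf->ops->confirm(pipe, buf);
    }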
2020-05-20  fs: make the pipe_buf_operations ->steal operation optional  (Christoph Hellwig)
Just return 1 for failure if it is not present.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-05-20  pipe: merge anon_pipe_buf*_ops  (Christoph Hellwig)
All the op vectors are exactly the same, they are just used to encode packet or nomerge behavior. There already is a flag for the packet behavior, so just add a new one to allow for merging. Inverting it vs the previous nomerge special casing actually allows for much nicer code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
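A sketch of the flag-based approach; the flag name is an assumption mirroring the existing PIPE_BUF_FLAG_PACKET style, and the helper is hypothetical.

    #include <linux/pipe_fs_i.h>

    /*
     * Sketch: a single op vector plus a per-buffer "can merge" flag replaces
     * the separate "nomerge" op vectors.
     */
    static bool pipe_buf_can_merge(const struct pipe_buffer *buf)
    {
            return buf->flags & PIPE_BUF_FLAG_CAN_MERGE;
    }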
2020-05-20  fs: simplify do_splice_from  (Christoph Hellwig)
No need for a local function pointer when we can trivially branch on the presence of ->splice_write.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
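The simplified shape looks roughly like this. It is a sketch of the fs/splice.c internals with permission and size checks elided, assuming default_file_splice_write() as the existing fallback; it is not the exact patch.

    #include <linux/fs.h>
    #include <linux/splice.h>

    /* Sketch: branch directly on ->splice_write instead of a local fn pointer. */
    static long do_splice_from(struct pipe_inode_info *pipe, struct file *out,
                               loff_t *ppos, size_t len, unsigned int flags)
    {
            if (out->f_op->splice_write)
                    return out->f_op->splice_write(pipe, out, ppos, len, flags);
            return default_file_splice_write(pipe, out, ppos, len, flags);
    }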
2020-05-20  fs: simplify do_splice_to  (Christoph Hellwig)
No need for a local function pointer when we can trivially branch on the presence of ->splice_read.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>