path: root/fs
2015-05-11  Merge branch 'for-4.1' of git://linux-nfs.org/~bfields/linux  (Linus Torvalds)

Pull nfsd bugfixes from Bruce Fields:
 "Mainly pnfs fixes (and for problems with generic callback code made
  more obvious by pnfs)"

* 'for-4.1' of git://linux-nfs.org/~bfields/linux:
  nfsd: skip CB_NULL probes for 4.1 or later
  nfsd: fix callback restarts
  nfsd: split transport vs operation errors for callbacks
  svcrpc: fix potential GSSX_ACCEPT_SEC_CONTEXT decoding failures
  nfsd: fix pNFS return on close semantics
  nfsd: fix the check for confirmed openowner in nfs4_preprocess_stateid_op
  nfsd/blocklayout: pretend we can send deviceid notifications
2015-05-11  Btrfs: fix race when reusing stale extent buffers that leads to BUG_ON  (Filipe Manana)

There's a race between releasing extent buffers that are flagged as
stale and recycling them that makes us hit the following BUG_ON at
btrfs_release_extent_buffer_page:

    BUG_ON(extent_buffer_under_io(eb))

The BUG_ON is triggered because the extent buffer has the flag
EXTENT_BUFFER_DIRTY set as a consequence of having been reused and made
dirty by another concurrent task.

Here follows the sequence of steps that leads to the BUG_ON, originally
laid out as a three-column interleaving across CPU 0, CPU 1 and CPU 2
and shown here in chronological order:

    path->nodes[0] == eb X
    X->refs == 2 (1 for the tree, 1 for the path)
    btrfs_header_generation(X) == current trans id
    flag EXTENT_BUFFER_DIRTY set on X

    btrfs_release_path(path)
       unlocks X

    reads eb X
       X->refs incremented to 3
    locks eb X
    btrfs_del_items(X)
       X becomes empty
       clean_tree_block(X)
          clear EXTENT_BUFFER_DIRTY from X
       btrfs_del_leaf(X)
          unlocks X
          extent_buffer_get(X)
             X->refs incremented to 4
          btrfs_free_tree_block(X)
             X's range is not pinned
             X's range added to free space cache
          free_extent_buffer_stale(X)
             lock X->refs_lock
             set EXTENT_BUFFER_STALE on X
             release_extent_buffer(X)
                X->refs decremented to 3
                unlocks X->refs_lock
    btrfs_release_path()
       unlocks X
       free_extent_buffer(X)
          X->refs becomes 2

    __btrfs_cow_block(Y)
       btrfs_alloc_tree_block()
          btrfs_reserve_extent()
             find_free_extent()
                gets offset == X->start
          btrfs_init_new_buffer(X->start)
             btrfs_find_create_tree_block(X->start)
                alloc_extent_buffer(X->start)
                   find_extent_buffer(X->start)
                      finds eb X in radix tree

    free_extent_buffer(X)
       lock X->refs_lock
       test X->refs == 2
       test bit EXTENT_BUFFER_STALE is set
       test !extent_buffer_under_io(eb)

                      increments X->refs to 3
                      mark_extent_buffer_accessed(X)
                         check_buffer_tree_ref(X)
                            --> does nothing, X->refs >= 2 and
                                EXTENT_BUFFER_TREE_REF is set in X
                      clear EXTENT_BUFFER_STALE from X
                      locks X
                   btrfs_mark_buffer_dirty()
                      set_extent_buffer_dirty(X)
                         check_buffer_tree_ref(X)
                            --> does nothing, X->refs >= 2 and
                                EXTENT_BUFFER_TREE_REF is set
                         sets EXTENT_BUFFER_DIRTY on X

       test and clear EXTENT_BUFFER_TREE_REF
       decrements X->refs to 2
       release_extent_buffer(X)
          decrements X->refs to 1
          unlock X->refs_lock

    unlock X
    free_extent_buffer(X)
       lock X->refs_lock
       release_extent_buffer(X)
          decrements X->refs to 0
          btrfs_release_extent_buffer_page(X)
             BUG_ON(extent_buffer_under_io(X))
                 --> EXTENT_BUFFER_DIRTY set on X

Fix this by making find_extent_buffer() wait for any ongoing task
currently executing free_extent_buffer()/free_extent_buffer_stale() if
the extent buffer has the stale flag set. A cleaner alternative would
be to always increment the extent buffer's reference count while
holding its refs_lock spinlock, but find_extent_buffer() is a
performance-critical area and that would cause lock contention whenever
multiple tasks search for the same extent buffer concurrently.

A build server running a SLES 12 kernel (3.12 kernel + over 450
upstream btrfs patches backported from newer kernels) was hitting this
often:

[1212302.461948] kernel BUG at ../fs/btrfs/extent_io.c:4507!
(...)
[1212302.470219] CPU: 1 PID: 19259 Comm: bs_sched Not tainted 3.12.36-38-default #1
[1212302.540792] Hardware name: Supermicro PDSM4/PDSM4, BIOS 6.00 04/17/2006
[1212302.540792] task: ffff8800e07e0100 ti: ffff8800d6412000 task.ti: ffff8800d6412000
[1212302.540792] RIP: 0010:[<ffffffffa0507081>] [<ffffffffa0507081>] btrfs_release_extent_buffer_page.constprop.51+0x101/0x110 [btrfs]
(...)
[1212302.630008] Call Trace:
[1212302.630008] [<ffffffffa05070cd>] release_extent_buffer+0x3d/0xa0 [btrfs]
[1212302.630008] [<ffffffffa04c2d9d>] btrfs_release_path+0x1d/0xa0 [btrfs]
[1212302.630008] [<ffffffffa04c5c7e>] read_block_for_search.isra.33+0x13e/0x3a0 [btrfs]
[1212302.630008] [<ffffffffa04c8094>] btrfs_search_slot+0x3f4/0xa80 [btrfs]
[1212302.630008] [<ffffffffa04cf5d8>] lookup_inline_extent_backref+0xf8/0x630 [btrfs]
[1212302.630008] [<ffffffffa04d13dd>] __btrfs_free_extent+0x11d/0xc40 [btrfs]
[1212302.630008] [<ffffffffa04d64a4>] __btrfs_run_delayed_refs+0x394/0x11d0 [btrfs]
[1212302.630008] [<ffffffffa04db379>] btrfs_run_delayed_refs.part.66+0x69/0x280 [btrfs]
[1212302.630008] [<ffffffffa04ed2ad>] __btrfs_end_transaction+0x2ad/0x3d0 [btrfs]
[1212302.630008] [<ffffffffa04f7505>] btrfs_evict_inode+0x4a5/0x500 [btrfs]
[1212302.630008] [<ffffffff811b9e28>] evict+0xa8/0x190
[1212302.630008] [<ffffffff811b0330>] do_unlinkat+0x1a0/0x2b0

I was also able to reproduce this on a 3.19 kernel, corresponding to
Chris' integration branch from about a month ago, running the following
stress test on a qemu/kvm guest (with 4 virtual CPUs and 16GB of RAM):

  while true; do
      mkfs.btrfs -l 4096 -f -b `expr 20 \* 1024 \* 1024 \* 1024` /dev/sdd
      mount /dev/sdd /mnt
      snapshot_cmd="btrfs subvolume snapshot -r /mnt"
      snapshot_cmd="$snapshot_cmd /mnt/snap_\`date +'%H_%M_%S_%N'\`"
      fsstress -d /mnt -n 25000 -p 8 -x "$snapshot_cmd" -X 100
      umount /mnt
  done

This usually triggers the BUG_ON within less than 24 hours:

[49558.618097] ------------[ cut here ]------------
[49558.619732] kernel BUG at fs/btrfs/extent_io.c:4551!
(...)
[49558.620031] CPU: 3 PID: 23908 Comm: fsstress Tainted: G        W      3.19.0-btrfs-next-7+ #3
[49558.620031] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
[49558.620031] task: ffff8800319fc0d0 ti: ffff880220da8000 task.ti: ffff880220da8000
[49558.620031] RIP: 0010:[<ffffffffa0476b1a>] [<ffffffffa0476b1a>] btrfs_release_extent_buffer_page+0x20/0xe9 [btrfs]
(...)
[49558.620031] Call Trace:
[49558.620031] [<ffffffffa0476c73>] release_extent_buffer+0x90/0xd3 [btrfs]
[49558.620031] [<ffffffff8142b10c>] ? _raw_spin_lock+0x3b/0x43
[49558.620031] [<ffffffffa0477052>] ? free_extent_buffer+0x37/0x94 [btrfs]
[49558.620031] [<ffffffffa04770ab>] free_extent_buffer+0x90/0x94 [btrfs]
[49558.620031] [<ffffffffa04396d5>] btrfs_release_path+0x4a/0x69 [btrfs]
[49558.620031] [<ffffffffa0444907>] __btrfs_free_extent+0x778/0x80c [btrfs]
[49558.620031] [<ffffffffa044a485>] __btrfs_run_delayed_refs+0xad2/0xc62 [btrfs]
[49558.728054] [<ffffffff811420d5>] ? kmemleak_alloc_recursive.constprop.52+0x16/0x18
[49558.728054] [<ffffffffa044c1e8>] btrfs_run_delayed_refs+0x6d/0x1ba [btrfs]
[49558.728054] [<ffffffffa045917f>] ? join_transaction.isra.9+0xb9/0x36b [btrfs]
[49558.728054] [<ffffffffa045a75c>] btrfs_commit_transaction+0x4c/0x981 [btrfs]
[49558.728054] [<ffffffffa0434f86>] btrfs_sync_fs+0xd5/0x10d [btrfs]
[49558.728054] [<ffffffff81155923>] ? iterate_supers+0x60/0xc4
[49558.728054] [<ffffffff8117966a>] ? do_sync_work+0x91/0x91
[49558.728054] [<ffffffff8117968a>] sync_fs_one_sb+0x20/0x22
[49558.728054] [<ffffffff81155939>] iterate_supers+0x76/0xc4
[49558.728054] [<ffffffff811798e8>] sys_sync+0x55/0x83
[49558.728054] [<ffffffff8142bbd2>] system_call_fastpath+0x12/0x17

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Chris Mason <clm@fb.com>
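A condensed sketch of the approach described above, in
find_extent_buffer() (illustrative, not the verbatim upstream diff; the
real comment block is much longer):

  static struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
                                                  u64 start)
  {
          struct extent_buffer *eb;

          rcu_read_lock();
          eb = radix_tree_lookup(&fs_info->buffer_radix,
                                 start >> PAGE_CACHE_SHIFT);
          if (eb && atomic_inc_not_zero(&eb->refs)) {
                  rcu_read_unlock();
                  /*
                   * A concurrent task may still be inside
                   * free_extent_buffer()/free_extent_buffer_stale(),
                   * manipulating eb->refs under eb->refs_lock.  If the
                   * buffer is flagged stale, take and release refs_lock
                   * so we wait for that task to finish before reusing
                   * the buffer.
                   */
                  if (test_bit(EXTENT_BUFFER_STALE, &eb->bflags)) {
                          spin_lock(&eb->refs_lock);
                          spin_unlock(&eb->refs_lock);
                  }
                  mark_extent_buffer_accessed(eb, NULL);
                  return eb;
          }
          rcu_read_unlock();
          return NULL;
  }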
2015-05-11  Btrfs: fix race between block group creation and their cache writeout  (Filipe Manana)

Creating a block group has two distinct phases:

Phase 1 - creates the btrfs_block_group_cache item and adds it to the
rbtree fs_info->block_group_cache_tree and to the corresponding list
space_info->block_groups[];

Phase 2 - adds the block group item to the extent tree and the
corresponding items to the chunk tree.

The first phase adds the btrfs_block_group_cache item to a list of
pending block groups in the transaction handle, and phase 2 happens
when btrfs_end_transaction() is called against the transaction handle.

It happens that once phase 1 completes, other concurrent tasks that use
their own transaction handle, but point to the same running transaction
(struct btrfs_trans_handle->transaction), can use this block group for
space allocations and therefore mark it dirty. Dirty block groups are
tracked in a list belonging to the currently running transaction
(struct btrfs_transaction) and not in the transaction handle
(btrfs_trans_handle). This is a problem because once a task calls
btrfs_commit_transaction(), it calls btrfs_start_dirty_block_groups(),
which sees all dirty block groups and attempts to start their writeout,
including those that are still attached to the transaction handle of
some concurrent task that hasn't called btrfs_end_transaction() yet -
which means those block groups haven't gone through phase 2 yet.
Therefore when write_one_cache_group() is called, it won't find the
block group items in the extent tree and will abort the current
transaction with -ENOENT, turning the fs read-only and requiring a
remount.

Fix this by ignoring -ENOENT when looking for block group items in the
extent tree when we attempt to start the writeout of the block group
caches outside the critical section of the transaction commit. We will
try again later, during the critical section, and if we still don't
find the block group item in the extent tree there, we then abort the
current transaction.

This issue happened twice, once while running fstests btrfs/067 and
once for btrfs/078, which produced the following trace:

[ 3278.703014] WARNING: CPU: 7 PID: 18499 at fs/btrfs/super.c:260 __btrfs_abort_transaction+0x52/0x114 [btrfs]()
[ 3278.707329] BTRFS: Transaction aborted (error -2)
(...)
[ 3278.731555] Call Trace:
[ 3278.732396] [<ffffffff8142fa46>] dump_stack+0x4f/0x7b
[ 3278.733860] [<ffffffff8108b6a2>] ? console_unlock+0x361/0x3ad
[ 3278.735312] [<ffffffff81045ea5>] warn_slowpath_common+0xa1/0xbb
[ 3278.736874] [<ffffffffa03ada6d>] ? __btrfs_abort_transaction+0x52/0x114 [btrfs]
[ 3278.738302] [<ffffffff81045f05>] warn_slowpath_fmt+0x46/0x48
[ 3278.739520] [<ffffffffa03ada6d>] __btrfs_abort_transaction+0x52/0x114 [btrfs]
[ 3278.741222] [<ffffffffa03b9e56>] write_one_cache_group+0xae/0xbf [btrfs]
[ 3278.742797] [<ffffffffa03c487b>] btrfs_start_dirty_block_groups+0x170/0x2b2 [btrfs]
[ 3278.744492] [<ffffffffa03d309c>] btrfs_commit_transaction+0x130/0x9c9 [btrfs]
[ 3278.746084] [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[ 3278.747249] [<ffffffffa03e5660>] btrfs_sync_file+0x313/0x387 [btrfs]
[ 3278.748744] [<ffffffff8117acad>] vfs_fsync_range+0x95/0xa4
[ 3278.749958] [<ffffffff81435b54>] ? ret_from_sys_call+0x1d/0x58
[ 3278.751218] [<ffffffff8117acd8>] vfs_fsync+0x1c/0x1e
[ 3278.754197] [<ffffffff8117ae54>] do_fsync+0x34/0x4e
[ 3278.755192] [<ffffffff8117b07c>] SyS_fsync+0x10/0x14
[ 3278.756236] [<ffffffff81435b32>] system_call_fastpath+0x12/0x17
[ 3278.757366] ---[ end trace 9a4d4df4969709aa ]---

Fixes: 1bbc621ef284 ("Btrfs: allow block group cache writeout outside critical section in commit")
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
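A sketch of the resulting error handling in the out-of-critical-section
writeout pass (illustrative; the real code lives in
btrfs_start_dirty_block_groups()):

  ret = write_one_cache_group(trans, root, path, cache);
  /*
   * Our block group may not have gone through phase 2 of its
   * creation yet (the item is inserted by some other task's
   * btrfs_end_transaction()).  Outside the commit's critical
   * section that is not fatal: leave the block group dirty and
   * retry during the critical section, where -ENOENT really is
   * an error.
   */
  if (ret == -ENOENT) {
          ret = 0;        /* retried later, in the commit critical section */
  } else if (ret) {
          btrfs_abort_transaction(trans, root, ret);
  }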
2015-05-11  Btrfs: fix panic when starting bg cache writeout after IO error  (Filipe Manana)

When waiting for the writeback of block group cache we returned
immediately if there was an error during writeback, without waiting for
the ordered extent to complete. This left a short time window where, if
some other task attempted to start the writeout for the same block
group cache, it could attempt to add a new ordered extent, starting at
the same offset (0), before the previous one was removed from the
ordered tree, causing an ordered tree panic (calls BUG()).

This normally doesn't happen in other write paths, such as buffered
writes or direct IO writes for regular files, since before marking page
ranges dirty we lock the ranges and wait for any ordered extents within
the range to complete first.

Fix this by making btrfs_wait_ordered_range() not return immediately if
it gets an error from the writeback, waiting for all ordered extents to
complete first.

This issue happened often when running the fstest btrfs/088 and it's
easy to trigger it by running the test in a loop until the panic
happens:

  for ((i = 1; i <= 10000; i++)); do ./check btrfs/088; done

[17156.862573] BTRFS critical (device sdc): panic in ordered_data_tree_panic:70: Inconsistency in ordered tree at offset 0 (errno=-17 Object already exists)
[17156.864052] ------------[ cut here ]------------
[17156.864052] kernel BUG at fs/btrfs/ordered-data.c:70!
(...)
[17156.864052] Call Trace:
[17156.864052] [<ffffffffa03876e3>] btrfs_add_ordered_extent+0x12/0x14 [btrfs]
[17156.864052] [<ffffffffa03787e2>] run_delalloc_nocow+0x5bf/0x747 [btrfs]
[17156.864052] [<ffffffffa03789ff>] run_delalloc_range+0x95/0x353 [btrfs]
[17156.864052] [<ffffffffa038b7fe>] writepage_delalloc.isra.16+0xb9/0x13f [btrfs]
[17156.864052] [<ffffffffa038d75b>] __extent_writepage+0x129/0x1f7 [btrfs]
[17156.864052] [<ffffffffa038da5a>] extent_write_cache_pages.isra.15.constprop.28+0x231/0x2f4 [btrfs]
[17156.864052] [<ffffffff810ad2af>] ? __module_text_address+0x12/0x59
[17156.864052] [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[17156.864052] [<ffffffffa038df76>] extent_writepages+0x4b/0x5c [btrfs]
[17156.864052] [<ffffffff81144431>] ? kmem_cache_free+0x9b/0xce
[17156.864052] [<ffffffffa0376a46>] ? btrfs_submit_direct+0x3fc/0x3fc [btrfs]
[17156.864052] [<ffffffffa0389cd6>] ? free_extent_state+0x8c/0xc1 [btrfs]
[17156.864052] [<ffffffffa0374871>] btrfs_writepages+0x28/0x2a [btrfs]
[17156.864052] [<ffffffff8110c4c8>] do_writepages+0x23/0x2c
[17156.864052] [<ffffffff81102f36>] __filemap_fdatawrite_range+0x5a/0x61
[17156.864052] [<ffffffff81102f6e>] filemap_fdatawrite_range+0x13/0x15
[17156.864052] [<ffffffffa0383ef7>] btrfs_fdatawrite_range+0x21/0x48 [btrfs]
[17156.864052] [<ffffffffa03ab89e>] __btrfs_write_out_cache.isra.14+0x2d9/0x3a7 [btrfs]
[17156.864052] [<ffffffffa03ac1ab>] ? btrfs_write_out_cache+0x41/0xdc [btrfs]
[17156.864052] [<ffffffffa03ac1fd>] btrfs_write_out_cache+0x93/0xdc [btrfs]
[17156.864052] [<ffffffffa0363847>] ? btrfs_start_dirty_block_groups+0x13a/0x2b2 [btrfs]
[17156.864052] [<ffffffffa03638e6>] btrfs_start_dirty_block_groups+0x1d9/0x2b2 [btrfs]
[17156.864052] [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[17156.864052] [<ffffffffa037209e>] btrfs_commit_transaction+0x130/0x9c9 [btrfs]
[17156.864052] [<ffffffffa034c748>] btrfs_sync_fs+0xe1/0x12d [btrfs]

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
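The shape of the fix in btrfs_wait_ordered_range(), sketched (condensed
and illustrative; the ordered-extent waiting loop is elided):

  int btrfs_wait_ordered_range(struct inode *inode, u64 start, u64 len)
  {
          u64 orig_end = start + len - 1;
          int ret, ret_wb = 0;

          ret = btrfs_fdatawrite_range(inode, start, orig_end);
          if (ret)
                  return ret;

          /*
           * Even when the writeback reports an error we must keep
           * waiting: returning early would leave the failed ordered
           * extent in the tree, and a second writeout attempt for the
           * same block group cache would insert a new ordered extent
           * at the same offset and hit ordered_data_tree_panic().
           * Remember the error and report it after the wait.
           */
          ret_wb = filemap_fdatawait_range(inode->i_mapping, start, orig_end);

          /* ... find, wait for, and drop every ordered extent in range ... */

          return ret_wb ? ret_wb : ret;
  }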
2015-05-11  Btrfs: fix crash after inode cache writeback failure  (Filipe Manana)

If the writeback of an inode cache failed, we were unnecessarily
attempting to release again the delalloc metadata that we previously
reserved. However, attempting to do this a second time triggers an
assertion at drop_outstanding_extent() because we have no more
outstanding extents for our inode cache's inode. If we were able to
start writeback of the cache, the reserved metadata space is released
at btrfs_finish_ordered_io(), even if an error happens during
writeback. So make sure we don't repeat the metadata space release if
writeback started for our inode cache.

This issue was trivial to reproduce by running the fstest btrfs/088
with "-o inode_cache", which triggered the assertion leading to a BUG()
call and requiring a reboot in order to run the remaining fstests.
Trace produced by btrfs/088:

[255289.385904] BTRFS: assertion failed: BTRFS_I(inode)->outstanding_extents >= num_extents, file: fs/btrfs/extent-tree.c, line: 5276
[255289.388094] ------------[ cut here ]------------
[255289.389184] kernel BUG at fs/btrfs/ctree.h:4057!
[255289.390125] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
(...)
[255289.392068] Call Trace:
[255289.392068] [<ffffffffa035e774>] drop_outstanding_extent+0x3d/0x6d [btrfs]
[255289.392068] [<ffffffffa0364988>] btrfs_delalloc_release_metadata+0x54/0xe3 [btrfs]
[255289.392068] [<ffffffffa03b4174>] btrfs_write_out_ino_cache+0x95/0xad [btrfs]
[255289.392068] [<ffffffffa036f5c4>] btrfs_save_ino_cache+0x275/0x2dc [btrfs]
[255289.392068] [<ffffffffa03e2d83>] commit_fs_roots.isra.12+0xaa/0x137 [btrfs]
[255289.392068] [<ffffffff8107d33d>] ? trace_hardirqs_on+0xd/0xf
[255289.392068] [<ffffffffa037841f>] ? btrfs_commit_transaction+0x4b1/0x9c9 [btrfs]
[255289.392068] [<ffffffff814351a4>] ? _raw_spin_unlock+0x32/0x46
[255289.392068] [<ffffffffa037842e>] btrfs_commit_transaction+0x4c0/0x9c9 [btrfs]
(...)

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
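A sketch of the guard in btrfs_write_out_ino_cache() (condensed and
illustrative - argument lists are simplified and the waiting step is
elided; the 'release_metadata' name follows the idea described above):

  int btrfs_write_out_ino_cache(struct btrfs_root *root,
                                struct btrfs_trans_handle *trans,
                                struct btrfs_path *path,
                                struct inode *inode)
  {
          struct btrfs_free_space_ctl *ctl = root->free_ino_ctl;
          bool release_metadata = true;
          int ret;

          ret = __btrfs_write_out_cache(root, inode, ctl, NULL, trans, path, 0);
          if (!ret) {
                  /*
                   * Writeback started: btrfs_finish_ordered_io() now
                   * owns the reserved metadata and releases it even if
                   * an error happens later, so don't release it again.
                   */
                  release_metadata = false;
                  /* ... wait for the cache IO to finish ... */
          }
          if (ret && release_metadata)
                  btrfs_delalloc_release_metadata(inode, inode->i_size);
          return ret;
  }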
2015-05-11  net: Add a struct net parameter to sock_create_kern  (Eric W. Biederman)

This is long overdue, and is part of cleaning up how we allocate kernel
sockets that don't reference count struct net.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
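The helper's signature after this change:

  int sock_create_kern(struct net *net, int family, int type,
                       int protocol, struct socket **res);

  /* callers that relied on the implicit namespace now say so, e.g.: */
  err = sock_create_kern(&init_net, PF_INET, SOCK_STREAM,
                         IPPROTO_TCP, &sock);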
2015-05-11  namei: store seq numbers in nd->stack[]  (Al Viro)

we'll need them for unlazy_walk()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  new helper: __legitimize_mnt()  (Al Viro)

same as legitimize_mnt(), except that it does *not* drop and regain
rcu_read_lock; return values:

  0 => grabbed a reference, we are fine
  1 => failed, just go away
 -1 => failed, go away and mntput(bastard) when outside of rcu_read_lock

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
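An illustrative caller, mapping the three return values to actions (the
-ECHILD returns are this sketch's choice, not mandated by the helper):

  /* still under rcu_read_lock() here */
  switch (__legitimize_mnt(mnt, seq)) {
  case 0:                 /* grabbed a reference; mnt is now pinned */
          break;
  case 1:                 /* failed; nothing to undo */
          return -ECHILD;
  default:                /* -1: failed; mntput() only after leaving RCU */
          rcu_read_unlock();
          mntput(mnt);
          return -ECHILD;
  }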
2015-05-11  namei: make may_follow_link() safe in RCU mode  (Al Viro)

We *can't* call that audit garbage in RCU mode - it's doing a weird mix
of allocations (GFP_NOFS, immediately followed by GFP_KERNEL) and I'm
not touching that... thing again. So if this security sclero^Whardening
feature gets triggered when we are in RCU mode, tough - we'll fail with
-ECHILD and have everything restarted in non-RCU mode. Only to hit the
same test and fail, this time with EACCES and with (oh, rapture) an
audit spew produced.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  namei: make put_link() RCU-safe  (Al Viro)

very simple - just make path_put() conditional on !RCU. Note that right
now it doesn't get called in RCU mode - we leave RCU mode before
getting anything onto the stack.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  new helper: free_page_put_link()  (Al Viro)

similar to kfree_put_link()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  switch ->put_link() from dentry to inode  (Al Viro)

only one instance looks at that argument at all; that sole exception
wants inode rather than dentry.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
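After this commit the method takes the inode; together with the helper
from the previous patch, this looks roughly like (sketch, not the
verbatim diff):

  /* inode_operations, post-change: */
  void (*put_link)(struct inode *inode, void *cookie);

  /* the page-based helper, once converted: */
  void free_page_put_link(struct inode *unused, void *cookie)
  {
          free_page((unsigned long)cookie);
  }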
2015-05-11  security: make inode_follow_link RCU-walk aware  (NeilBrown)

inode_follow_link now takes an inode and an rcu flag as well as the
dentry. The inode is used in preference to d_backing_inode(dentry),
particularly in RCU-walk mode.

selinux_inode_follow_link() gets dentry_has_perm() and inode_has_perm()
open-coded into it so that it can call avc_has_perm_flags() in a way
that is safe if LOOKUP_RCU is set.

Calling avc_has_perm_flags() with rcu_read_lock() held means that when
avc_has_perm_noaudit() calls avc_compute_av(), the attempt to
rcu_read_unlock() before calling security_compute_av() will not
actually drop the RCU read-lock. However, as security_compute_av() is
completely in a read_lock()ed region, it should be safe with the RCU
read-lock held.

Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
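A condensed sketch of the resulting selinux hook (illustrative; the
open-coded dentry_has_perm()/inode_has_perm() bodies are reduced here
to the final AVC call, and details such as credential validation are
elided):

  static int selinux_inode_follow_link(struct dentry *dentry,
                                       struct inode *inode, bool rcu)
  {
          const struct cred *cred = current_cred();
          struct common_audit_data ad;
          struct inode_security_struct *isec = inode->i_security;

          ad.type = LSM_AUDIT_DATA_DENTRY;
          ad.u.dentry = dentry;

          /* rcu-walk turns into MAY_NOT_BLOCK for the AVC lookup */
          return avc_has_perm_flags(cred_sid(cred), isec->sid, isec->sclass,
                                    FILE__READ, &ad,
                                    rcu ? MAY_NOT_BLOCK : 0);
  }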
2015-05-11  namei: pick_link() callers already have inode  (Al Viro)

no need to refetch it (and, once we move unlazy out of there, no need
to recheck ->d_seq).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  VFS: Handle lower layer dentry/inode in pathwalk  (David Howells)

Make use of d_backing_inode() in pathwalk to gain access to an inode or
dentry that's on a lower layer.

Signed-off-by: David Howells <dhowells@redhat.com>
2015-05-11  namei: store inode in nd->stack[]  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  namei: don't mangle nd->seq in lookup_fast()  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  namei: explicitly pass seq number to unlazy_walk() when dentry != NULL  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  link_path_walk: use explicit returns for failure exits  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  namei: lift terminate_walk() all the way up  (Al Viro)

Lift it from link_path_walk(), trailing_symlink(), lookup_last(),
mountpoint_last(), complete_walk() and do_last(). A _lot_ of those
suckers merge.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  namei: lift link_path_walk() call out of trailing_symlink()  (Al Viro)

Make trailing_symlink() return the pathname to traverse or
ERR_PTR(-E...). A subtle point is that for "magic" symlinks it now
returns "" - that leads to link_path_walk("", nd), which immediately
returns 0, and we are back to the treatment of the last component,
wherever the damn thing has left us.

Reduces the stack footprint - link_path_walk() is called on a shallower
stack now.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-11  namei: path_init() calling conventions change  (Al Viro)

* lift link_path_walk() into callers; moving it down into path_init()
  had been a mistake. Stack footprint, among other things...
* do _not_ call path_cleanup() after path_init() failure; on all
  failure exits out of it we have nothing for path_cleanup() to do
* have path_init() return pathname or ERR_PTR(-E...)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
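With both conventions in place, the top-level callers take roughly this
shape (sketch of a path_openat()-style user; exact parameter lists
shifted over the course of the series):

  const char *s = path_init(dfd, pathname, flags, nd);
  int error;

  if (IS_ERR(s))
          return PTR_ERR(s);      /* note: no path_cleanup() on this exit */

  while (!(error = link_path_walk(s, nd)) &&
         (error = do_last(nd, file, op, &opened, pathname)) > 0) {
          /* positive return from do_last(): trailing symlink to follow */
          s = trailing_symlink(nd);
          if (IS_ERR(s)) {
                  error = PTR_ERR(s);
                  break;
          }
  }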
2015-05-11  namei: get rid of nameidata->base  (Al Viro)

we can do fdput() under rcu_read_lock() just fine; all we need to take
care of is fetching nd->inode value first.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: split off filename_lookupat() with LOOKUP_PARENT  (Al Viro)

new functions: filename_parentat() and path_parentat() resp.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: may_follow_link() - lift terminate_walk() on failures into caller  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: take increment of nd->depth into pick_link()  (Al Viro)

Makes the situation much more regular - we avoid a strange state where
the element just after the top of the stack is used to store the struct
path of a symlink, but isn't counted in nd->depth. With that gone, the
normal failure exits, etc., work fine.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: kill nd->link  (Al Viro)

Just store it in nd->stack[nd->depth].link right in pick_link(). Now
that we make sure of stack expansion in pick_link(), we can do so...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  may_follow_link(): trim arguments  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: move bumping the refcount of link->mnt into pick_link()  (Al Viro)

update the failure cleanup in may_follow_link() to match that.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: fold put_link() into the failure case of complete_walk()  (Al Viro)

... and don't open-code unlazy_walk() in there - the only reason for
that is to avoid verification of cached nd->root, which is trivially
avoided by discarding said cached nd->root first.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: take the treatment of absolute symlinks to get_link()  (Al Viro)

rather than letting the callers handle the jump-to-root part of the
semantics, do it right in get_link() and return the rest of the body
for the caller to deal with - at that point it's treated the same way a
relative symlink would be. And return NULL when there's no "rest of the
body" - those are treated the same as a pure jump symlink would be.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
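The tail of get_link() after this change, sketched (condensed; based on
how the code reads once the series lands):

  /* inside get_link(), after ->follow_link() has given us the body */
  if (*res == '/') {
          if (!nd->root.mnt)
                  set_root(nd);
          path_put(&nd->path);
          nd->path = nd->root;      /* jump to root, as for an absolute path */
          path_get(&nd->root);
          nd->flags |= LOOKUP_JUMPED;
          while (unlikely(*++res == '/'))
                  ;
  }
  if (!*res)
          res = NULL;               /* nothing left: a pure jump symlink */
  return res;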
2015-05-10  namei: simpler treatment of symlinks with nothing other than / in the body  (Al Viro)

Instead of saving the name and branching to OK:, where we'll
immediately restore it, and calling walk_component() with
WALK_PUT|WALK_GET and nd->last_type being LAST_BIND - which is
equivalent to put_link(nd), err = 0 - we can just treat that the same
way we'd treat procfs-style "jump" symlinks: do put_link(nd) and move
on.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: simplify failure exits in get_link()  (Al Viro)

when cookie is NULL, put_link() is equivalent to path_put(), so as soon
as we'd set last->cookie to NULL, we can bump nd->depth and let the
normal logic in terminate_walk() take care of the cleanups.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  don't pass nameidata to ->follow_link()  (Al Viro)

its only use is getting passed to nd_jump_link(), which can obtain it
from current->nameidata

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: simplify the callers of follow_managed()  (Al Viro)

now that it gets nameidata, no reason to have setting LOOKUP_JUMPED on
mountpoint crossing and calling path_put_conditional() on failures done
in every caller.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  VFS: replace {, total_}link_count in task_struct with pointer to nameidata  (NeilBrown)

task_struct currently contains two ad-hoc members for use by the VFS:
link_count and total_link_count. These are only interesting to
fs/namei.c, so exposing them explicitly is poor layering. Incidentally,
link_count isn't used anymore, so it can just die.

This patch replaces those with a single pointer to 'struct nameidata'.
This structure represents the current filename lookup, of which there
can only be one per process, and is a natural place to store
total_link_count.

This will allow the current "nameidata" argument to all follow_link
operations to be removed, as current->nameidata can be used instead in
the _very_ few instances that care about it at all.

As there are occasional circumstances where pathname lookup can
recurse, such as through kern_path_locked, we always save the old
current->nameidata (if there is one) when setting a new value, and make
sure any active link_counts are preserved.

follow_mount and follow_automount now get a 'struct nameidata *' rather
than 'int flags' so that they can directly access total_link_count,
rather than going through 'current'.

Suggested-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
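The save/restore described above, sketched roughly as it landed in
fs/namei.c (condensed; the handling of the heap-allocated symlink stack
is elided):

  static void set_nameidata(struct nameidata *p)
  {
          struct nameidata *old = current->nameidata;

          p->saved = old;                 /* outer lookup, if we recursed */
          p->total_link_count = old ? old->total_link_count : 0;
          current->nameidata = p;
  }

  static void restore_nameidata(struct nameidata *nd)
  {
          struct nameidata *old = nd->saved;

          current->nameidata = old;
          if (old)                        /* propagate the link count back */
                  old->total_link_count = nd->total_link_count;
  }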
2015-05-10  namei: move link count check and stack allocation into pick_link()  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: make should_follow_link() store the link in nd->link  (Al Viro)

... if it decides to follow, that is.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: new calling conventions for walk_component()  (Al Viro)

instead of a single flag (!= 0 => we want to follow symlinks) pass two
bits - WALK_GET (want to follow symlinks) and WALK_PUT (put_link() once
we are done looking at the name). The latter matters only for success
exits - on failure the caller will discard everything anyway.

Suggestions for a better variant are welcome; what this thing aims for
is making sure that a pending put_link() is done *before*
walk_component() decides to pick a symlink up, rather than between
picking it up and acting upon it. See the next commit for the payoff.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
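The two bits, as described (a sketch; the caller shown is illustrative):

  #define WALK_GET 1   /* pick symlinks up for traversal */
  #define WALK_PUT 2   /* put_link() the finished parent link on success */

  /* e.g. a middle component while a parent symlink is still pending: */
  err = walk_component(nd, WALK_GET | WALK_PUT);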
2015-05-10  link_path_walk: move the OK: inside the loop  (Al Viro)

fewer labels that way; in particular, resuming after the end of a
nested symlink is straight-line.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: have terminate_walk() do put_link() on everything left  (Al Viro)

All callers of terminate_walk() are followed by a more or less
open-coded equivalent of "do put_link() on everything left in
nd->stack". Better done in terminate_walk() itself, and when we go for
RCU symlink traversal we'll have to do it there anyway.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: take put_link() into {lookup,mountpoint,do}_last()  (Al Viro)

rationale: we'll need to have terminate_walk() do put_link() on
everything, which will mean that in some cases ..._last() will do
put_link() anyway. Easier to have them do it in all cases.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: lift (open-coded) terminate_walk() into callers of get_link()  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  lift terminate_walk() into callers of walk_component()  (Al Viro)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: lift (open-coded) terminate_walk() in follow_dotdot_rcu() into callers  (Al Viro)

follow_dotdot_rcu() does an equivalent of terminate_walk() on failure;
shifting it into callers makes for simpler rules and those callers
already have terminate_walk() on other failure exits.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  namei: we never need more than MAXSYMLINKS entries in nd->stack  (Al Viro)

The only reason why we needed one more was that purely nested
MAXSYMLINKS symlinks could lead to path_init() using that many entries
in addition to nd->stack[0], which it left unused. That can't happen
now - path_init() starts with entry 0 (and trailing_symlink() is called
only when we'd already encountered one symlink, so no more than
MAXSYMLINKS-1 are left).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  link_path_walk: end of nd->depth massage  (Al Viro)

get rid of orig_depth - we only use it on error exit to tell whether to
stop doing put_link() when depth reaches 0 (call from path_init()) or
when it reaches 1 (call from trailing_symlink()). However, in the
latter case the caller would immediately follow with one more
put_link(). Just keep doing it until the depth reaches zero (and
simplify trailing_symlink() as the result).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  link_path_walk: nd->depth massage, part 10  (Al Viro)

Get rid of the orig_depth checks in the OK: logic. If nd->depth is
zero, we had been called from path_init() and we are done. If it is
greater than 1, we are not done, whether we'd been called from
path_init() or trailing_symlink(). And in case it's 1, we might have
been called from path_init() and reached the end of a nested symlink
(in which case nd->stack[0].name will point to the rest of the pathname
and we are not done), or from trailing_symlink(), in which case we are
done. Just have trailing_symlink() leave NULL in nd->stack[0].name and
use that to discriminate between those cases.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  link_path_walk: nd->depth massage, part 9  (Al Viro)

Make link_path_walk() work with any value of nd->depth on entry -
memorize it and use it in tests instead of comparing with 1. Don't
bother with increment/decrement in path_init().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-05-10  put_link: nd->depth massage, part 8  (Al Viro)

all calls are preceded by decrement of nd->depth; move it into
put_link() itself.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
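After this change put_link() itself owns the decrement; a condensed
sketch (the ->put_link() call and its cookie handling are elided, since
their exact shape changes later in the series):

  static void put_link(struct nameidata *nd)
  {
          /* the decrement every caller used to do now lives here */
          struct saved *last = nd->stack + --nd->depth;

          /* ... hand the cookie back via ->put_link(), then: */
          path_put(&last->link);
  }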