path: root/fs
2020-03-14  ext4: optimize ext4_ext_precache for 0 depth  (Ritesh Harjani)
This patch avoids the memory alloc & free path when depth is 0, since there is no extra caching done in that case anyway. So on seeing depth 0, simply return early.
Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/93da0d0f073c73358e85bb9849d8a5378d1da539.1582880246.git.riteshh@linux.ibm.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
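A minimal sketch of the resulting shape (simplified from fs/ext4/extents.c; the real function also walks the tree and caches each leaf for depth > 0):

    /* Sketch only.  With depth 0 every extent already sits in the
     * inode body (i_data), so there is nothing extra to cache -
     * return before allocating the path array at all. */
    static int ext4_ext_precache_sketch(struct inode *inode)
    {
            struct ext4_inode_info *ei = EXT4_I(inode);
            int depth;

            if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
                    return 0;       /* not an extent-mapped inode */

            down_read(&ei->i_data_sem);
            depth = ext_depth(inode);

            /* the optimization: skip the kcalloc()/kfree() of the path */
            if (!depth) {
                    up_read(&ei->i_data_sem);
                    return 0;
            }

            /* depth > 0: allocate the path array and walk as before */
            up_read(&ei->i_data_sem);
            return 0;
    }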
2020-03-14  ext4: add IOMAP_F_MERGED for non-extent based mapping  (Ritesh Harjani)
IOMAP_F_MERGED needs to be set in case of non-extent based mapping. This is needed in later patches for conversion of ext4_fiemap to use iomap.
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com>
Link: https://lore.kernel.org/r/a4764c91c08c16d4d4a4b36defb2a08625b0e9b3.1582880246.git.riteshh@linux.ibm.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
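In sketch form (the helper name and argument list are approximated from the surrounding iomap-conversion series, not the literal diff):

    /* Sketch: a block-mapped (non-extent) inode may have had several
     * on-disk chunks merged into one mapping by ext4_map_blocks(),
     * so the resulting iomap must carry IOMAP_F_MERGED. */
    static void ext4_set_iomap_sketch(struct inode *inode, struct iomap *iomap,
                                      struct ext4_map_blocks *map)
    {
            iomap->flags = 0;
            if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
                    iomap->flags |= IOMAP_F_MERGED;
            /* addr/offset/length are filled in from *map below */
    }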
2020-03-13  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Daniel Borkmann says:

====================
pull-request: bpf-next 2020-03-13

The following pull-request contains BPF updates for your *net-next* tree.

We've added 86 non-merge commits during the last 12 day(s) which contain a total of 107 files changed, 5771 insertions(+), 1700 deletions(-).

The main changes are:

1) Add modify_return attach type which allows to attach to a function via BPF trampoline and is run after the fentry and before the fexit programs and can pass a return code to the original caller, from KP Singh.

2) Generalize BPF's kallsyms handling and add BPF trampoline and dispatcher objects to be visible in /proc/kallsyms so they can be annotated in stack traces, from Jiri Olsa.

3) Extend BPF sockmap to allow for UDP next to existing TCP support in order to enable this for BPF based socket dispatch, from Lorenz Bauer.

4) Introduce a new bpftool 'prog profile' command which attaches to existing BPF programs via fentry and fexit hooks and reads out hardware counters during that period, from Song Liu. Example usage:

    bpftool prog profile id 337 duration 3 cycles instructions llc_misses

         4228 run_cnt
      3403698 cycles                 (84.08%)
      3525294 instructions   # 1.04 insn per cycle   (84.05%)
           13 llc_misses     # 3.69 LLC misses per million isns (83.50%)

5) Batch of improvements to libbpf, bpftool and BPF selftests. Also addition of a new bpf_link abstraction to keep in particular BPF tracing programs attached even when the application owning them exits, from Andrii Nakryiko.

6) New bpf_get_ns_current_pid_tgid() helper for tracing to perform PID filtering and which returns the PID as seen by the init namespace, from Carlos Neira.

7) Refactor of RISC-V JIT code to move out common pieces and addition of a new RV32G BPF JIT compiler, from Luke Nelson.

8) Add gso_size context member to __sk_buff in order to be able to know whether a given skb is GSO or not, from Willem de Bruijn.

9) Add a new bpf_xdp_output() helper which reuses XDP's existing perf RB output implementation but can be called from tracepoint programs, from Eelco Chaudron.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-13  follow_dotdot{,_rcu}(): switch to use of step_into()  (Al Viro)
gets the regular mount crossing on result of ..
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  handle_dots(), follow_dotdot{,_rcu}(): preparation to switch to step_into()  (Al Viro)
Right now the tail ends of follow_dotdot{,_rcu}() are pretty much the open-coded analogues of step_into(). The differences:
    * the lack of proper LOOKUP_NO_XDEV handling in non-RCU case (arguably a bug)
    * the lack of ->d_manage() handling (again, arguably a bug)
Adjust the calling conventions so that on the next step we could just switch those functions to returning step_into().
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  move handle_dots(), follow_dotdot() and follow_dotdot_rcu() past step_into()  (Al Viro)
pure move; we are going to have step_into() called by that bunch.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  follow_dotdot{,_rcu}(): lift LOOKUP_BENEATH checks out of loop  (Al Viro)
Behaviour change: LOOKUP_BENEATH lookup of .. in absolute root yields an error even if it's not the process' root. That's possible only if you'd managed to escape chroot jail by way of procfs symlinks, but IMO the resulting behaviour is not worse - more consistent and easier to describe: ".." in root is "stay where you are", unless LOOKUP_BENEATH has been given, in which case it's "fail with EXDEV".
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  follow_dotdot{,_rcu}(): lift switching nd->path to parent out of loop  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  expand path_parent_directory() in its callers  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  path_parent_directory(): leave changing path->dentry to callers  (Al Viro)
Instead of returning 0, return new dentry; instead of returning -ENOENT, return NULL. Adjust the callers accordingly.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
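Sketched, the new convention looks like this (a simplified, hypothetical body; the real helper cooperates with the path_connected() check touched in the next patch):

    /* Sketch: hand the parent back instead of updating path->dentry
     * in place; NULL now plays the role the -ENOENT return used to. */
    static struct dentry *path_parent_directory_sketch(const struct path *path)
    {
            struct dentry *parent = dget_parent(path->dentry);

            if (unlikely(!path_connected(path->mnt, parent))) {
                    dput(parent);
                    return NULL;            /* was: return -ENOENT */
            }
            return parent;  /* caller swaps it into nd->path.dentry */
    }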
2020-03-13  path_connected(): pass mount and dentry separately  (Al Viro)
eventually we'll want to do that check *before* mangling nd->path.dentry...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  split the lookup-related parts of do_last() into a separate helper  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): rejoin the common path even earlier in FMODE_{OPENED,CREATED} case  (Al Viro)
... getting may_create_in_sticky() checks in FMODE_OPENED case as well.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): simplify the liveness analysis past finish_open_created  (Al Viro)
Don't mess with got_write there - it is guaranteed to be false on entry and it will be set true if and only if we decide to go for truncation and manage to get write access for that. Don't carry acc_mode through the entire thing - it's only used in that part. And don't bother with gotos in there - compiler is quite capable of optimizing that.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): rejoin the common path earlier in FMODE_{OPENED,CREATED} case  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): don't bother with keeping got_write in FMODE_OPENED case  (Al Viro)
it's easier to drop it right after lookup_open() and regain if needed (i.e. if we will need to truncate). On the non-FMODE_OPENED path we do that anyway. In case of FMODE_CREATED we won't be needing it. And it's easier to prove correctness that way, especially since the initial failure to get write access is not always fatal; proving that we'll never end up truncating in that case is rather convoluted.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  do_last(): merge the may_open() calls  (Al Viro)
have FMODE_OPENED case rejoin the main path at an earlier point
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  atomic_open(): lift the call of may_open() into do_last()  (Al Viro)
there we'll be able to merge it with its counterparts in other cases, and there's no reason to do it before the parent has been unlocked
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  atomic_open(): return the right dentry in FMODE_OPENED case  (Al Viro)
->atomic_open() might have used a different alias than the one we'd passed to it; in "not opened" case we take care of that, in "opened" one we don't. Currently we don't care downstream of "opened" case which alias to return; however, that will change shortly when we get to unifying may_open() calls. It's not hard to get right in all cases, anyway.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  new helper: traverse_mounts()  (Al Viro)
common guts of follow_down() and follow_managed() taken to a new helper - traverse_mounts(). The remnants of follow_managed() are folded into its sole remaining caller (handle_mounts()). Calling conventions of handle_mounts() slightly sanitized - instead of the weird "1 for success, -E... for failure" that used to be imposed by the calling conventions of walk_component() et al. we can use the normal "0 for success, -E... for failure".
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
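The sanitized shape, sketched with simplified signatures (the real functions carry extra arguments for flags, seq and RCU state):

    /* traverse_mounts(): the common guts - loop while the dentry is
     * DCACHE_MANAGED_DENTRY, crossing mountpoints and calling
     * ->d_manage()/->d_automount() as needed (body elided) */
    static int traverse_mounts(struct path *path, bool *jumped);

    static int handle_mounts(struct nameidata *nd, struct dentry *dentry,
                             struct path *path)
    {
            bool jumped;
            int err;

            path->mnt = nd->path.mnt;
            path->dentry = dentry;
            err = traverse_mounts(path, &jumped);
            if (jumped && unlikely(nd->flags & LOOKUP_NO_XDEV))
                    err = -EXDEV;
            return err;     /* 0 for success, -E... for failure */
    }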
2020-03-13  massage __follow_mount_rcu() a bit  (Al Viro)
make the loop more similar to that in follow_managed(), with explicit tracking of flags, etc.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  namei: have link_path_walk() maintain LOOKUP_PARENT  (Al Viro)
set on entry, clear when we get to the last component.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  link_path_walk(): simplify stack handling  (Al Viro)
We use nd->stack to store two things: pinning down the symlinks we are resolving and resuming the name traversal when a nested symlink is finished.

Currently, nd->depth is used to keep track of both. It's 0 when we call link_path_walk() for the first time (for the pathname itself) and 1 on all subsequent calls (for trailing symlinks, if any). That's fine, as far as pinning symlinks goes - when handling a trailing symlink, the string we are interpreting is the body of a symlink pinned down in nd->stack[0]. It's rather inconvenient with respect to handling nested symlinks, though - when we run out of a string we are currently interpreting, we need to decide whether it's a nested symlink (in which case we need to pick the string saved back when we started to interpret that nested symlink and resume its traversal) or not (in which case we are done with link_path_walk()).

The current solution is a bit of a kludge - in the handling of a trailing symlink (in lookup_last() and open_last_lookups()) we clear nd->stack[0].name. That allows link_path_walk() to use the following rules when running out of a string to interpret:
    * if nd->depth is zero, we are at the end of the pathname itself.
    * if nd->depth is positive, check the saved string; for a nested symlink it will be non-NULL, for a trailing symlink - NULL.

It works, but it's rather non-obvious. Note that we have two sets: the set of symlinks currently being traversed and the set of postponed pathname tails. The former is stored in nd->stack[0..nd->depth-1].link and it's valid throughout the pathname resolution; the latter is valid only during an individual call of link_path_walk() and it occupies nd->stack[0..nd->depth-1].name for the first call of link_path_walk() and nd->stack[1..nd->depth-1].name for subsequent ones. The kludge is basically a way to recognize the second set becoming empty.

Things get simpler if we keep track of the second set's size explicitly and always store it in nd->stack[0..depth-1].name. We access the second set only inside link_path_walk(), so its size can live in a local variable; that way the check becomes trivial without the need of that kludge.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
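A sketch of the simplification (component parsing elided; the point is only where the tail count lives):

    /* Sketch: the size of the "postponed tails" set lives in a local
     * variable, and the tails always occupy nd->stack[0..depth-1].name. */
    static int link_path_walk_sketch(const char *name, struct nameidata *nd)
    {
            int depth = 0;          /* number of saved pathname tails */

            for (;;) {
                    /* ... parse components of 'name'; on a nested
                     * symlink: nd->stack[depth++].name = name; and
                     * 'name' becomes the link body.  When the current
                     * string runs out: */
                    if (!depth)
                            return 0;       /* end of the pathname itself */
                    name = nd->stack[--depth].name; /* resume outer tail */
            }
    }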
2020-03-13  pick_link(): check for WALK_TRAILING, not LOOKUP_PARENT  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  namei: invert the meaning of WALK_FOLLOW  (Al Viro)
old flags & WALK_FOLLOW <=> new !(flags & WALK_TRAILING)
That's what that flag had really been used for.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  sanitize handling of nd->last_type, kill LAST_BIND  (Al Viro)
->last_type values are set in 3 places: path_init() (sets to LAST_ROOT), link_path_walk() (LAST_NORM/DOT/DOTDOT) and pick_link() (LAST_BIND). They are checked in walk_component(), lookup_last() and do_last(). They also get copied to the caller by filename_parentat(). In the last 3 cases the value is what we had at the return from link_path_walk(). In case of walk_component() it's either directly downstream from assignment in link_path_walk() or, when called by lookup_last(), the value we have at the return from link_path_walk().

The value at the entry into link_path_walk() can survive to return only if the pathname contains nothing but slashes. Note that pick_link() never returns such - pure jumps are handled directly. So for the calls of link_path_walk() for trailing symlinks it does not matter what value had been there at the entry; the value at the return won't depend upon it.

There are 3 call chains that might have pick_link() storing LAST_BIND:
    1) pick_link() from step_into() from walk_component() from link_path_walk(). In that case we will either be parsing the next component immediately after return into link_path_walk(), which will overwrite the ->last_type before anyone has a chance to look at it, or we'll fail, in which case nobody will be looking at ->last_type at all.
    2) pick_link() from step_into() from walk_component() from lookup_last(). The value is never looked at due to the above; it won't affect the value seen at return from any link_path_walk().
    3) pick_link() from step_into() from do_last(). Ditto.

In other words, the assignment in pick_link() is pointless, and so is LAST_BIND itself; nothing ever looks at that value. Kill it off. And make link_path_walk() _always_ assign ->last_type - in the only case when the value at the entry might survive to the return that value is always LAST_ROOT, inherited from path_init(). Move that assignment from path_init() into the beginning of link_path_walk(), to consolidate things.

Historical note: LAST_BIND used to be used for the kludge with trailing pure jump symlinks (extra iteration through the top-level loop). No point keeping it anymore...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
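For reference, the ->last_type value set (from include/linux/namei.h); the change deletes the final member:

    /* before: */
    enum { LAST_NORM, LAST_ROOT, LAST_DOT, LAST_DOTDOT, LAST_BIND };
    /* after: */
    enum { LAST_NORM, LAST_ROOT, LAST_DOT, LAST_DOTDOT };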
2020-03-13  finally fold get_link() into pick_link()  (Al Viro)
kill nd->link_inode, while we are at it
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 6  (Al Viro)
move the only remaining call of get_link() into pick_link()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 5  (Al Viro)
move get_link() call into step_into().
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 4  (Al Viro)
Move the call of get_link() into walk_component(). Change the calling conventions for walk_component() to returning the link body to follow (if any).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 3  (Al Viro)
After a pure jump ("/" or procfs-style symlink) we don't need to hold the link anymore. link_path_walk() dropped it if such case had been detected; lookup_last/do_last() (i.e. old trailing_symlink()) left it on the stack - it ended up calling terminate_walk() shortly anyway, which would've purged the entire stack. Do it in get_link() itself instead. Simpler logic that way...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  merging pick_link() with get_link(), part 2  (Al Viro)
Fold trailing_symlink() into lookup_last() and do_last(), change the calling conventions of those two. Rules change:
    success, we are done => NULL instead of 0
    error => ERR_PTR(-E...) instead of -E...
    got a symlink to follow => return the path to be followed instead of 1
The loops calling those (in path_lookupat() and path_openat()) adjusted.

A subtle change of control flow here: originally a pure-jump trailing symlink ("/" or procfs one) would've passed through the upper level loop once more, with "" for path to traverse. That would've brought us back to the lookup_last/do_last entry and we would've hit LAST_BIND case (LAST_BIND left from get_link() called by trailing_symlink()) and pretty much skip to the point right after where we'd left the sucker back when we picked that trailing symlink. Now we don't bother with that extra pass through the upper level loop - if get_link() says "I've just done a pure jump, nothing else to do", we just treat that as non-symlink case.

Boilerplate added on that step will go away shortly - it'll migrate into walk_component() and then to step_into(), collapsing into the change of calling conventions for those.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
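The caller side of the new convention, sketched (shape only; the real loops in path_lookupat()/path_openat() carry more state):

    /* NULL => done; ERR_PTR(-E...) => error; anything else is a
     * symlink body that still needs to be walked */
    int error = 0;
    const char *link = do_last(nd, file, op);       /* or lookup_last(nd) */

    while (!IS_ERR_OR_NULL(link)) {
            int err = link_path_walk(link, nd);
            link = err ? ERR_PTR(err) : do_last(nd, file, op);
    }
    if (IS_ERR(link))
            error = PTR_ERR(link);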
2020-03-13  merging pick_link() with get_link(), part 1  (Al Viro)
Move restoring LOOKUP_PARENT and zeroing nd->stack.name[0] past the call of get_link() (nothing _currently_ uses them in there). That allows moving the call of may_follow_link() into get_link() as well, since now the presence of LOOKUP_PARENT distinguishes the callers from each other (link_path_walk() has it, trailing_symlink() doesn't).

Preparations for folding trailing_symlink() into callers (lookup_last() and do_last()) and changing the calling conventions of those. The next stage after that will have the get_link() call migrate into walk_component(), then - into step_into(). It's tricky enough to warrant doing that in stages, unfortunately...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  expand the only remaining call of path_lookup_conditional()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  LOOKUP_MOUNTPOINT: fold path_mountpointat() into path_lookupat()  (Al Viro)
New LOOKUP flag, telling path_lookupat() to act as path_mountpointat(). IOW, traverse mounts at the final point and skip revalidation of the location where it ends up.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
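The gist of the fold, sketched (an approximation of the new tail of path_lookupat(); treat the exact lines as illustrative):

    if (!err && (flags & LOOKUP_MOUNTPOINT)) {
            err = handle_lookup_down(nd);   /* traverse mounts at the end */
            nd->flags &= ~LOOKUP_JUMPED;    /* skip d_weak_revalidate() */
    }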
2020-03-13  fold handle_mounts() into step_into()  (Al Viro)
The following is true:
    * calls of handle_mounts() and step_into() are always paired in sequences like

        err = handle_mounts(nd, dentry, &path, &inode, &seq);
        if (unlikely(err < 0))
                return err;
        err = step_into(nd, &path, flags, inode, seq);

    * in all such sequences path is uninitialized before and unused after this pair of calls
    * in all such sequences inode and seq are unused afterwards.

So the call of handle_mounts() can be shifted inside step_into(), turning 'path' into a local variable in the combined function.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
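The combined function, sketched with a simplified argument list:

    /* Sketch: step_into() now does the mount traversal itself and
     * 'path' becomes a local variable. */
    static int step_into(struct nameidata *nd, int flags,
                         struct dentry *dentry, struct inode *inode,
                         unsigned seq)
    {
            struct path path;       /* previously supplied by every caller */
            int err = handle_mounts(nd, dentry, &path, &inode, &seq);

            if (err < 0)
                    return err;
            /* ... former step_into() body, operating on the local path ... */
            return 0;
    }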
2020-03-13  new step_into() flag: WALK_NOFOLLOW  (Al Viro)
Tells step_into() not to follow symlinks, regardless of LOOKUP_FOLLOW. Allows to switch handle_lookup_down() to use of step_into(), getting all follow_managed() and step_into() calls paired.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
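Sketched (refcounting details elided; argument list approximated):

    /* handle_lookup_down() expressed through step_into(): same dentry,
     * mounts traversed, symlinks deliberately not followed */
    static int handle_lookup_down(struct nameidata *nd)
    {
            return step_into(nd, WALK_NOFOLLOW,
                             nd->path.dentry, nd->inode, nd->seq);
    }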
2020-03-13  step_into() callers: dismiss the symlink earlier  (Al Viro)
We need to dismiss a symlink when we are done traversing it; currently that's done when we call step_into() for its last component. For the cases when we do not call step_into() for that component (i.e. when it's . or ..) we do the same symlink dismissal after the call of handle_dots().

What we need to guarantee is that the symlink won't be dismissed while we are still using nd->last.name - it's pointing into the body of said symlink. step_into() is sufficiently late - by the time it's called we'd already obtained the dentry, so the name we'd been looking up is no longer needed. However, it turns out to be cleaner to have that ("we are done with that component now, can dismiss the link") done explicitly - in the callers of step_into(). In handle_dots() case we won't be using the component string at all, so for . and .. the corresponding point is actually _before_ the call of handle_dots(), not after it.

Fix a minor irregularity in do_last(), while we are at it - if a trailing symlink ended with . or .. we forgot to dismiss it. Not a problem, since nameidata is about to be done with (neither . nor .. can be a trailing symlink, so this is the last iteration through the loop) and terminate_walk() will clean the stack anyway, but let's keep it more regular.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  lookup_fast(): take mount traversal into callers  (Al Viro)
Current calling conventions: -E... on error, 0 on cache miss, result of handle_mounts(nd, dentry, path, inode, seqp) on success. Turn that into returning ERR_PTR(-E...), NULL and dentry resp.; deal with handle_mounts() in the callers. The thing is, they already do that in cache miss handling case, so we just need to supply dentry to them and unify the mount traversal in those cases. Fewer arguments that way, and we get closer to merging handle_mounts() and step_into().
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  teach handle_mounts() to handle RCU mode  (Al Viro)
... and make the callers of __follow_mount_rcu() use handle_mounts().
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  lookup_fast(): consolidate the RCU success case  (Al Viro)
1) in case of __follow_mount_rcu() failure, lookup_fast() proceeds to call unlazy_child() and, should it succeed, handle_mounts(). Note that we have status > 0 (or we wouldn't be calling __follow_mount_rcu() at all), so all stuff conditional upon non-positive status won't be even touched. Consolidate just that sequence after the call of __follow_mount_rcu().

2) calling d_is_negative() and keeping its result is pointless - we either don't get past checking ->d_seq (and don't use the results of d_is_negative() at all), or we are guaranteed that ->d_inode and type bits of ->d_flags had been consistent at the time of d_is_negative() call. IOW, we could only get to the use of its result if it's equal to !inode. The same ->d_seq check guarantees that after that point this CPU won't observe ->d_flags values older than ->d_inode update. So the 'negative' variable is completely pointless these days.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-03-13  afs: Fix client call Rx-phase signal handling  (David Howells)
Fix the handling of signals in client rxrpc calls made by the afs filesystem. Ignore signals completely, leaving call abandonment or connection loss to be detected by timeouts inside AF_RXRPC.

Allowing a filesystem call to be interrupted after the entire request has been transmitted and an abort sent means that the server may or may not have done the action - and we don't know. It may even be worse than that for older servers.

Fixes: bc5e3a546d55 ("rxrpc: Use MSG_WAITALL to tell sendmsg() to temporarily ignore signals")
Signed-off-by: David Howells <dhowells@redhat.com>
2020-03-13  afs: Fix handling of an abort from a service handler  (David Howells)
When an AFS service handler function aborts a call, AF_RXRPC marks the call as complete - which means that it's not going to get any more packets from the receiver. This is a problem because reception of the final ACK is what triggers afs_deliver_to_call() to drop the final ref on the afs_call object. Instead, aborted AFS service calls may then just sit around waiting for ever or until they're displaced by a new call on the same connection channel or a connection-level abort.

Fix this by calling afs_set_call_complete() to finalise the afs_call struct representing the call. However, we then need to drop the ref that stops the call from being deallocated. We can do this in afs_set_call_complete(), as the work queue is holding a separate ref of its own, but then we shouldn't do it in afs_process_async_call() and afs_delete_async_call(). call->drop_ref is set to indicate that a ref needs dropping for a call and this is dealt with when we transition a call to AFS_CALL_COMPLETE.

But then we also need to get rid of the ref that pins an asynchronous client call. We can do this by the same mechanism, setting call->drop_ref for an async client call too. We can also get rid of call->incoming since nothing ever sets it and only one thing ever checks it (futilely).

A trace of the rxrpc_call and afs_call struct ref counting looks like:

    <idle>-0    [001] ..s5  164.764892: rxrpc_call: c=00000002 SEE u=3 sp=rxrpc_new_incoming_call+0x473/0xb34 a=00000000442095b5
    <idle>-0    [001] .Ns5  164.766001: rxrpc_call: c=00000002 QUE u=4 sp=rxrpc_propose_ACK+0xbe/0x551 a=00000000442095b5
    <idle>-0    [001] .Ns4  164.766005: rxrpc_call: c=00000002 PUT u=3 sp=rxrpc_new_incoming_call+0xa3f/0xb34 a=00000000442095b5
    <idle>-0    [001] .Ns7  164.766433: afs_call: c=00000002 WAKE u=2 o=11 sp=rxrpc_notify_socket+0x196/0x33c
    kworker/1:2-1810 [001] ...1  164.768409: rxrpc_call: c=00000002 SEE u=3 sp=rxrpc_process_call+0x25/0x7ae a=00000000442095b5
    kworker/1:2-1810 [001] ...1  164.769439: rxrpc_tx_packet: c=00000002 e9f1a7a8:95786a88:00000008:09c5 00000001 00000000 02 22 ACK CallAck
    kworker/1:2-1810 [001] ...1  164.769459: rxrpc_call: c=00000002 PUT u=2 sp=rxrpc_process_call+0x74f/0x7ae a=00000000442095b5
    kworker/1:2-1810 [001] ...1  164.770794: afs_call: c=00000002 QUEUE u=3 o=12 sp=afs_deliver_to_call+0x449/0x72c
    kworker/1:2-1810 [001] ...1  164.770829: afs_call: c=00000002 PUT u=2 o=12 sp=afs_process_async_call+0xdb/0x11e
    kworker/1:2-1810 [001] ...2  164.771084: rxrpc_abort: c=00000002 95786a88:00000008 s=0 a=1 e=1 K-1
    kworker/1:2-1810 [001] ...1  164.771461: rxrpc_tx_packet: c=00000002 e9f1a7a8:95786a88:00000008:09c5 00000002 00000000 04 00 ABORT CallAbort
    kworker/1:2-1810 [001] ...1  164.771466: afs_call: c=00000002 PUT u=1 o=12 sp=SRXAFSCB_ProbeUuid+0xc1/0x106

The abort generated in SRXAFSCB_ProbeUuid(), labelled "K-1", indicates that the local filesystem/cache manager didn't recognise the UUID as its own.

Fixes: 2067b2b3f484 ("afs: Fix the CB.ProbeUuid service handler to reply correctly")
Signed-off-by: David Howells <dhowells@redhat.com>
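A simplified sketch of the finalisation helper after this fix (approximated from fs/afs/internal.h; tracing trimmed):

    static inline void afs_set_call_complete(struct afs_call *call,
                                             int error, u32 remote_abort)
    {
            bool completed = false;

            spin_lock_bh(&call->state_lock);
            if (call->state != AFS_CALL_COMPLETE) {
                    call->error = error;
                    call->abort_code = remote_abort;
                    call->state = AFS_CALL_COMPLETE;
                    completed = true;
            }
            spin_unlock_bh(&call->state_lock);

            /* The work queue holds its own ref, so dropping the pinned
             * ref here cannot free the call under a queued work item. */
            if (completed && call->drop_ref)
                    afs_put_call(call);
    }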
2020-03-13  afs: Fix some tracing details  (David Howells)
Fix a couple of tracelines to indicate the usage count after the atomic op, not the usage count before it, to be consistent with other afs and rxrpc trace lines.

Change the wording of the afs_call_trace_work trace ID label from "WORK" to "QUEUE" to reflect the fact that it's queueing work, not doing work.

Fixes: 341f741f04be ("afs: Refcount the afs_call struct")
Signed-off-by: David Howells <dhowells@redhat.com>
2020-03-13  rxrpc: Fix call interruptibility handling  (David Howells)
Fix the interruptibility of kernel-initiated client calls so that they're either only interruptible when they're waiting for a call slot to come available or they're not interruptible at all. Either way, they're not interruptible during transmission.

This should help prevent StoreData calls from being interrupted when writeback is in progress. It doesn't, however, handle interruption during the receive phase.

Userspace-initiated calls are still interruptible. After the signal has been handled, sendmsg() will return the amount of data copied out of the buffer and userspace can perform another sendmsg() call to continue transmission.

Fixes: bc5e3a546d55 ("rxrpc: Use MSG_WAITALL to tell sendmsg() to temporarily ignore signals")
Signed-off-by: David Howells <dhowells@redhat.com>
2020-03-13  Merge tag 'nfs-for-5.6-3' of git://git.linux-nfs.org/projects/anna/linux-nfs  (Linus Torvalds)
Pull NFS client bugfixes from Anna Schumaker:
 "These are mostly fscontext fixes, but there is also one that fixes collisions seen in fscache:

  - Ensure the fs_context has the correct fs_type when mounting and submounting

  - Fix leaking of ctx->nfs_server.hostname

  - Add minor version to fscache key to prevent collisions"

* tag 'nfs-for-5.6-3' of git://git.linux-nfs.org/projects/anna/linux-nfs:
  nfs: add minor version to nfs_server_key for fscache
  NFS: Fix leak of ctx->nfs_server.hostname
  NFS: Don't hard-code the fs_type when submounting
  NFS: Ensure the fs_context has the correct fs_type before mounting
2020-03-13  Merge tag 'fuse-fixes-5.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse  (Linus Torvalds)
Pull fuse fix from Miklos Szeredi:
 "Fix an Oops introduced in v5.4"

* tag 'fuse-fixes-5.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
  fuse: fix stack use after return
2020-03-13  Merge tag 'ovl-fixes-5.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs  (Linus Torvalds)
Pull overlayfs fixes from Miklos Szeredi:
 "Fix three bugs introduced in this cycle"

* tag 'ovl-fixes-5.6-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs:
  ovl: fix lockdep warning for async write
  ovl: fix some xino configurations
  ovl: fix lock in ovl_llseek()
2020-03-13  btrfs: fix log context list corruption after rename whiteout error  (Filipe Manana)
During a rename whiteout, if btrfs_whiteout_for_rename() returns an error we can end up returning from btrfs_rename() with the log context object still in the root's log context list - this happens if 'sync_log' was set to true before we called btrfs_whiteout_for_rename() and it is dangerous because we end up with a corrupt linked list (root->log_ctxs), as the log context object was allocated on the stack.

After btrfs_rename() returns, any task that is running btrfs_sync_log() concurrently can end up crashing because that linked list is traversed by btrfs_sync_log() (through btrfs_remove_all_log_ctxs()). That results in the same issue that commit e6c617102c7e4 ("Btrfs: fix log context list corruption after rename exchange operation") fixed.

Fixes: d4682ba03ef618 ("Btrfs: sync log after logging new name")
CC: stable@vger.kernel.org # 4.19+
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
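The bug class is easy to reproduce outside the kernel. A minimal userspace analogue (hypothetical names, not btrfs code): a node allocated on the stack is linked into a shared list, and an error path returns without unlinking it.

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    static void list_init(struct list_head *h) { h->next = h->prev = h; }

    static void list_add(struct list_head *n, struct list_head *h)
    {
            n->next = h->next; n->prev = h;
            h->next->prev = n; h->next = n;
    }

    static void list_del(struct list_head *n)
    {
            n->prev->next = n->next;
            n->next->prev = n->prev;
    }

    static struct list_head log_ctxs;       /* plays root->log_ctxs */

    static int do_rename(int fail)
    {
            struct list_head ctx;           /* on the stack, like btrfs_log_ctx */

            list_add(&ctx, &log_ctxs);
            if (fail)
                    return -1;              /* BUG: ctx is still linked in */
            list_del(&ctx);                 /* the correct path unlinks it */
            return 0;
    }

    int main(void)
    {
            list_init(&log_ctxs);
            do_rename(1);
            /* log_ctxs.next now points into a dead stack frame; any
             * concurrent walker (btrfs_sync_log() in the real code)
             * can crash on it. */
            printf("dangling node: %p\n", (void *)log_ctxs.next);
            return 0;
    }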
2020-03-13  Merge tag 'io_uring-5.6-2020-03-13' of git://git.kernel.dk/linux-block  (Linus Torvalds)
Pull io_uring fix from Jens Axboe:
 "Just a single fix here, improving the RCU callback ordering from last week. After a bit more perusing by Paul, he poked a hole in the original"

* tag 'io_uring-5.6-2020-03-13' of git://git.kernel.dk/linux-block:
  io_uring: ensure RCU callback ordering with rcu_barrier()