path: root/fs/io_uring.c
2021-04-11  io_uring: don't alter iopoll reissue fail ret code  (Pavel Begunkov)
When io_resubmit_prep() fails in io_complete_rw_iopoll(), we currently change the return code to -EIO to prevent io_iopoll_complete() from resubmitting the request. Mark such requests with a new flag (REQ_F_DONT_REISSUE) instead and retain the original return value. This also removes io_rw_reissue() from io_iopoll_complete(), which later patches will build on. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
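The iopoll completion side then keys resubmission off the flag rather than a mangled return code. A minimal sketch of the idea, with the surrounding logic simplified and field names assumed from the message:

    /* io_complete_rw_iopoll(): keep res intact, tag the request instead */
    if (res != -EAGAIN || !io_resubmit_prep(req))
        req->flags |= REQ_F_DONT_REISSUE;

    /* io_iopoll_complete(): reissue only untagged -EAGAIN requests */
    if (READ_ONCE(req->result) == -EAGAIN &&
        !(req->flags & REQ_F_DONT_REISSUE)) {
        req->iopoll_completed = 0;
        io_queue_async_work(req);
        continue;
    }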
2021-04-11  io_uring: optimise kiocb_end_write for !ISREG  (Pavel Begunkov)
file_end_write() is only for regular files, so the function does a couple of dereferences to get the inode and check for that. However, we already have REQ_F_ISREG at hand; just use it and inline file_end_write(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
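The resulting helper can test the flag and end write-freeze protection directly; a sketch, assuming the request carries its file and flags as usual:

    static void kiocb_end_write(struct io_kiocb *req)
    {
        /* only regular files hold sb freeze protection across the write */
        if (req->flags & REQ_F_ISREG) {
            struct super_block *sb = file_inode(req->file)->i_sb;

            /* tell lockdep we inherited freeze protection from submission */
            __sb_writers_acquired(sb, SB_FREEZE_WRITE);
            sb_end_write(sb);
        }
    }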
2021-04-11  io_uring: kill unused REQ_F_NO_FILE_TABLE  (Pavel Begunkov)
current->files is always valid now, even for io-wq threads, so kill the no-longer-used REQ_F_NO_FILE_TABLE. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: don't init req->work fully in advance  (Pavel Begunkov)
req->work is mostly unused unless the request is punted, and io_init_req() is too hot to fully initialise it. Fortunately, we can skip initialising work.next, as it's controlled by io-wq, and we can avoid touching work.flags by moving everything related into io_prep_async_work(). The only field left is req->work.creds, but there is nothing to be done about it, so keep maintaining it. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: remove tctx->sqpoll  (Pavel Begunkov)
struct io_uring_task::sqpoll is not used anymore; kill it. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: don't do extra EXITING cancellations  (Pavel Begunkov)
io_match_task() matches all requests of a PF_EXITING task, even though those may be perfectly valid requests. That was necessary for SQPOLL cancellation, but SQPOLL now kills all its requests before exiting via io_uring_cancel_sqpoll(), so the special case is no longer needed. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: don't clear REQ_F_LINK_TIMEOUT  (Pavel Begunkov)
REQ_F_LINK_TIMEOUT is a hint to look for linked timeouts to cancel; we leave it set even after the timeout has fired. Hence there's no need to clear it in io_kill_linked_timeout(): that's safe, as the function is called only once. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: optimise io_req_task_work_add()  (Pavel Begunkov)
Inline io_task_work_add() into io_req_task_work_add(). They both work on a request, so keeping them separate doesn't make things much clearer, but merging them allows optimisation. Apart from small wins like not reading req->ctx and not calculating @notify in the hot path (i.e. with tctx->task_state set), it avoids calling wake_up_process() for every single add: we wake only after task_work_add() has actually been done. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
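The shape of the merged function, sketched with field names assumed from this series (tctx->task_list, tctx->task_state) and the SQPOLL notify-mode choice omitted; the key point is the single deferred wakeup:

    static int io_req_task_work_add(struct io_kiocb *req)
    {
        struct task_struct *tsk = req->task;
        struct io_uring_task *tctx = tsk->io_uring;
        unsigned long flags;

        spin_lock_irqsave(&tctx->task_lock, flags);
        wq_list_add_tail(&req->io_task_work.node, &tctx->task_list);
        spin_unlock_irqrestore(&tctx->task_lock, flags);

        /* hot path: task_work already pending, nothing more to do */
        if (test_bit(0, &tctx->task_state) ||
            test_and_set_bit(0, &tctx->task_state))
            return 0;

        /* wake only after task_work_add() has actually succeeded */
        if (!task_work_add(tsk, &tctx->task_work, TWA_SIGNAL)) {
            wake_up_process(tsk);
            return 0;
        }
        return -1;  /* caller runs the fallback/cancellation path */
    }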
2021-04-11  io_uring: abolish old io_put_file()  (Pavel Begunkov)
io_put_file() doesn't do a good job of generating good code. Inline it, so we can check REQ_F_FIXED_FILE first, prioritising the FIXED_FILE case over requests without files and saving a memory load in that case. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
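An illustrative sketch of the shape after inlining, assuming fixed files need no per-request put and everything else goes through fput():

    static inline void io_put_file(struct file *file)
    {
        if (file)
            fput(file);
    }

    /* call site: test REQ_F_FIXED_FILE first, skipping the file load */
    if (!(req->flags & REQ_F_FIXED_FILE))
        io_put_file(req->file);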
2021-04-11  io_uring: optimise io_dismantle_req() fast path  (Pavel Begunkov)
Reshuffle the io_dismantle_req() checks to put most of the slow-path work under a single if. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: inline io_clean_op()'s fast path  (Pavel Begunkov)
Inline io_clean_op(), leaving __io_clean_op() but renaming it. This will be used in the following patches. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: remove __io_req_task_cancel()  (Pavel Begunkov)
Both io_req_complete_failed() and __io_req_task_cancel() do the same thing: set the failure flag, put both req refs and emit a CQE. The former is a bit more advanced, as it also puts the req back into the req cache, so make it take over __io_req_task_cancel() and remove the latter. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: add helper flushing locked_free_list  (Pavel Begunkov)
Add a new helper, io_flush_cached_locked_reqs(), that splices locked_free_list to free_list and does it right, with all the synchronisation and invariant re-initialisation required. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
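A minimal sketch of such a helper, assuming the locked_free_list/free_list pair is guarded by ctx->completion_lock as elsewhere in this file:

    static void io_flush_cached_locked_reqs(struct io_ring_ctx *ctx,
                                            struct io_comp_state *cs)
    {
        spin_lock_irq(&ctx->completion_lock);
        list_splice_init(&cs->locked_free_list, &cs->free_list);
        cs->locked_free_nr = 0;  /* re-establish the counter invariant */
        spin_unlock_irq(&ctx->completion_lock);
    }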
2021-04-11  io_uring: refactor io_free_req_deferred()  (Pavel Begunkov)
We don't care about the return value in io_free_req_deferred(), so make the code a bit more concise. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: inline io_put_req and friends  (Pavel Begunkov)
One big omission is that io_put_req() hasn't been marked inline, and at least gcc 9 doesn't inline it by itself. It's really hot, and an extra function call is intolerable, especially when it doesn't put the final ref. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
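The hot-path body being inlined is tiny; a sketch, using the req_ref_* helpers from the atomic_t refcount change below in this log:

    static inline void io_put_req(struct io_kiocb *req)
    {
        /* usually not the final ref; the free path stays out of line */
        if (req_ref_put_and_test(req))
            io_free_req(req);
    }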
2021-04-11  io_uring: refactor rsrc refnode allocation  (Pavel Begunkov)
There are two problems: 1) we always allocate refnodes in advance and free them if they haven't been used. That's expensive, taking two allocations, one of which is percpu, and it may be pretty common to not actually use them. 2) The current API of allocating a refnode and setting some of its fields is error prone: we don't ever want a file node running a fixed-buffer callback... Solve both with a pre-init/get API. Pre-init just leaves the node for later if it's not used, and get (i.e. io_rsrc_refnode_get()) requires explicitly passing all the arguments setting callbacks etc., so it's more resilient. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
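The pattern, in a hedged sketch (names other than io_rsrc_refnode_get() are illustrative, not the commit's exact signatures): pre-allocate a spare node where sleeping is allowed, then hand it out fully parameterised so no half-initialised node can escape.

    /* pre-init: make sure a spare node exists; may allocate, may sleep */
    static int io_rsrc_refnode_prealloc(struct io_ring_ctx *ctx)
    {
        if (!ctx->rsrc_backup_node)
            ctx->rsrc_backup_node = alloc_fixed_rsrc_ref_node(ctx);
        return ctx->rsrc_backup_node ? 0 : -ENOMEM;
    }

    /* get: consume the spare, forcing the caller to set every field */
    static struct fixed_rsrc_ref_node *
    io_rsrc_refnode_get(struct io_ring_ctx *ctx,
                        struct fixed_rsrc_data *rsrc_data,
                        void (*rsrc_put)(struct io_ring_ctx *ctx,
                                         struct io_rsrc_put *prsrc))
    {
        struct fixed_rsrc_ref_node *node = ctx->rsrc_backup_node;

        WARN_ON_ONCE(!node);
        ctx->rsrc_backup_node = NULL;
        node->rsrc_data = rsrc_data;
        node->rsrc_put = rsrc_put;
        return node;
    }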
2021-04-11  io_uring: refactor io_flush_cached_reqs()  (Pavel Begunkov)
Emphasise that the return value of io_flush_cached_reqs() depends on the number of requests in the cache. It reads nicer and might keep analysis tools from producing false negatives. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: optimise success case of __io_queue_sqe  (Pavel Begunkov)
Move the check for a successfully issued request first. It's not much of a difference; it just generates slightly better code for me. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: inline __io_queue_linked_timeout()  (Pavel Begunkov)
Inline __io_queue_linked_timeout(); we don't need it as a separate function. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: keep io_req_free_batch() call locality  (Pavel Begunkov)
Don't do a function call (io_dismantle_req()) in the middle of io_req_free_batch(); place it near the other function calls, otherwise it may lead to excessive register spilling. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: optimise tctx node checks/alloc  (Pavel Begunkov)
First of all, we need to set tctx->sqpoll only when we add a new entry into ->xa, so move it out of the hot path. Also extract a hot-path version of io_uring_add_task_file() as an inline helper. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: optimise io_uring_enter()  (Pavel Begunkov)
Add unlikely annotations, because my compiler pretty much mispredicts every first check; apart from causing jumping around in the fast path, it also generates extra instructions, like setting the ret value in advance. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
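The sort of annotation meant here, as a hedged example (the message doesn't list the exact checks in io_uring_enter() that got annotated):

    /* argument sanity checks are expected to pass; tell the compiler so */
    if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
                           IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG)))
        return -EINVAL;

    f = fdget(fd);
    if (unlikely(!f.file))
        return -EBADF;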
2021-04-11  io_uring: don't take ctx refs in task_work handler  (Pavel Begunkov)
__tctx_task_work() guarantees that the ctx won't be killed while task_works are running, so we can remove the now unnecessary ctx pinning for internally armed polling. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: transform ret == 0 for poll cancelation completions  (Jens Axboe)
We can set canceled == true and complete out-of-line; ensure that we catch that case and correctly return -ECANCELED if the poll operation got canceled. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: correct comment on poll vs iopoll  (Jens Axboe)
The correct function is io_iopoll_complete(), which deals with completions of IOPOLL requests, not io_poll_complete(). Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: cache async and regular file state for fixed files  (Jens Axboe)
We have to dig quite deep to check, in particular, whether or not a file supports a fast-path nonblock attempt. For fixed files, we can do this lookup once and cache the state instead. This adds two new bits to track whether we support an async read/write attempt, and lines the REQ_F_ISREG bit up with those two. The file slot reuses the bottom 3 bits (or 2, on 32-bit) of the file pointer to cache that state, and we mask them off when we go to use a fixed file. Signed-off-by: Jens Axboe <axboe@kernel.dk>
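Pointer alignment is what frees up the low bits; the layout could look like this (FFS_* names follow the approach described, exact values assumed):

    #define FFS_ASYNC_READ   0x1UL
    #define FFS_ASYNC_WRITE  0x2UL
    #ifdef CONFIG_64BIT
    #define FFS_ISREG        0x4UL   /* the third bit only fits on 64-bit */
    #else
    #define FFS_ISREG        0x0UL
    #endif
    #define FFS_MASK         ~(FFS_ASYNC_READ|FFS_ASYNC_WRITE|FFS_ISREG)

    /* store: tag the pointer once, at file-table lookup time */
    file_slot = (unsigned long) file | state_bits;
    /* load: mask the tag bits back off before using the file */
    file = (struct file *) (file_slot & FFS_MASK);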
2021-04-11  io_uring: don't check for io_uring_fops for fixed files  (Jens Axboe)
We don't allow io_uring files at registration time, so limit the check for needing inflight tracking in io_file_get() to the non-fixed path. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: simplify io_sqd_update_thread_idle()  (Pavel Begunkov)
Use the more comprehensible max() instead of hand-coding it with ifs in io_sqd_update_thread_idle(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
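The result plausibly reduces to a single max() per ring; a sketch, assuming sqd keeps its attached rings on ctx_list:

    static void io_sqd_update_thread_idle(struct io_sq_data *sqd)
    {
        struct io_ring_ctx *ctx;
        unsigned int sq_thread_idle = 0;

        /* the sq thread idles for the longest period any ring asked for */
        list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
            sq_thread_idle = max(sq_thread_idle, ctx->sq_thread_idle);
        sqd->sq_thread_idle = sq_thread_idle;
    }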
2021-04-11  io_uring: switch to atomic_t for io_kiocb reference count  (Jens Axboe)
io_uring manipulates references twice for each request, and hence is very sensitive to performance of the reference count. This commit borrows a trick from:

  commit f958d7b528b1b40c44cfda5eabe2d82760d868c3
  Author: Linus Torvalds <torvalds@linux-foundation.org>
  Date:   Thu Apr 11 10:06:20 2019 -0700

      mm: make page ref count overflow check tighter and more explicit

and switches to atomic_t for references, while still retaining overflow and underflow checks. This is good for a 2-3% increase in peak IOPS on a single core. Before:

  IOPS=2970879, IOS/call=31/31, inflight=128 (128)
  IOPS=2952597, IOS/call=31/31, inflight=128 (128)
  IOPS=2943904, IOS/call=31/31, inflight=128 (128)
  IOPS=2930006, IOS/call=31/31, inflight=96 (96)

and after:

  IOPS=3054354, IOS/call=31/31, inflight=128 (128)
  IOPS=3059038, IOS/call=31/31, inflight=128 (128)
  IOPS=3060320, IOS/call=31/31, inflight=128 (128)
  IOPS=3068256, IOS/call=31/31, inflight=96 (96)

Signed-off-by: Jens Axboe <axboe@kernel.dk>
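The page-ref trick is to treat a ref count at zero or near overflow as a bug to warn about, rather than paying for saturation logic on every operation. Applied to requests, the helpers could look like this (a sketch consistent with the description, not necessarily the exact committed code):

    /* true if refs are 0 or have crept close to overflowing */
    #define req_ref_zero_or_close_to_overflow(req) \
        ((unsigned int) atomic_read(&(req)->refs) + 127u <= 127u)

    static inline void req_ref_get(struct io_kiocb *req)
    {
        WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
        atomic_inc(&req->refs);
    }

    static inline bool req_ref_put_and_test(struct io_kiocb *req)
    {
        WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
        return atomic_dec_and_test(&req->refs);
    }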
2021-04-11  io_uring: wrap io_kiocb reference count manipulation in helpers  (Jens Axboe)
No functional changes in this patch, just in preparation for handling the references a bit more efficiently. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: simplify io_resubmit_prep()  (Pavel Begunkov)
If not for the async_data NULL check, io_resubmit_prep() would already be an rw-specific version of io_req_prep_async(), just slower, because 1) it always goes through io_import_iovec(), even when io_setup_async_rw() follows with the result, and 2) instead of initialising the iovec/iter in place, it does so on-stack and then copies it with io_setup_async_rw(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: merge defer_prep() and prep_async()  (Pavel Begunkov)
Merge the two functions, renaming in favour of the second one as it conveys the meaning better. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: rethink def->needs_async_data  (Pavel Begunkov)
needs_async_data controls allocation of async_data and is used in two cases: 1) when async setup requires it (either by io_req_prep_async() or by the handlers themselves), and 2) when an op always needs additional space to operate, as timeouts do. Opcode preps already don't bother with the second case and allocate unconditionally, so restrict needs_async_data to the first case only and rename it to needs_async_setup. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> [axboe: update for IOPOLL fix] Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: untie alloc_async_data and needs_async_data  (Pavel Begunkov)
All opcode handlers know pretty well whether they need async data or not and can skip testing needs_async_data. The exception is the generic rw path, but it tests the flag by hand anyway. So, let callers check the flag and make io_alloc_async_data() allocate unconditionally. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: refactor out send/recv async setup  (Pavel Begunkov)
IORING_OP_[SEND,RECV] don't need async setup, nor will they get into io_req_prep_async(). Remove them from io_req_prep_async() and remove the needs_async_data checks from the related setup functions. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: use better types for cflags  (Pavel Begunkov)
__io_cqring_fill_event() takes cflags as a long to squeeze it into a u32 in a CQE, while all users pass an int or unsigned. Replace it with unsigned int and store it as u32 in struct io_completion to match the CQE. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: refactor provide/remove buffer locking  (Pavel Begunkov)
Always complete the request while holding the mutex, instead of doing that strange dance with conditional ordering. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: add a helper failing not issued requests  (Pavel Begunkov)
Add a simple helper doing CQE posting, marking the request for link failure, and putting both the submission and completion references. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: further deduplicate file slot selection  (Pavel Begunkov)
io_fixed_file_slot() and io_file_from_index() behave pretty similarly; DRY them up by calling one from the other. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: reuse io_req_task_queue_fail()  (Pavel Begunkov)
Use io_req_task_queue_fail() on the failure path of io_req_task_queue(). It's unlikely to happen, so we don't care about the additional overhead, but it allows keeping the req->result invariant in a single function. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-11  io_uring: avoid taking ctx refs for task-cancel  (Pavel Begunkov)
Don't bother taking ctx->refs for io_req_task_cancel(), because it takes uring_lock before putting a request, and the context is promised to stay alive until the unlock happens. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-08  io_uring: fix rw req completion  (Pavel Begunkov)
WARNING: at fs/io_uring.c:8578 io_ring_exit_work.cold+0x0/0x18

As reissuing is now signalled back via REQ_F_REISSUE and kiocb_done() internally uses __io_complete_rw(), it may stop right after setting the flag, leaving a dangling request. There are tricky edge cases, e.g. reading beyond the file boundary, so the easiest way out is to hand-code the reissue in kiocb_done(), as __io_complete_rw() was doing for us before. Fixes: 230d50d448ac ("io_uring: move reissue into regular IO path") Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f602250d292f8a84cca9a01d747744d1e797be26.1617842918.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-07  io_uring: clear F_REISSUE right after getting it  (Pavel Begunkov)
There are lots of ways an r/w request may continue on its path after getting REQ_F_REISSUE; it's not necessarily io-wq, and can be, e.g., apoll, submitted via io_async_task_func() -> __io_req_task_submit(). Clear the flag right after reading it, so the next attempt is well prepared regardless of how the request will be executed. Fixes: 230d50d448ac ("io_uring: move reissue into regular IO path") Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/11dcead939343f4e27cab0074d34afcab771bfa4.1617842918.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-02  io_uring: fix !CONFIG_BLOCK compilation failure  (Jens Axboe)
kernel test robot correctly pinpoints a compilation failure if CONFIG_BLOCK isn't set:

  fs/io_uring.c: In function '__io_complete_rw':
  >> fs/io_uring.c:2509:48: error: implicit declaration of function 'io_rw_should_reissue'; did you mean 'io_rw_reissue'? [-Werror=implicit-function-declaration]
       2509 |  if ((res == -EAGAIN || res == -EOPNOTSUPP) && io_rw_should_reissue(req)) {
            |                                                ^~~~~~~~~~~~~~~~~~~~
            |                                                io_rw_reissue
  cc1: some warnings being treated as errors

Ensure that we have a stub declaration of io_rw_should_reissue() for !CONFIG_BLOCK. Fixes: 230d50d448ac ("io_uring: move reissue into regular IO path") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-02  io_uring: move reissue into regular IO path  (Jens Axboe)
It's non-obvious how retry is done for block backed files, when it happens off the kiocb done path. It also makes it tricky to deal with the iov_iter handling. Just mark the req as needing a reissue, and handling it from the submission path instead. This makes it directly obvious that we're not re-importing the iovec from userspace past the submit point, and it means that we can just reuse our usual -EAGAIN retry path from the read/write handling. At some point in the future, we'll gain the ability to always reliably return -EAGAIN through the stack. A previous attempt on the block side didn't pan out and got reverted, hence the need to check for this information out-of-band right now. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-01  io_uring: fix EIOCBQUEUED iter revert  (Pavel Begunkov)
iov_iter_revert() is done in completion handlers that happen before read/write returns -EIOCBQUEUED; there's no need to repeat the revert afterwards. Moreover, even though it may appear to be just a no-op, it actually races with 1) the user forging a new iovec of a different size, and 2) reissue, which is done via io-wq and continues completely asynchronously. Fixes: 3e6a0d3c7571c ("io_uring: fix -EAGAIN retry with IOPOLL") Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-04-01  io_uring/io-wq: protect against sprintf overflow  (Pavel Begunkov)
task_pid may be large enough to not fit into the space left in a TASK_COMM_LEN-sized buffer and overflow in sprintf. We don't care that much about uniqueness, so replace it with the safer snprintf(). Reported-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/1702c6145d7e1c46fbc382f28334c02e1a3d3994.1617267273.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
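Illustratively (the buffer and format string here are an assumption about the io-wq worker-naming code, not quoted from the patch), snprintf() keeps the write inside the buffer, truncating the pid digits instead of overrunning:

    char buf[TASK_COMM_LEN];

    /* truncates rather than overflows if the pid needs too many digits */
    snprintf(buf, sizeof(buf), "iou-wrk-%d", task_pid);
    set_task_comm(current, buf);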
2021-04-01  io_uring: don't mark S_ISBLK async work as unbounded  (Jens Axboe)
S_ISBLK is marked as unbounded work for async preparation, because it doesn't match S_ISREG. That is incorrect, as any read/write to a block device is also a bounded operation. Fix it up and ensure that S_ISBLK isn't marked unbounded. Signed-off-by: Jens Axboe <axboe@kernel.dk>
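In io_prep_async_work() terms, the fix plausibly amounts to excluding block devices from the unbounded branch; a sketch under assumed surrounding code:

    if (req->flags & REQ_F_ISREG) {
        if (def->hash_reg_file || (ctx->flags & IORING_SETUP_IOPOLL))
            io_wq_hash_work(&req->work, file_inode(req->file));
    } else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
        /* neither regular file nor block device: truly unbounded work */
        if (def->unbound_nonreg_file)
            req->work.flags |= IO_WQ_WORK_UNBOUND;
    }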
2021-03-30  io_uring: drop sqd lock before handling signals for SQPOLL  (Jens Axboe)
Don't call into get_signal() with the sqd mutex held; it'll fail if we're freezing the task, and we'll get complaints on locks still being held:

  ====================================
  WARNING: iou-sqp-8386/8387 still has locks held!
  5.12.0-rc4-syzkaller #0 Not tainted
  ------------------------------------
  1 lock held by iou-sqp-8386/8387:
   #0: ffff88801e1d2470 (&sqd->lock){+.+.}-{3:3}, at: io_sq_thread+0x24c/0x13a0 fs/io_uring.c:6731

  stack backtrace:
  CPU: 1 PID: 8387 Comm: iou-sqp-8386 Not tainted 5.12.0-rc4-syzkaller #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
  Call Trace:
   __dump_stack lib/dump_stack.c:79 [inline]
   dump_stack+0x141/0x1d7 lib/dump_stack.c:120
   try_to_freeze include/linux/freezer.h:66 [inline]
   get_signal+0x171a/0x2150 kernel/signal.c:2576
   io_sq_thread+0x8d2/0x13a0 fs/io_uring.c:6748

Fold the get_signal() case in with the parking checks, as we need to drop the lock in both cases, and we need to be checking for parking when juggling the lock anyway. Reported-by: syzbot+796d767eb376810256f5@syzkaller.appspotmail.com Fixes: dbe1bdbb39db ("io_uring: handle signals for IO threads like a normal thread") Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-29  io_uring: handle setup-failed ctx in kill_timeouts  (Pavel Begunkov)
general protection fault, probably for non-canonical address 0xdffffc0000000018: 0000 [#1]
KASAN: null-ptr-deref in range [0x00000000000000c0-0x00000000000000c7]
RIP: 0010:io_commit_cqring+0x37f/0xc10 fs/io_uring.c:1318
Call Trace:
 io_kill_timeouts+0x2b5/0x320 fs/io_uring.c:8606
 io_ring_ctx_wait_and_kill+0x1da/0x400 fs/io_uring.c:8629
 io_uring_create fs/io_uring.c:9572 [inline]
 io_uring_setup+0x10da/0x2ae0 fs/io_uring.c:9599
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xae

We can get into wait_and_kill() before ctx->rings has been set up, and hence io_commit_cqring() faults. Mimic poll cancel and commit only when we actually completed events; there can't be any requests to kill if setup failed before initialising the rings. Fixes: 80c4cbdb5ee60 ("io_uring: do post-completion chore on t-out cancel") Reported-by: syzbot+0e905eb8228070c457a0@syzkaller.appspotmail.com Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/660261a48f0e7abf260c8e43c87edab3c16736fa.1617014345.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>