| author | Jens Axboe <axboe@kernel.dk> | 2025-06-13 15:24:53 -0600 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2025-06-13 15:26:17 -0600 |
| commit | b62e0efd8a8571460d05922862a451855ebdf3c6 | |
| tree | 39588df6d03e0622b94c9d39156f772459785386 | |
| parent | 26ec15e4b0c1d7b25214d9c0be1d50492e2f006c | |
io_uring: run local task_work from ring exit IOPOLL reaping
In preparation for needing to shift NVMe passthrough to always use
task_work for polled IO completions, ensure that those are suitably
run at exit time. See commit:
9ce6c9875f3e ("nvme: always punt polled uring_cmd end_io work to task_work")
for details on why that is necessary.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
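For context, the sketch below models what the new hunk accomplishes. It is a standalone illustration, not the kernel's actual implementation: `work_node`, `ring_ctx`, `move_work_from_local()`, and `try_reap_events()` are hypothetical stand-ins for the real task_work types, `io_move_task_work_from_local()`, and `io_iopoll_try_reap_events()`. With IORING_SETUP_DEFER_TASKRUN, completions are queued as local task_work that only the submitter task runs; if polled completions arrive via task_work while the ring is exiting, they must be moved somewhere another context can run them, or they are stranded.

```c
#include <stdio.h>

/* One deferred completion; fn runs the actual completion work. */
struct work_node {
	struct work_node *next;
	void (*fn)(void);
};

/* Illustrative ring context: a local list only the submitter task
 * runs, plus a fallback list that any context may run. */
struct ring_ctx {
	unsigned int flags;
	struct work_node *local_work;
	struct work_node *fallback_work;
};

#define SETUP_DEFER_TASKRUN 0x1u

/* Sketch of what io_move_task_work_from_local() achieves: splice
 * pending local task_work onto a list other contexts can run, so
 * completions are not lost when the submitter task is exiting. */
static void move_work_from_local(struct ring_ctx *ctx)
{
	while (ctx->local_work) {
		struct work_node *node = ctx->local_work;

		ctx->local_work = node->next;
		node->next = ctx->fallback_work;
		ctx->fallback_work = node;
	}
}

/* Sketch of the patched exit path: after the iopoll reap loop,
 * flush local task_work for DEFER_TASKRUN rings. */
static void try_reap_events(struct ring_ctx *ctx)
{
	/* ... iopoll reaping under the ring lock would happen here ... */
	if (ctx->flags & SETUP_DEFER_TASKRUN)
		move_work_from_local(ctx);
}

static void complete_io(void) { puts("polled completion ran"); }

int main(void)
{
	struct work_node node = { .next = NULL, .fn = complete_io };
	struct ring_ctx ctx = { .flags = SETUP_DEFER_TASKRUN,
				.local_work = &node };

	try_reap_events(&ctx);
	for (struct work_node *n = ctx.fallback_work; n; n = n->next)
		n->fn();
	return 0;
}
```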
| -rw-r--r-- | io_uring/io_uring.c | 3 |

1 file changed, 3 insertions(+), 0 deletions(-)
```diff
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4e32f808d07d..5111ec040c53 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1523,6 +1523,9 @@ static __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
 		}
 	}
 	mutex_unlock(&ctx->uring_lock);
+
+	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
+		io_move_task_work_from_local(ctx);
 }
 
 static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
```
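From userspace, the ring configuration this exit path serves looks like the minimal liburing sketch below (an illustration, assuming a kernel and liburing recent enough to support these flags; note that IORING_SETUP_DEFER_TASKRUN requires IORING_SETUP_SINGLE_ISSUER). Tearing such a ring down while polled I/O, e.g. NVMe passthrough via uring_cmd, is still in flight is exactly when io_iopoll_try_reap_events() now flushes local task_work.

```c
#include <stdio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	/* A polled ring with deferred task_work: the combination whose
	 * exit-time reaping the patch above touches. */
	unsigned int flags = IORING_SETUP_IOPOLL |
			     IORING_SETUP_SINGLE_ISSUER |
			     IORING_SETUP_DEFER_TASKRUN;
	int ret = io_uring_queue_init(8, &ring, flags);

	if (ret < 0) {
		fprintf(stderr, "io_uring_queue_init: %d\n", ret);
		return 1;
	}
	/* ... submit polled I/O here; completions for a DEFER_TASKRUN
	 * ring are run as local task_work by this task ... */
	io_uring_queue_exit(&ring);
	return 0;
}
```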
