author    Pavel Begunkov <asml.silence@gmail.com>    2023-09-07 13:50:07 +0100
committer Jens Axboe <axboe@kernel.dk>               2023-09-07 09:02:27 -0600
commit    45500dc4e01c167ee063f3dcc22f51ced5b2b1e9 (patch)
tree      fbb32c75506c587f3506ff75b5e706f3a9784995 /io_uring/io-wq.h
parent    76d3ccecfa186af3120e206d62f03db1a94a535f (diff)
io_uring: break out of iowq iopoll on teardown
io-wq will retry iopoll even when it failed with -EAGAIN. If that races with task exit, which sets TIF_NOTIFY_SIGNAL for all its workers, such workers might spin indefinitely, retrying iopoll again and again and failing each time on some allocation, wait, etc. Don't keep spinning if io-wq is dying.

Fixes: 561fb04a6a225 ("io_uring: replace workqueue usage with io-wq")
Cc: stable@vger.kernel.org
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
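The new check is wired into the io-wq issue retry loop in io_uring/io_uring.c, which is not part of the hunk shown on this page. As a rough, non-verbatim sketch of that caller side, assuming the io_wq_submit_work() retry loop of this kernel series (req, needs_poll and issue_flags are its existing locals), the shape of the fix is:

	do {
		ret = io_issue_sqe(req, issue_flags);
		if (ret != -EAGAIN)
			break;
		/*
		 * IOPOLL rings can return -EAGAIN even for a sync issue,
		 * e.g. when no block-layer request slots are free. Retrying
		 * is normally fine, but once the io-wq is being torn down
		 * every retry fails again, so bail out instead of spinning.
		 */
		if (!needs_poll) {
			if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
				break;
			if (io_wq_worker_stopped())	/* the new teardown check */
				break;
			cond_resched();
			continue;
		}
		/* ... otherwise arm poll and stop retrying ... */
		break;
	} while (1);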
Diffstat (limited to 'io_uring/io-wq.h')
-rw-r--r--  io_uring/io-wq.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
index 06d9ca90c577..2b2a6406dd8e 100644
--- a/io_uring/io-wq.h
+++ b/io_uring/io-wq.h
@@ -52,6 +52,7 @@ void io_wq_hash_work(struct io_wq_work *work, void *val);
 
 int io_wq_cpu_affinity(struct io_uring_task *tctx, cpumask_var_t mask);
 int io_wq_max_workers(struct io_wq *wq, int *new_count);
+bool io_wq_worker_stopped(void);
 
 static inline bool io_wq_is_hashed(struct io_wq_work *work)
 {
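This header hunk only adds the declaration; the definition lands in io_uring/io-wq.c in the same commit. As a hedged sketch of what it checks, assuming io-wq's existing IO_WQ_BIT_EXIT state bit and the current->worker_private bookkeeping used by io-wq workers in this series:

	bool io_wq_worker_stopped(void)
	{
		struct io_worker *worker = current->worker_private;

		if (WARN_ON_ONCE(!io_wq_current_is_worker()))
			return true;

		/* io-wq sets IO_WQ_BIT_EXIT when it starts tearing down */
		return test_bit(IO_WQ_BIT_EXIT, &worker->wq->state);
	}

The helper takes no arguments because it is only meaningful from worker context: the wq whose teardown state matters is found via the current task.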