author     Jason Gunthorpe <jgg@nvidia.com>    2023-04-04 11:03:24 -0300
committer  Jason Gunthorpe <jgg@nvidia.com>    2023-04-04 11:04:30 -0300
commit     692d42d411b7db6a76382537fccbee3a12a2bcdb
tree       24770529cf173188bc5d2d0d9331c0ac723b631e /io_uring/io_uring.c
parent     c52159b5be7894540acdc7a35791c0b57097fa4c
parent     13a0d1ae7ee6b438f5537711a8c60cba00554943
Merge branch 'iommufd/for-rc' into for-next
The following selftest patch requires both the bug fixes and the
improvements to the selftest framework.
* iommufd/for-rc:
iommufd: Do not corrupt the pfn list when doing batch carry
iommufd: Fix unpinning of pages when an access is present
iommufd: Check for uptr overflow
Linux 6.3-rc5
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
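[Editor's note] One of the fixes listed above, "iommufd: Check for uptr overflow", refers to rejecting user-pointer ranges whose start plus length would wrap around the address space. The sketch below is only a standalone illustration of that general pattern, with hypothetical names; it is not the actual iommufd patch. It uses the compiler's __builtin_add_overflow(), which the kernel's check_add_overflow() helper wraps.

/*
 * Hypothetical, standalone illustration of a "does uptr + length wrap?"
 * check; names and structure are NOT taken from the actual iommufd fix.
 */
#include <errno.h>
#include <stdio.h>

static int validate_user_range(unsigned long uptr, unsigned long length)
{
	unsigned long last;

	if (!length)
		return -EINVAL;
	/* __builtin_add_overflow() returns nonzero when the sum wraps. */
	if (__builtin_add_overflow(uptr, length - 1, &last))
		return -EOVERFLOW;
	return 0;
}

int main(void)
{
	/* A range ending past ULONG_MAX must be rejected. */
	printf("%d\n", validate_user_range(~0UL - 10, 100)); /* -EOVERFLOW */
	printf("%d\n", validate_user_range(4096, 4096));     /* 0 */
	return 0;
}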
Diffstat (limited to 'io_uring/io_uring.c')
-rw-r--r--   io_uring/io_uring.c   |   4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index fd1cc35a1c00..722624b6d0dc 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1499,14 +1499,14 @@ void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node)
 static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 	__must_hold(&ctx->uring_lock)
 {
-	struct io_wq_work_node *node, *prev;
 	struct io_submit_state *state = &ctx->submit_state;
+	struct io_wq_work_node *node;

 	__io_cq_lock(ctx);
 	/* must come first to preserve CQE ordering in failure cases */
 	if (state->cqes_count)
 		__io_flush_post_cqes(ctx);
-	wq_list_for_each(node, prev, &state->compl_reqs) {
+	__wq_list_for_each(node, &state->compl_reqs) {
 		struct io_kiocb *req = container_of(node, struct io_kiocb,
 					    comp_list);
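[Editor's note] For readers unfamiliar with io_uring's intrusive singly linked lists: the hunk above swaps a walk that also tracks a previous-node cursor (wq_list_for_each(node, prev, ...)) for a plain single-cursor walk (__wq_list_for_each(node, ...)), because nothing in the flush path reads the previous node. The following is a minimal standalone sketch in user space with simplified types, not the kernel's io_uring/slist.h, illustrating the two iteration shapes.

/*
 * Standalone sketch (simplified node type, not struct io_wq_work_node)
 * comparing a two-cursor list walk with a single-cursor walk.
 */
#include <stdio.h>
#include <stddef.h>

struct node {
	struct node *next;
	int value;
};

struct node_list {
	struct node *first;
};

/* Two-cursor walk: "prv" trails "pos", analogous to wq_list_for_each(). */
#define list_for_each_with_prev(pos, prv, head) \
	for ((pos) = (head)->first, (prv) = NULL; (pos); \
	     (prv) = (pos), (pos) = (pos)->next)

/* Single-cursor walk, analogous to __wq_list_for_each(). */
#define list_for_each_plain(pos, head) \
	for ((pos) = (head)->first; (pos); (pos) = (pos)->next)

int main(void)
{
	struct node c = { NULL, 3 }, b = { &c, 2 }, a = { &b, 1 };
	struct node_list head = { &a };
	struct node *pos;

	/*
	 * The flush-completions loop only reads each node once and never
	 * needs its predecessor, so the single-cursor form is enough.
	 */
	list_for_each_plain(pos, &head)
		printf("%d\n", pos->value);
	return 0;
}

In the kernel change itself, the visible effect is simply that the now-unused local variable "prev" is dropped along with the heavier iteration macro.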