| author | Linus Torvalds <torvalds@linux-foundation.org> | 2022-08-02 13:20:44 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-08-02 13:20:44 -0700 |
| commit | b349b1181d24af1c151134a3c39725e94a5619dd (patch) | |
| tree | 7347cc4035de947c22e575ac7c649c0fa8658dd1 /io_uring/statx.c | |
| parent | efb2883060afc79638bb1eb19e2c30e7f6c5a178 (diff) | |
| parent | f6b543fd03d347e8bf245cee4f2d54eb6ffd8fcb (diff) | |
Merge tag 'for-5.20/io_uring-2022-07-29' of git://git.kernel.dk/linux-block
Pull io_uring updates from Jens Axboe:
- As per (valid) complaint in the last merge window, fs/io_uring.c has
grown quite large these days. io_uring isn't really tied to fs
either, as it supports a wide variety of functionality outside of
that.
Move the code to io_uring/ and split it into files that each
implement a specific request type, splitting some code out into
helpers as well. The code is organized a lot better like this, and
io_uring.c is now < 4K LOC (me).
- Deprecate the epoll_ctl opcode. It'll still work, just trigger a
warning once if used. If we don't get any complaints on this, and I
don't expect any, then we can fully remove it in a future release
(me).
- Improve the cancel hash locking (Hao)
- kbuf cleanups (Hao)
- Efficiency improvements to the task_work handling (Dylan, Pavel)
- Provided buffer improvements (Dylan)
- Add multishot support for recv/recvmsg. This is similar to the
multishot support we have for accept (or poll), where a single SQE
can trigger every time data is received. For applications that expect
to do more than a few receives on an instantiated socket, this
greatly improves efficiency (Dylan); a minimal liburing sketch
follows after this list.
- Efficiency improvements for poll handling (Pavel)
- Poll cancelation improvements (Pavel)
- Allow specifying a range for direct descriptor allocations (Pavel)
- Cleanup the cqe32 handling (Pavel)
- Move io_uring types to greatly cleanup the tracing (Pavel)
- Tons of great code cleanups and improvements (Pavel)
- Add a way to do sync cancelations rather than going through the
sqe -> cqe interface, as that's a lot easier to use for some use
cases (me); see the sync-cancel sketch after this list.
- Add support to IORING_OP_MSG_RING for sending direct descriptors to a
different ring. This avoids the usually problematic SCM case, as we
disallow those (me).
- Make the per-command alloc cache we use for apoll generic, place
limits on it, and use it for netmsg as well (me).
- Various cleanups (me, Michal, Gustavo, Uros)
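
As a point of reference for the multishot recv/recvmsg item above, here is a
minimal userspace sketch using liburing, assuming liburing >= 2.3 (for
io_uring_prep_recv_multishot()) and a provided-buffer group 0 registered
beforehand; the socket setup and buffer-group id are placeholders for the
example:

```c
/* Minimal sketch: multishot recv via liburing (assumes liburing >= 2.3 and a
 * provided-buffer group 0 registered elsewhere, e.g. with
 * io_uring_register_buf_ring()). */
#include <liburing.h>
#include <stdio.h>

static int recv_multishot_loop(struct io_uring *ring, int sockfd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;

	/* One SQE arms the receive; each arriving chunk of data posts its own
	 * CQE.  buf/len are NULL/0 because data lands in provided buffers. */
	io_uring_prep_recv_multishot(sqe, sockfd, NULL, 0, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = 0;
	io_uring_submit(ring);

	for (;;) {
		int ret = io_uring_wait_cqe(ring, &cqe);
		int res;
		unsigned int flags;

		if (ret < 0)
			return ret;
		res = cqe->res;
		flags = cqe->flags;
		io_uring_cqe_seen(ring, cqe);

		if (res < 0)
			return res;
		printf("got %d bytes in buffer %u\n", res,
		       flags >> IORING_CQE_BUFFER_SHIFT);
		/* A cleared IORING_CQE_F_MORE means the request terminated
		 * (socket closed, error, or buffers exhausted) and would need
		 * to be re-armed. */
		if (!(flags & IORING_CQE_F_MORE))
			return 0;
	}
}
```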
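Likewise, a minimal sketch of the synchronous cancel interface, assuming the
liburing wrapper io_uring_register_sync_cancel() around
IORING_REGISTER_SYNC_CANCEL; canceling by user_data with no timeout is just
one way to use it:

```c
/* Minimal sketch: synchronous cancelation (assumes liburing >= 2.3, which
 * wraps IORING_REGISTER_SYNC_CANCEL). */
#include <liburing.h>
#include <string.h>

static int cancel_by_user_data_sync(struct io_uring *ring, __u64 user_data)
{
	struct io_uring_sync_cancel_reg reg;

	memset(&reg, 0, sizeof(reg));
	reg.addr = user_data;		/* match requests by user_data */
	reg.fd = -1;			/* not canceling by fd */
	reg.timeout.tv_sec = -1;	/* -1/-1: wait without a timeout */
	reg.timeout.tv_nsec = -1;

	/* Unlike IORING_OP_ASYNC_CANCEL, the result is returned directly
	 * instead of being posted as a CQE. */
	return io_uring_register_sync_cancel(ring, &reg);
}
```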
* tag 'for-5.20/io_uring-2022-07-29' of git://git.kernel.dk/linux-block: (172 commits)
io_uring: ensure REQ_F_ISREG is set async offload
net: fix compat pointer in get_compat_msghdr()
io_uring: Don't require reinitable percpu_ref
io_uring: fix types in io_recvmsg_multishot_overflow
io_uring: Use atomic_long_try_cmpxchg in __io_account_mem
io_uring: support multishot in recvmsg
net: copy from user before calling __get_compat_msghdr
net: copy from user before calling __copy_msghdr
io_uring: support 0 length iov in buffer select in compat
io_uring: fix multishot ending when not polled
io_uring: add netmsg cache
io_uring: impose max limit on apoll cache
io_uring: add abstraction around apoll cache
io_uring: move apoll cache to poll.c
io_uring: consolidate hash_locked io-wq handling
io_uring: clear REQ_F_HASH_LOCKED on hash removal
io_uring: don't race double poll setting REQ_F_ASYNC_DATA
io_uring: don't miss setting REQ_F_DOUBLE_POLL
io_uring: disable multishot recvmsg
io_uring: only trace one of complete or overflow
...
Diffstat (limited to 'io_uring/statx.c')
| -rw-r--r-- | io_uring/statx.c | 73 |
1 file changed, 73 insertions, 0 deletions
diff --git a/io_uring/statx.c b/io_uring/statx.c
new file mode 100644
index 000000000000..6056cd7f4876
--- /dev/null
+++ b/io_uring/statx.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/file.h>
+#include <linux/io_uring.h>
+
+#include <uapi/linux/io_uring.h>
+
+#include "../fs/internal.h"
+
+#include "io_uring.h"
+#include "statx.h"
+
+struct io_statx {
+	struct file *file;
+	int dfd;
+	unsigned int mask;
+	unsigned int flags;
+	struct filename *filename;
+	struct statx __user *buffer;
+};
+
+int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+{
+	struct io_statx *sx = io_kiocb_to_cmd(req);
+	const char __user *path;
+
+	if (sqe->buf_index || sqe->splice_fd_in)
+		return -EINVAL;
+	if (req->flags & REQ_F_FIXED_FILE)
+		return -EBADF;
+
+	sx->dfd = READ_ONCE(sqe->fd);
+	sx->mask = READ_ONCE(sqe->len);
+	path = u64_to_user_ptr(READ_ONCE(sqe->addr));
+	sx->buffer = u64_to_user_ptr(READ_ONCE(sqe->addr2));
+	sx->flags = READ_ONCE(sqe->statx_flags);
+
+	sx->filename = getname_flags(path,
+				     getname_statx_lookup_flags(sx->flags),
+				     NULL);
+
+	if (IS_ERR(sx->filename)) {
+		int ret = PTR_ERR(sx->filename);
+
+		sx->filename = NULL;
+		return ret;
+	}
+
+	req->flags |= REQ_F_NEED_CLEANUP;
+	return 0;
+}
+
+int io_statx(struct io_kiocb *req, unsigned int issue_flags)
+{
+	struct io_statx *sx = io_kiocb_to_cmd(req);
+	int ret;
+
+	if (issue_flags & IO_URING_F_NONBLOCK)
+		return -EAGAIN;
+
+	ret = do_statx(sx->dfd, sx->filename, sx->flags, sx->mask, sx->buffer);
+	io_req_set_res(req, ret, 0);
+	return IOU_OK;
+}
+
+void io_statx_cleanup(struct io_kiocb *req)
+{
+	struct io_statx *sx = io_kiocb_to_cmd(req);
+
+	if (sx->filename)
+		putname(sx->filename);
+}
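
For context, a minimal sketch of how an application drives this request type
from userspace via liburing's io_uring_prep_statx() helper, assuming glibc
exposes struct statx through <sys/stat.h>; the queue depth, path, and mask are
placeholders for the example:

```c
/* Minimal sketch: issuing IORING_OP_STATX through liburing. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	struct statx stx;
	int ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	/* Mirrors the prep side above: dfd/path/flags/mask/buffer map onto
	 * sqe->fd, sqe->addr, sqe->statx_flags, sqe->len and sqe->addr2. */
	io_uring_prep_statx(sqe, AT_FDCWD, "/etc/hostname", 0, STATX_SIZE, &stx);
	io_uring_submit(&ring);

	ret = io_uring_wait_cqe(&ring, &cqe);
	if (!ret && cqe->res == 0)
		printf("size: %llu\n", (unsigned long long)stx.stx_size);
	io_uring_cqe_seen(&ring, cqe);
	io_uring_queue_exit(&ring);
	return 0;
}
```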
