Since commit b749965edda8 ("ublk: remove ublk_commit_and_fetch()"),
ublk_sub_req_ref() no longer uses its struct request *req argument.
So drop the argument from ublk_sub_req_ref(), and from
ublk_need_complete_req(), which only passes it to ublk_sub_req_ref().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Link: https://lore.kernel.org/r/20250715154244.1626810-1-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-18-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add helper ublk_handle_uring_cmd() for handling ublk commands, and make
ublk_handle_cqe() more readable.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-17-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Improve the naming of all kinds of flags by adding their host structure as a
suffix, making the code more readable.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-16-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Remove ublk queue self-defined flags, and use the uapi flags directly.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-15-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Pass 'ublk_thread *' to more common helpers, so we can avoid storing
this reference in 'struct ublk_io'.
Prepare for supporting IO handling from different task contexts.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-14-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
'struct thread' is a task-local structure, and the related code becomes
more readable if we pass it via a parameter.
Meanwhile, pass 'ublk_thread *' to ublk_io_alloc_sqes(); this is natural
since we use a per-thread io_uring for handling IO.
More importantly, it helps a lot with removing the current ubq_daemon or
per-io-task limit.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-13-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The `tag` parameter can be figured out from cqe->user_data, and that is
also the only way to get the info, so remove the `tag` parameter and
let target code retrieve it from cqe->user_data.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-12-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Pass 'const struct ublk_io *' to ublk_[un]map_io() since just io->addr
and io->res are read in the two helpers.
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-11-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Remove ublk_commit_and_fetch() and open code request completion.
Consolidate accesses to struct ublk_io in UBLK_IO_COMMIT_AND_FETCH_REQ. When
the ublk_io daemon task restriction is relaxed in the future, ublk_io will
need to be protected by a lock. Unregister the auto-registered buffer and
complete the request last, as these don't need to happen under the lock.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-10-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add a helper ublk_check_fetch_buf() to validate UBLK_IO_FETCH_REQ's addr.
This doesn't require access to the ublk_io, so it can be done before taking
the ublk_device mutex.
This also fixes a missing -EINVAL return value in case of an early
failure from ublk_fetch().
Fixes: b69b8edfb27d ("ublk: properly serialize all FETCH_REQs")
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-9-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
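A minimal sketch of the idea (the concrete checks and the use of the
ublk_need_map_io()/ublk_need_get_data() helpers are assumptions for
illustration, not the literal patch): the validation only needs the queue
flags and the addr from the command, so it can run before the mutex is taken.

  static int ublk_check_fetch_buf(const struct ublk_queue *ubq, __u64 buf_addr)
  {
          if (ublk_need_map_io(ubq)) {
                  /* FETCH_REQ must provide an IO buffer unless GET_DATA is used */
                  if (!buf_addr && !ublk_need_get_data(ubq))
                          return -EINVAL;
          } else if (buf_addr) {
                  /* user copy / zero copy do not take a buffer address */
                  return -EINVAL;
          }
          return 0;
  }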
|
|
We can share the space of `io->addr` for storing the auto buffer register
data and the user space buffer address, so store the auto buffer register
data in `struct ublk_io`.
This prepares for supporting batch IO, in which many ublk IOs share a
single uring_cmd, so we can't store the auto buffer register data in the
uring_cmd pdu.
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-8-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
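A rough sketch of the space sharing (field placement is illustrative, not
the actual struct ublk_io layout):

  struct ublk_io {
          /* userspace buffer address from ublksrv_io_cmd.addr */
          union {
                  __u64 addr;
                  /* auto buffer register data, valid when auto buf reg is used */
                  struct ublk_auto_buf_reg buf;
          };
          unsigned int flags;
          int res;
          /* ... remaining fields unchanged ... */
  };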
|
|
Move checking and clearing of UBLK_IO_FLAG_AUTO_BUF_REG into
ublk_handle_auto_buf_reg(), and return the buffer index from this helper.
Also move ublk_set_auto_buf_reg() into this single helper.
Add ublk_config_io_buf() for setting up the ublk io buffer, covering both
ublk buffer copy and auto buffer register.
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-7-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Refactor ublk_commit_and_fetch() in the following way to remove the
`struct ublksrv_io_cmd *` parameter:
- return `struct request *` from ublk_fill_io_cmd(), so that we can
use the request reference reliably, because the request and
io_uring_cmd references share the same storage
- move ublk_fill_io_cmd() before the call to ublk_commit_and_fetch(),
so that ublk_fill_io_cmd() can run with the per-io lock held to
support command batching.
- pass ->zone_append_lba to ublk_commit_and_fetch() directly
The main motivation is to reuse ublk_commit_and_fetch() for fetching
io command batches with multishot uring_cmd.
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-6-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Let ublk_fill_io_cmd() clear UBLK_IO_FLAG_OWNED_BY_SRV too.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-5-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Almost every block driver deals with fake timeout logic around the real
request completion code.
The existing way may also cause a request reference count leak, so move the
logic into __ublk_complete_rq(); then we can skip the completion in the
last step like other drivers do.
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-4-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
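A sketch of the resulting shape (simplified, result handling abbreviated):

  static void __ublk_complete_rq(struct request *req)
  {
          blk_status_t res = BLK_STS_OK;  /* computed from io->res in the real code */

          /* ... map/copy data, translate io->res into res ... */

          /* fake timeout: skip the completion in the last step, like other drivers */
          if (blk_should_fake_timeout(req->q))
                  return;

          blk_mq_end_request(req, res);
  }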
|
|
Look up the ublk process via its pid in the timeout handler, so we can
avoid touching io->task, because touching the task structure is fragile.
It is fine to kill the ublk server process, and this way is simpler.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The ublk server pid (the `tgid` of the process opening the ublk device) is
stored in `ublk_device->ublksrv_tgid`. This `tgid` is then checked against
the `ublksrv_pid` in `ublk_ctrl_start_dev` and `ublk_ctrl_end_recovery`.
This ensures that the correct ublk server pid is stored in the device info.
Fixes: 71f28f3136af ("ublk_drv: add io_uring based userspace block driver")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250713143415.2857561-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add tracepoints for zone write plugging plug and unplug events.
Examples for these events are:
kworker/u10:4-393 [001] d..1. 282.991660: disk_zone_wplug_add_bio: 8,0 zone 16, BIO 8388608 + 128
kworker/0:1H-58 [000] d..1. 283.083294: blk_zone_wplug_bio: 8,0 zone 15, BIO 7864320 + 128
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250715115324.53308-6-johannes.thumshirn@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add a tracepoint for blkdev_zone_mgmt to trace zone management commands
submitted by higher layers like file systems or user space.
An example output for this tracepoint is as follows:
mkfs.btrfs-203 [001] ..... 42.877493: blkdev_zone_mgmt: 8,0 ZRS 5242880 + 0
This example output shows a REQ_OP_ZONE_RESET operation submitted by
mkfs.btrfs.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250715115324.53308-5-johannes.thumshirn@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add a tracepoint in blk_zone_update_request_bio() to trace the bio sector
update on ZONE APPEND completions.
An example for this tracepoint is as follows:
<idle>-0 [001] d.h1. 381.746444: blk_zone_update_request_bio: 259,5 ZAS 131072 () 1048832 + 256 none,0,0 [swapper/1]
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250715115324.53308-4-johannes.thumshirn@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
blk_zone_update_request_bio() does two things: first it checks if the
request to be completed was written via ZONE APPEND and, if so, it updates
the sector to the one the data was actually written to.
This is small enough to be an inline function. But upcoming changes adding
a tracepoint don't work if the function is inlined.
Split the function into two: the first, blk_req_bio_is_zone_append(),
checks if the sector needs to be updated and can stay an inline function.
The second, blk_zone_append_update_request_bio(), does the sector update.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250715115324.53308-3-johannes.thumshirn@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
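A sketch of the split (the exact condition tested by the inline helper is an
assumption here):

  /* stays inline: cheap check on the completion fast path */
  static inline bool blk_req_bio_is_zone_append(struct request *rq,
                                                struct bio *bio)
  {
          return req_op(rq) == REQ_OP_ZONE_APPEND ||
                 bio_flagged(bio, BIO_EMULATES_ZONE_APPEND);
  }

  /* out of line, so a tracepoint can be added in here later */
  void blk_zone_append_update_request_bio(struct request *rq, struct bio *bio)
  {
          /* the written sector is only known once the zone append completes */
          bio->bi_iter.bi_sector = rq->__sector;
  }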
|
|
Add zoned block commands to blk_fill_rwbs:
- ZONE APPEND will be decoded as 'ZA'
- ZONE RESET will be decoded as 'ZR'
- ZONE RESET ALL will be decoded as 'ZRA'
- ZONE FINISH will be decoded as 'ZF'
- ZONE OPEN will be decoded as 'ZO'
- ZONE CLOSE will be decoded as 'ZC'
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/20250715115324.53308-2-johannes.thumshirn@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
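For illustration, the decoding amounts to new cases in blk_fill_rwbs()'s
switch on the request op, roughly as follows (a sketch, not the exact hunk):

          case REQ_OP_ZONE_APPEND:
                  rwbs[i++] = 'Z';
                  rwbs[i++] = 'A';
                  break;
          case REQ_OP_ZONE_RESET:
                  rwbs[i++] = 'Z';
                  rwbs[i++] = 'R';
                  break;
          case REQ_OP_ZONE_RESET_ALL:
                  rwbs[i++] = 'Z';
                  rwbs[i++] = 'R';
                  rwbs[i++] = 'A';
                  break;
          /* REQ_OP_ZONE_FINISH ('ZF'), _OPEN ('ZO') and _CLOSE ('ZC') follow
           * the same pattern */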
|
|
Fix Smatch-detected error:
drivers/block/floppy.c:3569 fd_locked_ioctl() error:
uninitialized symbol 'outparam'.
Smatch may incorrectly warn about uninitialized use of 'outparam'
in fd_locked_ioctl(), even though all _IOC_READ commands guarantee
its initialization. Initialize outparam to NULL to make this explicit
and suppress the false positive.
Signed-off-by: Purva Yeshi <purvayeshi550@gmail.com>
Reviewed-by: Denis Efremov <efremov@linux.com>
Link: https://lore.kernel.org/r/20250713070020.14530-1-purvayeshi550@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Syzbot came up with a reproducer where a loop device block size is
changed underneath a mounted filesystem. This causes a mismatch between
the block device block size and the block size stored in the superblock
causing confusion in various places such as fs/buffer.c. The particular
issue triggered by syzbot was a warning in __getblk_slow() due to the
requested buffer size not matching the block device block size.
Fix the problem by getting an exclusive hold of the loop device to change
its block size. This fails if somebody (such as a filesystem) already has
exclusive ownership of the block device, and thus prevents modifying the
loop device under an exclusive owner which doesn't expect it.
Reported-by: syzbot+01ef7a8da81a975e1ccd@syzkaller.appspotmail.com
Signed-off-by: Jan Kara <jack@suse.cz>
Tested-by: syzbot+01ef7a8da81a975e1ccd@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/20250711163202.19623-2-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Not only do IOVA mappings not need the separate dma_vec tracking, the code
also won't free it and thus leaks the allocations.
Fixes: b8b7570a7ec8 ("nvme-pci: fix dma unmapping when using PRPs and not using the IOVA mapping")
Reported-by: Klara Modin <klarasmodin@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Klara Modin <klarasmodin@gmail.com>
Link: https://lore.kernel.org/r/20250711112250.633269-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
nbd grabs the device lock nbd->config_lock when updating nr_hw_queues, and
this causes the following lock dependency:
-> #2 (&disk->open_mutex){+.+.}-{4:4}:
lock_acquire kernel/locking/lockdep.c:5871 [inline]
lock_acquire+0x1ac/0x448 kernel/locking/lockdep.c:5828
__mutex_lock_common kernel/locking/mutex.c:602 [inline]
__mutex_lock+0x166/0x1292 kernel/locking/mutex.c:747
mutex_lock_nested+0x14/0x1c kernel/locking/mutex.c:799
__del_gendisk+0x132/0xac6 block/genhd.c:706
del_gendisk+0xf6/0x19a block/genhd.c:819
nbd_dev_remove+0x3c/0xf2 drivers/block/nbd.c:268
nbd_dev_remove_work+0x1c/0x26 drivers/block/nbd.c:284
process_one_work+0x96a/0x1f32 kernel/workqueue.c:3238
process_scheduled_works kernel/workqueue.c:3321 [inline]
worker_thread+0x5ce/0xde8 kernel/workqueue.c:3402
kthread+0x39c/0x7d4 kernel/kthread.c:464
ret_from_fork_kernel+0x2a/0xbb2 arch/riscv/kernel/process.c:214
ret_from_fork_kernel_asm+0x16/0x18 arch/riscv/kernel/entry.S:327
-> #1 (&set->update_nr_hwq_lock){++++}-{4:4}:
lock_acquire kernel/locking/lockdep.c:5871 [inline]
lock_acquire+0x1ac/0x448 kernel/locking/lockdep.c:5828
down_write+0x9c/0x19a kernel/locking/rwsem.c:1577
blk_mq_update_nr_hw_queues+0x3e/0xb86 block/blk-mq.c:5041
nbd_start_device+0x140/0xb2c drivers/block/nbd.c:1476
nbd_genl_connect+0xae0/0x1b24 drivers/block/nbd.c:2201
genl_family_rcv_msg_doit+0x206/0x2e6 net/netlink/genetlink.c:1115
genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
genl_rcv_msg+0x514/0x78e net/netlink/genetlink.c:1210
netlink_rcv_skb+0x206/0x3be net/netlink/af_netlink.c:2534
genl_rcv+0x36/0x4c net/netlink/genetlink.c:1219
netlink_unicast_kernel net/netlink/af_netlink.c:1313 [inline]
netlink_unicast+0x4f0/0x82c net/netlink/af_netlink.c:1339
netlink_sendmsg+0x85e/0xdd6 net/netlink/af_netlink.c:1883
sock_sendmsg_nosec net/socket.c:712 [inline]
__sock_sendmsg+0xcc/0x160 net/socket.c:727
____sys_sendmsg+0x63e/0x79c net/socket.c:2566
___sys_sendmsg+0x144/0x1e6 net/socket.c:2620
__sys_sendmsg+0x188/0x246 net/socket.c:2652
__do_sys_sendmsg net/socket.c:2657 [inline]
__se_sys_sendmsg net/socket.c:2655 [inline]
__riscv_sys_sendmsg+0x70/0xa2 net/socket.c:2655
syscall_handler+0x94/0x118 arch/riscv/include/asm/syscall.h:112
do_trap_ecall_u+0x396/0x530 arch/riscv/kernel/traps.c:341
handle_exception+0x146/0x152 arch/riscv/kernel/entry.S:197
-> #0 (&nbd->config_lock){+.+.}-{4:4}:
check_noncircular+0x132/0x146 kernel/locking/lockdep.c:2178
check_prev_add kernel/locking/lockdep.c:3168 [inline]
check_prevs_add kernel/locking/lockdep.c:3287 [inline]
validate_chain kernel/locking/lockdep.c:3911 [inline]
__lock_acquire+0x12b2/0x24ea kernel/locking/lockdep.c:5240
lock_acquire kernel/locking/lockdep.c:5871 [inline]
lock_acquire+0x1ac/0x448 kernel/locking/lockdep.c:5828
__mutex_lock_common kernel/locking/mutex.c:602 [inline]
__mutex_lock+0x166/0x1292 kernel/locking/mutex.c:747
mutex_lock_nested+0x14/0x1c kernel/locking/mutex.c:799
refcount_dec_and_mutex_lock+0x60/0xd8 lib/refcount.c:118
nbd_config_put+0x3a/0x610 drivers/block/nbd.c:1423
nbd_release+0x94/0x15c drivers/block/nbd.c:1735
blkdev_put_whole+0xac/0xee block/bdev.c:721
bdev_release+0x3fe/0x600 block/bdev.c:1144
blkdev_release+0x1a/0x26 block/fops.c:684
__fput+0x382/0xa8c fs/file_table.c:465
____fput+0x1c/0x26 fs/file_table.c:493
task_work_run+0x16a/0x25e kernel/task_work.c:227
resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
exit_to_user_mode_loop+0x118/0x134 kernel/entry/common.c:114
exit_to_user_mode_prepare include/linux/entry-common.h:330 [inline]
syscall_exit_to_user_mode_work include/linux/entry-common.h:414 [inline]
syscall_exit_to_user_mode include/linux/entry-common.h:449 [inline]
do_trap_ecall_u+0x3f0/0x530 arch/riscv/kernel/traps.c:355
handle_exception+0x146/0x152 arch/riscv/kernel/entry.S:197
Also, it isn't necessary to hold nbd->config_lock, because
blk_mq_update_nr_hw_queues() grabs the tagset lock to synchronize everything.
Fix the issue by releasing ->config_lock and retrying in case of a
concurrent update of nr_hw_queues.
Fixes: 98e68f67020c ("block: prevent adding/deleting disk during updating nr_hw_queues")
Reported-by: syzbot+2bcecf3c38cb3e8fdc8d@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/6855034f.a00a0220.137b3.0031.GAE@google.com
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Cc: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Link: https://lore.kernel.org/r/20250709111744.2353050-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
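A rough sketch of the retry approach in nbd_start_device() (control flow and
variable names are illustrative only, not the literal patch):

  unsigned int nr = config->num_connections;

  if (nbd->tag_set.nr_hw_queues != nr) {
  again:
          /* don't hold config_lock across the tagset update */
          mutex_unlock(&nbd->config_lock);
          blk_mq_update_nr_hw_queues(&nbd->tag_set, nr);
          mutex_lock(&nbd->config_lock);

          /* a concurrent connect may have changed the count, retry */
          if (nr != config->num_connections) {
                  nr = config->num_connections;
                  goto again;
          }
  }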
|
|
With `two-primaries` enabled, DRBD tries to detect "concurrent" writes
and handle write conflicts, so that even if you write to the same sector
simultaneously on both nodes, they end up with the identical data once
the writes are completed.
In handling "superseeded" writes, we forgot a kref_get,
resulting in a premature drbd_destroy_device and use after free,
and further to kernel crashes with symptoms.
Relevance: No one should use DRBD as a random data generator, and apparently
all users of "two-primaries" handle concurrent writes correctly on layer up.
That is cluster file systems use some distributed lock manager,
and live migration in virtualization environments stops writes on one node
before starting writes on the other node.
Which means that other than for "test cases",
this code path is never taken in real life.
FYI, in DRBD 9, things are handled differently nowadays. We still detect
"write conflicts", but no longer try to be smart about them.
We decided to disconnect hard instead: upper layers must not submit concurrent
writes. If they do, that's their fault.
Signed-off-by: Sarah Newman <srn@prgmr.com>
Signed-off-by: Lars Ellenberg <lars@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://lore.kernel.org/r/20250627095728.800688-1-christoph.boehmwalder@linbit.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The dma_map_sg() can fail and, in case of failure, returns 0. If it
fails, mtip_hw_submit_io() returns an error.
The dma_unmap_sg() requires the nents parameter to be the same as the
one passed to dma_map_sg(). This patch saves the nents in
command->scatter_ents.
Fixes: 88523a61558a ("block: Add driver for Micron RealSSD pcie flash cards")
Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20250627121123.203731-2-fourier.thomas@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
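A sketch of the shape of the fix (the exact error code returned is an
assumption):

  nents = dma_map_sg(&dd->pdev->dev, command->sg, nents, dma_dir);
  if (!nents)
          return -ENOMEM;                 /* propagate the mapping failure */
  command->scatter_ents = nents;          /* reused later for dma_unmap_sg() */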
|
|
pktcdvd got killed in a previous commit, remove the reference to it as
well in the cdrom documentation.
Fixes: 1cea5180f2f8 ("block: remove pktcdvd driver")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The current version of the blk_rq_dma_map support in nvme-pci tries to
reconstruct the DMA mappings from the on the wire descriptors if they
are needed for unmapping. While this is not the case for the direct
mapping fast path and the IOVA path, it is needed for the non-IOVA slow
path, e.g. when the interconnect is not dma coherent, when using
swiotlb bounce buffering, or with an IOMMU mapping that can't coalesce.
While the reconstruction is easy and works fine for the SGL path, where
the on the wire representation maps 1:1 to DMA mappings, the code to
reconstruct the DMA mapping ranges from PRPs can't always work, as a
given PRP layout can come from different DMA mappings, and the current
code doesn't even always get that right.
Give up on this approach and track the actual DMA mapping for when it is
needed again.
Fixes: 7ce3c1dd78fc ("nvme-pci: convert the data mapping to blk_rq_dma_map")
Reported-by: Ben Copeland <ben.copeland@linaro.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Tested-by: Jens Axboe <axboe@kernel.dk>
Link: https://lore.kernel.org/r/20250707125223.3022531-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
This driver has long outlived its utility, and it's broken and unloved.
The main use case for this was direct mount with UDF of cd-rw drives
that required 32kb packets. It would collect writes into that size and
write them out in multiples of that. That's not a common use case
anymore, the world has moved on from those kinds of media. To make
matters worse, it's actively breaking setups where it's not even
required or useful.
Link: https://lore.kernel.org/linux-block/fxg6dksau4jsk3u5xldlyo2m7qgiux6vtdrz5rywseotsouqdv@urcrwz6qtd3r/
Link: https://lore.kernel.org/linux-block/dcc4836e-6da9-4208-ad27-bbd44b3a2063@kernel.dk/
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
For performance reasons (minimizing the number of cache lines accessed
in the hot path), we store the "canceling" state redundantly - there is
one flag in the device, which can be considered the source of truth, and
per-queue copies of that flag. This redundancy can cause confusion, and
opens the door to bugs where the state is set inconsistently. Try to
guard against these bugs by introducing a ublk_set_canceling helper
which is the sole mutator of both the per-device and per-queue canceling
state. This helper always sets the state consistently. Use the helper in
all places where we need to modify the canceling state.
No functional changes are expected.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250703-ublk_too_many_quiesce-v2-2-3527b5339eeb@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
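A minimal sketch of such a sole mutator (field names as described above,
details assumed):

  static void ublk_set_canceling(struct ublk_device *ub, bool canceling)
  {
          int i;

          /* keep the per-device flag and every per-queue copy in sync */
          ub->canceling = canceling;
          for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
                  ublk_get_queue(ub, i)->canceling = canceling;
  }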
|
|
Recently, we've observed a few cases where a ublk server is able to
complete restart more quickly than the driver can process the exit of
the previous ublk server. The new ublk server comes up, attempts
recovery of the preexisting ublk devices, and observes them still in
state UBLK_S_DEV_LIVE. While this is possible due to the asynchronous
nature of io_uring cleanup and should therefore be handled properly in
the ublk server, it is still preferable to make ublk server exit
handling faster if possible, as we should strive for it to not be a
limiting factor in how fast a ublk server can restart and provide
service again.
Analysis of the issue showed that the vast majority of the time spent in
handling the ublk server exit was in calls to blk_mq_quiesce_queue,
which is essentially just a (relatively expensive) call to
synchronize_rcu. The ublk server exit path currently issues an
unnecessarily large number of calls to blk_mq_quiesce_queue, for two
reasons:
1. It tries to call blk_mq_quiesce_queue once per ublk_queue. However,
blk_mq_quiesce_queue targets the request_queue of the underlying ublk
device, of which there is only one. So the number of calls is larger
than necessary by a factor of nr_hw_queues.
2. In practice, it calls blk_mq_quiesce_queue _more_ than once per
ublk_queue. This is because of a data race where we read
ubq->canceling without any locking when deciding if we should call
ublk_start_cancel. It is thus possible for two calls to
ublk_uring_cmd_cancel_fn against the same ublk_queue to both call
ublk_start_cancel against the same ublk_queue.
Fix this by making the "canceling" flag a per-device state. This
actually matches the existing code better, as there are several places
where the flag is set or cleared for all queues simultaneously, and
there is the general expectation that cancellation corresponds with ublk
server exit. This per-device canceling flag is then checked under a
(new) lock (addressing the data race (2) above), and the queue is only
quiesced if it is cleared (addressing (1) above). The result is just one
call to blk_mq_quiesce_queue per ublk device.
To minimize the number of cache lines that are accessed in the hot path,
the per-queue canceling flag is kept. The values of the per-device
canceling flag and all per-queue canceling flags should always match.
In our setup, where one ublk server handles I/O for 128 ublk devices,
each having 24 hardware queues of depth 4096, here are the results
before and after this patch, where teardown time is measured from the
first call to io_ring_ctx_wait_and_kill to the return from the last
ublk_ch_release:
                                            before     after
  number of calls to blk_mq_quiesce_queue:  6469       256
  teardown time:                             11.14s     2.44s
There are still some potential optimizations here, but this takes care
of a big chunk of the ublk server exit handling delay.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250703-ublk_too_many_quiesce-v2-1-3527b5339eeb@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
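Conceptually, the cancellation path then quiesces the request_queue only once
per device, roughly like this (lock name and exact placement are assumptions):

  mutex_lock(&ub->cancel_mutex);          /* the new per-device lock */
  if (!ub->canceling) {
          /* one quiesce per device instead of one per ublk_queue */
          blk_mq_quiesce_queue(ub->ub_disk->queue);
          ub->canceling = true;           /* per-queue copies get updated too */
          blk_mq_unquiesce_queue(ub->ub_disk->queue);
  }
  mutex_unlock(&ub->cancel_mutex);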
|
|
In most cases zcomp_available_show() is the only emitting
function that is called from sysfs read() handler, so it
assumes that there is a whole PAGE_SIZE buffer to work with.
There is an exception, however: recomp_algorithm_show().
In recomp_algorithm_show() we prepend the buffer with
priority number before we pass it to zcomp_available_show(),
so it cannot assume PAGE_SIZE anymore and must take
recomp_algorithm_show() modifications into consideration.
Therefore we need to pass buffer offset to zcomp_available_show().
Also convert it to use sysfs_emit_at(), to stay aligned
with the rest of zram's sysfs read() handlers.
In practice we are never even close to using the whole PAGE_SIZE
buffer, so that's not a critical bug, but still.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Link: https://lore.kernel.org/r/20250627071840.1394242-1-senozhatsky@chromium.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Replace scnprintf() with sysfs_emit() or sysfs_emit_at() in sysfs
*_show() functions in zram_drv.c to follow the kernel's guidelines
from Documentation/filesystems/sysfs.rst.
This improves consistency, safety, and makes the code easier to
maintain and update in the future.
Signed-off-by: Rahul Kumar <rk0006818@gmail.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Link: https://lore.kernel.org/r/20250627035256.1120740-1-rk0006818@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
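The conversion is mechanical; a typical *_show() handler changes along these
lines (illustrative before/after, not a specific hunk from the patch):

  -       return scnprintf(buf, PAGE_SIZE, "%llu\n", (u64)val);
  +       return sysfs_emit(buf, "%llu\n", (u64)val);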
|
|
Retrieve a folio from the page cache instead of a page. Removes a hidden
call to compound_head(). Then be sure to call folio_put() instead of
put_page() to release it. That doesn't save any calls to
compound_head(), just moves them around.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Coly Li <colyli@kernel.org>
Acked-by: Coly Li <colyli@kernel.org>
Link: https://lore.kernel.org/r/20250702024848.343370-1-colyli@kernel.org
[axboe: commit message massaging]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
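The shape of the conversion, roughly (illustrative; the actual bcache call
site differs in its details):

  -       page = read_cache_page_gfp(mapping, 0, GFP_KERNEL);
  +       folio = mapping_read_folio_gfp(mapping, 0, GFP_KERNEL);
          /* ... */
  -       put_page(page);
  +       folio_put(folio);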
|
|
The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered.
To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-5-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The calculation of the upper limit for queues does not depend solely on
the number of online CPUs; for example, the isolcpus kernel
command-line option must also be considered.
To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-4-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered.
To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-3-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Add two variants of helper functions that calculate the correct number
of queues to use. Two variants are needed because some drivers base
their maximum number of queues on the possible CPU mask, while others
use the online CPU mask.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-2-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
group_cpu_evenly() might have allocated fewer groups than requested:
group_cpu_evenly()
__group_cpus_evenly()
alloc_nodes_groups()
# allocated total groups may be less than numgrps when
# active total CPU number is less than numgrps
In this case, the caller will do an out-of-bounds access because the
caller assumes that numgrps masks were returned.
Return the number of groups created so the caller can limit the access
range accordingly.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250617-isolcpus-queue-counters-v1-1-13923686b54b@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
struct ublk_io is already 56 bytes on 64-bit architectures, so round it
up to a full cache line (typically 64 bytes). This ensures a single
ublk_io doesn't span multiple cache lines and prevents false sharing if
consecutive ublk_io's are accessed by different daemon tasks.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-15-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
ublk_get_req_ref() and ublk_put_req_ref() currently call
ublk_need_req_ref(ubq) to check whether the ublk device features require
reference counting of its requests. However, all callers already know
that reference counting is required:
- __ublk_check_and_get_req() is only called from
ublk_check_and_get_req() if user copy is enabled, and from
ublk_register_io_buf() if zero copy is enabled
- ublk_io_release() is only called for requests registered by
ublk_register_io_buf(), which requires zero copy
- ublk_ch_read_iter() and ublk_ch_write_iter() only call
ublk_put_req_ref() if ublk_check_and_get_req() succeeded, which
requires user copy to be enabled
So drop the ublk_need_req_ref() check and the ubq argument in
ublk_get_req_ref() and ublk_put_req_ref().
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-14-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
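After the change, the helpers reduce to something like the following sketch
(no ubq argument, no feature check):

  static inline void ublk_get_req_ref(struct ublk_io *io)
  {
          refcount_inc(&io->ref);
  }

  static inline void ublk_put_req_ref(struct ublk_io *io, struct request *req)
  {
          if (refcount_dec_and_test(&io->ref))
                  __ublk_complete_rq(req);
  }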
|
|
ublk_io_release() performs an expensive atomic refcount decrement. This
atomic operation is unnecessary in the common case where the request's
buffer is registered and unregistered on the daemon task before handling
UBLK_IO_COMMIT_AND_FETCH_REQ for the I/O. So if ublk_io_release() is
called on the daemon task and task_registered_buffers is positive, just
decrement task_registered_buffers (nonatomically). ublk_sub_req_ref()
will apply this decrement when it atomically subtracts from io->ref.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-13-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
ublk_register_io_buf() performs an expensive atomic refcount increment,
as well as a lot of pointer chasing to look up the struct request.
Create a separate ublk_daemon_register_io_buf() for the daemon task to
call. Initialize ublk_io's reference count to a large number, introduce
a field task_registered_buffers to count the buffers registered on the
daemon task, and atomically subtract the large number minus
task_registered_buffers in ublk_commit_and_fetch().
Also obtain the struct request directly from ublk_io's req field instead
of looking it up on the tagset.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-12-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
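The accounting idea, sketched (the constant name and its value are
illustrative, not the patch's):

  #define UBLK_REQ_REF_INIT       (1U << 20)      /* the "large number" */

  /* at dispatch */
  refcount_set(&io->ref, UBLK_REQ_REF_INIT);
  io->task_registered_buffers = 0;

  /* daemon-task buffer registration: plain increment, no atomic needed */
  io->task_registered_buffers++;

  /* in ublk_commit_and_fetch(): drop the provisional references, keeping one
   * per buffer still registered on the daemon task */
  if (refcount_sub_and_test(UBLK_REQ_REF_INIT - io->task_registered_buffers,
                            &io->ref))
          __ublk_complete_rq(io->req);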
|
|
Make the unlikely blk_should_fake_timeout() case return early to reduce
the indentation of the successful path.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-11-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Currently, UBLK_IO_REGISTER_IO_BUF and UBLK_IO_UNREGISTER_IO_BUF are
only permitted on the ublk_io's daemon task. But this restriction is
unnecessary. ublk_register_io_buf() calls __ublk_check_and_get_req() to
look up the request from the tagset and atomically take a reference on
the request without accessing the ublk_io. ublk_unregister_io_buf()
doesn't use the q_id or tag at all.
So allow these opcodes even on tasks other than io->task.
Handle UBLK_IO_UNREGISTER_IO_BUF before obtaining the ubq and io since
the buffer index being unregistered is not necessarily related to the
specified q_id and tag.
Add a feature flag UBLK_F_BUF_REG_OFF_DAEMON that userspace can use to
determine whether the kernel supports off-daemon buffer registration.
Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-10-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
UBLK_IO_UNREGISTER_IO_BUF currently requires a valid q_id and tag to be
passed in the ublksrv_io_cmd. However, only the addr (registered buffer
index) is actually used to unregister the buffer. There is no check that
the q_id and tag are for the ublk request whose buffer is registered at
the given index. To prepare to allow userspace to omit the q_id and tag,
check the UBLK_F_SUPPORT_ZERO_COPY flag on the ublk_device instead of
the ublk_queue.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-9-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
UBLK_IO_FLAG_ACTIVE and UBLK_IO_FLAG_OWNED_BY_SRV are mutually
exclusive. So just check that UBLK_IO_FLAG_OWNED_BY_SRV is set in
__ublk_ch_uring_cmd(); that implies UBLK_IO_FLAG_ACTIVE is unset.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250620151008.3976463-7-csander@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|