path: root/drivers/vhost
Age | Commit message | Author
4 days | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio updates from Michael Tsirkin:

 - A new virtio RTC driver

 - vhost scsi now logs write descriptors so migration works

 - Some hardening work in virtio core

 - An old spec compliance issue fixed in vhost net

 - A couple of cleanups, fixes in vringh, virtio-pci, vdpa

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  virtio: reject shm region if length is zero
  virtio_rtc: Add RTC class driver
  virtio_rtc: Add Arm Generic Timer cross-timestamping
  virtio_rtc: Add PTP clocks
  virtio_rtc: Add module and driver core
  vringh: use bvec_kmap_local
  vhost: vringh: Use matching allocation type in resize_iovec()
  virtio-pci: Fix result size returned for the admin command completion
  vdpa/octeon_ep: Control PCI dev enabling manually
  vhost-scsi: log event queue write descriptors
  vhost-scsi: log control queue write descriptors
  vhost-scsi: log I/O queue write descriptors
  vhost-scsi: adjust vhost_scsi_get_desc() to log vring descriptors
  vhost: modify vhost_log_write() for broader users
6 days | vringh: use bvec_kmap_local | Christoph Hellwig
Use the bvec_kmap_local helper rather than digging into the bvec internals. Signed-off-by: Christoph Hellwig <hch@lst.de> Message-Id: <20250501142244.2888227-1-hch@lst.de> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
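A minimal sketch of what the helper replaces, assuming a simple copy-out path; the function below is illustrative, not part of the vringh change:

  #include <linux/bvec.h>
  #include <linux/highmem.h>
  #include <linux/string.h>

  /* Copy 'len' bytes out of a bio_vec without touching its fields directly. */
  static void copy_from_bvec(void *dst, struct bio_vec *bv, size_t len)
  {
      /* bvec_kmap_local() maps bv->bv_page and applies bv->bv_offset,
       * replacing open-coded kmap_local_page() + offset arithmetic. */
      void *src = bvec_kmap_local(bv);

      memcpy(dst, src, len);
      kunmap_local(src);
  }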
6 days | vhost: vringh: Use matching allocation type in resize_iovec() | Kees Cook
In preparation for making the kmalloc family of allocators type aware, we need to make sure that the returned type from the allocation matches the type of the variable being assigned. (Before, the allocator would always return "void *", which can be implicitly cast to any pointer type.)

The assigned type is "struct kvec *", but the returned type will be "struct iovec *". These have the same allocation size, so there is no bug:

  struct kvec {
      void *iov_base; /* and that should *never* hold a userland pointer */
      size_t iov_len;
  };

  struct iovec {
      void __user *iov_base;   /* BSD uses caddr_t (1003.1g requires void *) */
      __kernel_size_t iov_len; /* Must be size_t (1003.1g) */
  };

Adjust the allocation type to match the assignment.

Signed-off-by: Kees Cook <kees@kernel.org>
Message-Id: <20250426062214.work.334-kees@kernel.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
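A hedged sketch of the type-matching idea; the real change is in resize_iovec() in drivers/vhost/vringh.c, and the helper below is only illustrative:

  #include <linux/slab.h>
  #include <linux/uio.h>

  /* Illustrative only: grow an array that is declared as struct kvec *. */
  static struct kvec *grow_kvec_array(size_t new_num, gfp_t gfp)
  {
      struct kvec *iov;

      /* before: kmalloc_array(new_num, sizeof(struct iovec), gfp) --
       * same size, but the wrong type once the allocator is type aware */
      iov = kmalloc_array(new_num, sizeof(*iov), gfp);
      return iov;
  }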
2025-05-18 | vhost-scsi: log event queue write descriptors | Dongli Zhang
Log write descriptors for the event queue, leveraging vhost_get_vq_desc() to retrieve the array of write descriptors to obtain the log buffer. There is only one path for the event queue. Suggested-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Reviewed-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20250403063028.16045-9-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-05-18 | vhost-scsi: log control queue write descriptors | Dongli Zhang
Log write descriptors for the control queue, leveraging vhost_scsi_get_desc() and vhost_get_vq_desc() to retrieve the array of write descriptors to obtain the log buffer. For Task Management Requests, similar to the I/O queue, store the log buffer during the submission path and log it in the completion or error handling path. For Asynchronous Notifications, only the submission path is involved. Suggested-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Message-Id: <20250403063028.16045-8-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Mike Christie <michael.christie@oracle.com>
2025-05-18 | vhost-scsi: log I/O queue write descriptors | Dongli Zhang
Log write descriptors for the I/O queue, leveraging vhost_scsi_get_desc() and vhost_get_vq_desc() to retrieve the array of write descriptors to obtain the log buffer. In addition, introduce a vhost-scsi specific function to log vring descriptors. In this function, the 'partial' argument is set to false, and the 'len' argument is set to 0, because vhost-scsi always logs all pages shared by a vring descriptor. Add WARN_ON_ONCE() since vhost-scsi doesn't support VIRTIO_F_ACCESS_PLATFORM. The per-cmd log buffer is allocated on demand in the submission path after VHOST_F_LOG_ALL is set. Return -ENOMEM on allocation failure, in order to send SAM_STAT_TASK_SET_FULL to the guest. It isn't reclaimed in the completion path. Instead, it is reclaimed when VHOST_F_LOG_ALL is removed, or during VHOST_SCSI_SET_ENDPOINT when all commands are destroyed. Store the log buffer during the submission path and log it in the completion path. Logging is also required in the error handling path of the submission process. Suggested-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Message-Id: <20250403063028.16045-7-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Mike Christie <michael.christie@oracle.com>
2025-05-18 | vhost-scsi: adjust vhost_scsi_get_desc() to log vring descriptors | Dongli Zhang
Adjust vhost_scsi_get_desc() to facilitate logging of vring descriptors. Add new arguments to allow passing the log buffer and length to vhost_get_vq_desc(). In addition, reset 'log_num' since vhost_get_vq_desc() may reset it only after certain condition checks. Suggested-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Reviewed-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20250403063028.16045-6-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-05-18 | vhost: modify vhost_log_write() for broader users | Dongli Zhang
Currently, the only user of vhost_log_write() is vhost-net. The 'len' argument prevents logging of pages that are not tainted by the RX path. Adjustments are needed since more drivers (e.g. vhost-scsi) are beginning to use vhost_log_write(). So far the vhost-net RX path may only partially use pages shared via the last vring descriptor. Unlike vhost-net, vhost-scsi always logs all pages shared via vring descriptors. To accommodate this, use (len == U64_MAX) to indicate whether the driver will log all pages of the vring descriptors, or only the pages that are tainted by the driver. In addition, remove BUG(). Suggested-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Message-Id: <20250403063028.16045-5-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
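A hedged sketch of the calling convention described above; the exact vhost_log_write() prototype should be checked against the tree, and the wrapper function here is purely illustrative (the trailing iov/count arguments are shown as NULL/0, i.e. the non-zerocopy case):

  static void log_descriptors_sketch(struct vhost_virtqueue *vq,
                                     struct vhost_log *log,
                                     unsigned int log_num, u64 net_rx_len)
  {
      /* vhost-net RX: only net_rx_len bytes were written, so only the
       * pages covering those bytes are marked dirty */
      vhost_log_write(vq, log, log_num, net_rx_len, NULL, 0);

      /* vhost-scsi: every page covered by the logged write descriptors
       * may have been written; len == U64_MAX requests logging them all */
      vhost_log_write(vq, log, log_num, U64_MAX, NULL, 0);
  }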
2025-05-05 | vhost/net: Defer TX queue re-enable until after sendmsg | Jon Kohler
In handle_tx_copy, TX batching processes packets below ~PAGE_SIZE and batches up to 64 messages before calling sock->sendmsg.

Currently, when there are no more messages on the ring to dequeue, handle_tx_copy re-enables kicks on the ring *before* firing off the batch sendmsg. However, sock->sendmsg incurs a non-zero delay, especially if it needs to wake up a thread (e.g., another vhost worker).

If the guest submits additional messages immediately after the last ring check and disablement, it triggers an EPT_MISCONFIG vmexit to attempt to kick the vhost worker. This may happen while the worker is still processing the sendmsg, leading to wasteful exit(s).

This is particularly problematic for single-threaded guest submission threads, as they must exit, wait for the exit to be processed (potentially involving a TTWU), and then resume. In scenarios like a constant stream of UDP messages, this results in a sawtooth pattern where the submitter frequently vmexits, and the vhost-net worker alternates between sleeping and waking.

A common solution is to configure vhost-net busy polling via userspace (e.g., qemu poll-us). However, treating the sendmsg as the "busy" period by keeping kicks disabled during the final sendmsg and performing one additional ring check afterward provides a significant performance improvement without any excess busy poll cycles.

If messages are found in the ring after the final sendmsg, requeue the TX handler. This ensures fairness for the RX handler and allows vhost_run_work_list to cond_resched() as needed.

Test Case
  TX VM: taskset -c 2 iperf3 -c rx-ip-here -t 60 -p 5200 -b 0 -u -i 5
  RX VM: taskset -c 2 iperf3 -s -p 5200 -D
  6.12.0, each worker backed by tun interface with IFF_NAPI setup.
  Note: TCP side is largely unchanged as that was copy bound

6.12.0 unpatched
  EPT_MISCONFIG/second: 5411
  Datagrams/second: ~382k
  Interval        Transfer     Bitrate         Lost/Total Datagrams
  0.00-30.00 sec  15.5 GBytes  4.43 Gbits/sec  0/11481630 (0%)  sender

6.12.0 patched
  EPT_MISCONFIG/second: 58 (~93x reduction)
  Datagrams/second: ~650k (~1.7x increase)
  Interval        Transfer     Bitrate         Lost/Total Datagrams
  0.00-30.00 sec  26.4 GBytes  7.55 Gbits/sec  0/19554720 (0%)  sender

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://patch.msgid.link/20250501020428.1889162-1-jon@nutanix.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
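A simplified, hedged sketch of the reordering; get_next_tx_msg(), batch_full() and ring_has_more() are hypothetical stand-ins for the real handle_tx_copy() logic, and the variables are assumed from context:

  for (;;) {
      if (!get_next_tx_msg(vq, &msg)) {
          /* old behaviour: vhost_enable_notify() here, before the final
           * sendmsg, inviting an extra guest kick/vmexit */
          break;
      }
      if (batch_full(nvq))
          sock->ops->sendmsg(sock, &msg, len);
  }

  /* flush the final batch with guest kicks still disabled */
  sock->ops->sendmsg(sock, &msg, len);

  if (ring_has_more(vq))
      vhost_poll_queue(&vq->poll);         /* requeue TX; keeps RX fair */
  else
      vhost_enable_notify(&net->dev, vq);  /* only now re-enable kicks */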
2025-04-18 | vhost-scsi: Fix vhost_scsi_send_status() | Dongli Zhang
Although the support of VIRTIO_F_ANY_LAYOUT + VIRTIO_F_VERSION_1 was signaled by the commit 664ed90e621c ("vhost/scsi: Set VIRTIO_F_ANY_LAYOUT + VIRTIO_F_VERSION_1 feature bits"), vhost_scsi_send_status() still assumes the response fits in a single descriptor. A similar issue in vhost_scsi_send_bad_target() has been fixed in the previous commit. In addition, a similar issue for vhost_scsi_complete_cmd_work() has been fixed by the commit 6dd88fd59da8 ("vhost-scsi: unbreak any layout for response"). Fixes: 3ca51662f818 ("vhost-scsi: Add better resource allocation failure handling") Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20250403063028.16045-4-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-04-18 | vhost-scsi: Fix vhost_scsi_send_bad_target() | Dongli Zhang
Although the support of VIRTIO_F_ANY_LAYOUT + VIRTIO_F_VERSION_1 was signaled by the commit 664ed90e621c ("vhost/scsi: Set VIRTIO_F_ANY_LAYOUT + VIRTIO_F_VERSION_1 feature bits"), vhost_scsi_send_bad_target() still assumes the response fits in a single descriptor. In addition, although vhost_scsi_send_bad_target() is used by both the I/O queue and the control queue, the response header is always virtio_scsi_cmd_resp. It is required to use virtio_scsi_ctrl_tmf_resp or virtio_scsi_ctrl_an_resp for the control queue. Fixes: 664ed90e621c ("vhost/scsi: Set VIRTIO_F_ANY_LAYOUT + VIRTIO_F_VERSION_1 feature bits") Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com> Acked-by: Jason Wang <jasowang@redhat.com> Reviewed-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20250403063028.16045-3-dongli.zhang@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-04-18 | vhost-scsi: protect vq->log_used with vq->mutex | Dongli Zhang
The vhost-scsi completion path may access vq->log_base when vq->log_used is already set to false.

  vhost-thread                        QEMU-thread

  vhost_scsi_complete_cmd_work()
  -> vhost_add_used()
     -> vhost_add_used_n()
        if (unlikely(vq->log_used))
                                      QEMU disables vq->log_used
                                      via VHOST_SET_VRING_ADDR.
                                      mutex_lock(&vq->mutex);
                                      vq->log_used = false now!
                                      mutex_unlock(&vq->mutex);

                                      QEMU gfree(vq->log_base)
        log_used()
        -> log_write(vq->log_base)

Assuming the VMM is QEMU. The vq->log_base is from QEMU userspace and can be reclaimed via gfree(). As a result, this causes invalid memory writes to QEMU userspace.

The control queue path has the same issue.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20250403063028.16045-2-dongli.zhang@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
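The shape of the fix, sketched from the description above rather than the literal diff:

  /* Completion work only consults log_used / log_base while holding
   * vq->mutex, so VHOST_SET_VRING_ADDR (which takes the same mutex to
   * clear log_used) cannot race with the dirty-log write. */
  mutex_lock(&vq->mutex);
  vhost_add_used(vq, head, 0);   /* logs the used ring if vq->log_used */
  mutex_unlock(&vq->mutex);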
2025-04-01 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio updates from Michael Tsirkin: "A small number of improvements all over the place: - shutdown has been reworked to reset devices - virtio fs is now allowed in vduse - vhost-scsi memory use has been reduced - cleanups, fixes all over the place A couple more fixes are being tested and will be merged after rc1" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: vhost-scsi: Reduce response iov mem use vhost-scsi: Allocate iov_iter used for unaligned copies when needed vhost-scsi: Stop duplicating se_cmd fields vhost-scsi: Dynamically allocate scatterlists vhost-scsi: Return queue full for page alloc failures during copy vhost-scsi: Add better resource allocation failure handling vhost-scsi: Allocate T10 PI structs only when enabled vhost-scsi: Reduce mem use by moving upages to per queue vduse: add virtio_fs to allowed dev id sound/virtio: Fix cancel_sync warnings on uninitialized work_structs vdpa/mlx5: Fix oversized null mkey longer than 32bit vdpa/mlx5: Fix mlx5_vdpa_get_config() endianness on big-endian machines vhost-scsi: Fix handling of multiple calls to vhost_scsi_set_endpoint tools: virtio/linux/module.h add MODULE_DESCRIPTION() define. tools: virtio/linux/compiler.h: Add data_race() define. tools/virtio: Add DMA_MAPPING_ERROR and sg_dma_len api define for virtio test virtio: break and reset virtio devices on device_shutdown()
2025-03-01 | vhost: return task creation error instead of NULL | Keith Busch
Lets callers distinguish why the vhost task creation failed. No one currently cares why it failed, so no real runtime change from this patch, but that will not be the case for long. Signed-off-by: Keith Busch <kbusch@kernel.org> Message-ID: <20250227230631.303431-2-kbusch@meta.com> Reviewed-by: Mike Christie <michael.christie@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-25 | vhost-scsi: Reduce response iov mem use | Mike Christie
We have to save N iov entries to copy the virtio_scsi_cmd_resp struct back to the guest's buffer. The difficulty is that we can't assume the virtio_scsi_cmd_resp will be in 1 iov because older virtio specs allowed you to break it up.

The worst case is that the guest was doing something like breaking up the 108-byte virtio_scsi_cmd_resp struct into 108 single-byte vecs like:

  iov[0].iov_base = ((unsigned char *)virtio_scsi_cmd_resp)[0]
  iov[0].iov_len = 1
  iov[1].iov_base = ((unsigned char *)virtio_scsi_cmd_resp)[1]
  iov[1].iov_len = 1
  ....
  iov[107].iov_base = ((unsigned char *)virtio_scsi_cmd_resp)[107]
  iov[107].iov_len = 1

Right now we allocate UIO_MAXIOV vecs, which is 1024, and so for a small device with just 1 queue and 128 commands per queue, we are wasting

  1.8 MB = (1024 current entries - 108) * 16 bytes per entry * 128 cmds

The most common case is going to be where the initiator puts the entire virtio_scsi_cmd_resp in the first iov and does not split it. We've always done it this way for Linux and the windows driver looks like it's always done the same. It's highly unlikely anyone has ever split the response, and if they did it might just be where they have the sense in a second iov, but that doesn't seem likely as well.

So to optimize for the common implementation, this has us only pre-allocate the single iovec. If we do hit the split up response case, this has us allocate the needed iovecs when needed.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20241203191705.19431-9-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
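A hedged sketch of the common-case optimization; the field names are illustrative, not the actual struct vhost_scsi_cmd layout:

  /* One iovec is embedded for the usual "response in a single iov" case;
   * a heap array is only allocated when the guest actually split it. */
  if (resp_iov_cnt == 1) {
      cmd->resp_iovs = &cmd->resp_iov_inline;      /* no allocation */
  } else {
      cmd->resp_iovs = kcalloc(resp_iov_cnt, sizeof(*cmd->resp_iovs),
                               GFP_KERNEL);
      if (!cmd->resp_iovs)
          return -ENOMEM;                          /* treated as queue full */
  }
  cmd->resp_iov_cnt = resp_iov_cnt;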
2025-02-25 | vhost-scsi: Allocate iov_iter used for unaligned copies when needed | Mike Christie
It's extremely rare that we get unaligned requests that need to drop down to the data copy code path. However, the iov_iter is almost 5% of the mem used for the vhost_scsi_cmd. This patch has us allocate the iov_iter only when needed since it's not a perf path that uses the struct. This along with the patches that removed the duplicated fields on the vhost_scsi_cmd allow us to reduce mem use by 1 MB in mid size setups where we have 16 virtqueues and are doing 1024 cmds per queue. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-8-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Stop duplicating se_cmd fields | Mike Christie
When setting up the command we will initially set values like lun and data direction on the vhost scsi command. We then pass them to LIO which stores them again on the LIO se_cmd. The se_cmd is actually stored in the vhost scsi command so we are storing these values twice on the same struct. So this patch has us stop duplicating the storage of SCSI values like lun, data dir, data len, cdb, etc on the vhost scsi command and just pass them to LIO which will store them on the se_cmd. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-7-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Dynamically allocate scatterlists | Mike Christie
We currently preallocate scatterlists which have 2048 entries for each command. For a small device with just 1 queue this results in: 8 MB = 32 bytes per sg * 2048 entries * 128 cmds. When mq is turned on and we increase the virtqueue_size so we can handle commands from multiple queues in parallel, this can skyrocket. This patch allows us to dynamically allocate the scatterlist as is done with drivers like NVMe and SCSI. For small IO (4-16K) IOPs testing, we didn't see any regressions, but for throughput testing we sometimes saw a 2-5% regression when the backend device was very fast (8 NVMe drives in a MD RAID0 config or a memory backed device). As a result this patch makes the dynamic allocation feature a modparam so userspace can decide how it wants to balance mem use and perf. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-6-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Return queue full for page alloc failures during copy | Mike Christie
This has us return queue full if we can't allocate a page during the copy operation so the initiator can retry. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-5-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Add better resource allocation failure handling | Mike Christie
If we can't allocate mem to map in data for a request or can't find a tag for a command, we currently drop the command. This leads to the error handler running to clean it up. Instead of dropping the command this has us return an error telling the initiator that it queued more commands than we can handle. The initiator will then reduce how many commands it will send us and retry later. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-4-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Allocate T10 PI structs only when enabled | Mike Christie
T10 PI is not a widely used feature. This has us only allocate the structs for it if the feature has been enabled. For a common small setup where you have 1 virtqueue and 128 commands per queue, this saves: 8MB = 32 bytes per sg * 2048 entries * 128 commands per vhost-scsi device. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-3-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Reduce mem use by moving upages to per queue | Mike Christie
Each worker thread can process 1 command at a time, so there's no need to allocate a upages array per cmd. This patch moves it to per queue. Even for a small device with 128 cmds and 1 queue, this brings mem use for the array down from 2 MB (8 bytes per page pointer * 2048 pointers * 128 cmds) to 16K (8 bytes per pointer * 2048 pointers * 1 queue). Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20241203191705.19431-2-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
2025-02-25 | vhost-scsi: Fix handling of multiple calls to vhost_scsi_set_endpoint | Mike Christie
If vhost_scsi_set_endpoint is called multiple times without a vhost_scsi_clear_endpoint between them, we can hit multiple bugs found by Haoran Zhang:

1. Use-after-free when no tpgs are found:

This fixes a use after free that occurs when vhost_scsi_set_endpoint is called more than once and calls after the first call do not find any tpgs to add to the vs_tpg. When vhost_scsi_set_endpoint first finds tpgs to add to the vs_tpg array, match=true, so we will do:

  vhost_vq_set_backend(vq, vs_tpg);
  ...
  kfree(vs->vs_tpg);
  vs->vs_tpg = vs_tpg;

If vhost_scsi_set_endpoint is called again and no tpgs are found, match=false, so we skip the vhost_vq_set_backend call, leaving the pointer to the vs_tpg we then free via:

  kfree(vs->vs_tpg);
  vs->vs_tpg = vs_tpg;

If a scsi request is then sent we do:

  vhost_scsi_handle_vq -> vhost_scsi_get_req -> vhost_vq_get_backend

which sees the vs_tpg we just did a kfree on.

2. Tpg dir removal hang:

This patch fixes an issue where we cannot remove a LIO/target layer tpg (and structs above it like the target) dir due to the refcount dropping to -1.

The problem is that if vhost_scsi_set_endpoint detects a tpg is already in the vs->vs_tpg array, or if the tpg has been removed so target_depend_item fails, the undepend goto handler will do target_undepend_item on all tpgs in the vs_tpg array, dropping their refcount to 0. At this time vs_tpg contains both the tpgs we have added in the current vhost_scsi_set_endpoint call as well as tpgs we added in previous calls which are also in vs->vs_tpg.

Later, when vhost_scsi_clear_endpoint runs, it will do target_undepend_item on all the tpgs in the vs->vs_tpg, which will drop their refcount to -1. Userspace will then not be able to remove the tpg and will hang when it tries to do rmdir on the tpg dir.

3. Tpg leak:

This fixes a bug where we can leak tpgs and cause them to be un-removable because the target name is overwritten when vhost_scsi_set_endpoint is called multiple times but with different target names.

The bug occurs if a user has called VHOST_SCSI_SET_ENDPOINT and setup a vhost-scsi device to target/tpg mapping, then calls VHOST_SCSI_SET_ENDPOINT again with a new target name that has tpgs we haven't seen before (target1 has tpg1 but target2 has tpg2). When this happens we don't teardown the old target tpg mapping and just overwrite the target name and the vs->vs_tpg array. Later when we do vhost_scsi_clear_endpoint, we are passed in either target1 or target2's name and we will only match that target's tpgs when we loop over the vs->vs_tpg. We will then return from the function without doing target_undepend_item on the tpgs.

Because of all these bugs, it looks like being able to call vhost_scsi_set_endpoint multiple times was never supported. The major user, QEMU, already has checks to prevent this use case. So to fix the issues, this patch prevents vhost_scsi_set_endpoint from being called if it has already successfully added tpgs. To add, remove or change the tpg config or target name, you must do a vhost_scsi_clear_endpoint first.

Fixes: 25b98b64e284 ("vhost scsi: alloc cmds per vq instead of session")
Fixes: 4f7f46d32c98 ("tcm_vhost: Use vq->private_data to indicate if the endpoint is setup")
Reported-by: Haoran Zhang <wh1sper@zju.edu.cn>
Closes: https://lore.kernel.org/virtualization/e418a5ee-45ca-4d18-9b5d-6f8b6b1add8e@oracle.com/T/#me6c0041ce376677419b9b2563494172a01487ecb
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20250129210922.121533-1-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Stefano Garzarella <sgarzare@redhat.com>
2025-01-27 | vhost/net: Set num_buffers for virtio 1.0 | Akihiko Odaki
The specification says the device MUST set num_buffers to 1 if VIRTIO_NET_F_MRG_RXBUF has not been negotiated. Fixes: 41e3e42108bc ("vhost/net: enable virtio 1.0") Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com> Message-Id: <20240915-v1-v1-1-f10d2cb5e759@daynix.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
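A hedged sketch of the requirement; the real vhost-net change writes the header into the guest buffer, this only shows the field being set:

  struct virtio_net_hdr_mrg_rxbuf hdr = {};

  /* Without VIRTIO_NET_F_MRG_RXBUF (but with VIRTIO_F_VERSION_1) the
   * header still carries num_buffers, and the spec requires it to be 1. */
  if (!vhost_has_feature(vq, VIRTIO_NET_F_MRG_RXBUF))
      hdr.num_buffers = cpu_to_vhost16(vq, 1);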
2024-11-11 | mm: page_frag: avoid caller accessing 'page_frag_cache' directly | Yunsheng Lin
Use appropriate frag_page API instead of caller accessing 'page_frag_cache' directly. CC: Andrew Morton <akpm@linux-foundation.org> CC: Linux-MM <linux-mm@kvack.org> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Acked-by: Chuck Lever <chuck.lever@oracle.com> Link: https://patch.msgid.link/20241028115343.3405838-5-linyunsheng@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-07 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio fixes from Michael Tsirkin:
 "Several small bugfixes all over the place.

  Most notably, fixes the vsock allocation with GFP_KERNEL in atomic
  context, which has been triggering warnings for lots of testers"

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  vhost/scsi: null-ptr-dereference in vhost_scsi_get_req()
  vsock/virtio: use GFP_ATOMIC under RCU read lock
  virtio_console: fix misc probe bugs
  virtio_ring: tag event_triggered as racy for KCSAN
  vdpa/octeon_ep: Fix format specifier for pointers in debug messages
2024-10-07 | vhost/scsi: null-ptr-dereference in vhost_scsi_get_req() | Haoran Zhang
Since commit 3f8ca2e115e5 ("vhost/scsi: Extract common handling code from control queue handler") a null pointer dereference bug can be triggered when the guest sends a SCSI AN request.

In vhost_scsi_ctl_handle_vq(), `vc.target` is assigned with `&v_req.tmf.lun[1]` within a switch-case block and is then passed to vhost_scsi_get_req() which extracts `vc->req` and `tpg`. However, for a `VIRTIO_SCSI_T_AN_*` request, tpg is not required, so `vc.target` is set to NULL in this branch. Later, in vhost_scsi_get_req(), `vc->target` is dereferenced without being checked, leading to a null pointer dereference bug. This bug can be triggered from the guest.

When this bug occurs, the vhost_worker process is killed while holding `vq->mutex` and the corresponding tpg will remain occupied indefinitely.

Below is the KASAN report:

  Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN NOPTI
  KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
  CPU: 1 PID: 840 Comm: poc Not tainted 6.10.0+ #1
  Hardware name: QEMU Ubuntu 24.04 PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
  RIP: 0010:vhost_scsi_get_req+0x165/0x3a0
  Code: 00 fc ff df 48 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 2b 02 00 00 48 b8 00 00 00 00 00 fc ff df 4d 8b 65 30 4c 89 e2 48 c1 ea 03 <0f> b6 04 02 4c 89 e2 83 e2 07 38 d0 7f 08 84 c0 0f 85 be 01 00 00
  RSP: 0018:ffff888017affb50 EFLAGS: 00010246
  RAX: dffffc0000000000 RBX: ffff88801b000000 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff888017affcb8
  RBP: ffff888017affb80 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: ffff888017affc88 R14: ffff888017affd1c R15: ffff888017993000
  FS:  000055556e076500(0000) GS:ffff88806b100000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00000000200027c0 CR3: 0000000010ed0004 CR4: 0000000000370ef0
  Call Trace:
   <TASK>
   ? show_regs+0x86/0xa0
   ? die_addr+0x4b/0xd0
   ? exc_general_protection+0x163/0x260
   ? asm_exc_general_protection+0x27/0x30
   ? vhost_scsi_get_req+0x165/0x3a0
   vhost_scsi_ctl_handle_vq+0x2a4/0xca0
   ? __pfx_vhost_scsi_ctl_handle_vq+0x10/0x10
   ? __switch_to+0x721/0xeb0
   ? __schedule+0xda5/0x5710
   ? __kasan_check_write+0x14/0x30
   ? _raw_spin_lock+0x82/0xf0
   vhost_scsi_ctl_handle_kick+0x52/0x90
   vhost_run_work_list+0x134/0x1b0
   vhost_task_fn+0x121/0x350
   ...
   </TASK>
  ---[ end trace 0000000000000000 ]---

Let's add a check in vhost_scsi_get_req.

Fixes: 3f8ca2e115e5 ("vhost/scsi: Extract common handling code from control queue handler")
Signed-off-by: Haoran Zhang <wh1sper@zju.edu.cn>
[whitespace fixes]
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <b26d7ddd-b098-4361-88f8-17ca7f90adf7@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
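The shape of the added check, sketched; the exact placement and return path in vhost_scsi_get_req() may differ:

  if (!vc->target) {
      /* VIRTIO_SCSI_T_AN_* requests legitimately carry no target, so
       * skip the tpg lookup instead of dereferencing a NULL pointer */
      goto done;   /* 'done' is a hypothetical label for this sketch */
  }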
2024-10-02 | move asm/unaligned.h to linux/unaligned.h | Al Viro
asm/unaligned.h is always an include of asm-generic/unaligned.h; might as well move that thing to linux/unaligned.h and include that - there's nothing arch-specific in that header.

auto-generated by the following:

  for i in `git grep -l -w asm/unaligned.h`; do
      sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
  done
  for i in `git grep -l -w asm-generic/unaligned.h`; do
      sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
  done
  git mv include/asm-generic/unaligned.h include/linux/unaligned.h
  git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
  sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
  sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h
2024-09-26 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio updates from Michael Tsirkin: "Several new features here: - virtio-balloon supports new stats - vdpa supports setting mac address - vdpa/mlx5 suspend/resume as well as MKEY ops are now faster - virtio_fs supports new sysfs entries for queue info - virtio/vsock performance has been improved And fixes, cleanups all over the place" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (34 commits) vsock/virtio: avoid queuing packets when intermediate queue is empty vsock/virtio: refactor virtio_transport_send_pkt_work fw_cfg: Constify struct kobj_type vdpa/mlx5: Postpone MR deletion vdpa/mlx5: Introduce init/destroy for MR resources vdpa/mlx5: Rename mr_mtx -> lock vdpa/mlx5: Extract mr members in own resource struct vdpa/mlx5: Rename function vdpa/mlx5: Delete direct MKEYs in parallel vdpa/mlx5: Create direct MKEYs in parallel MAINTAINERS: add virtio-vsock driver in the VIRTIO CORE section virtio_fs: add sysfs entries for queue information virtio_fs: introduce virtio_fs_put_locked helper vdpa: Remove unused declarations vdpa/mlx5: Parallelize VQ suspend/resume for CVQ MQ command vdpa/mlx5: Small improvement for change_num_qps() vdpa/mlx5: Keep notifiers during suspend but ignore vdpa/mlx5: Parallelize device resume vdpa/mlx5: Parallelize device suspend vdpa/mlx5: Use async API for vq modify commands ...
2024-09-10 | vhost_vdpa: assign irq bypass producer token correctly | Jason Wang
We used to call irq_bypass_unregister_producer() in vhost_vdpa_setup_vq_irq() which is problematic as we don't know if the token pointer is still valid or not. Actually, we use the eventfd_ctx as the token so the life cycle of the token should be bound to the VHOST_SET_VRING_CALL instead of vhost_vdpa_setup_vq_irq() which could be called by set_status(). Fixing this by setting up irq bypass producer's token when handling VHOST_SET_VRING_CALL and un-registering the producer before calling vhost_vring_ioctl() to prevent a possible use after free as eventfd could have been released in vhost_vring_ioctl(). And such registering and unregistering will only be done if DRIVER_OK is set. Reported-by: Dragos Tatulea <dtatulea@nvidia.com> Tested-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com> Fixes: 2cf1ba9a4d15 ("vhost_vdpa: implement IRQ offloading in vhost_vdpa") Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20240816031900.18013-1-jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-08-08 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net | Jakub Kicinski
Cross-merge networking fixes after downstream PR. No conflicts or adjacent changes. Link: https://patch.msgid.link/20240808170148.3629934-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-08-06 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio fix from Michael Tsirkin: "Fix a single, long-standing issue with kick pass-through vdpa" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: vhost-vdpa: switch to use vmf_insert_pfn() in the fault handler
2024-08-02 | vsock/virtio: add SIOCOUTQ support for all virtio based transports | Luigi Leonardi
Introduce support for virtio_transport_unsent_bytes ioctl for virtio_transport, vhost_vsock and vsock_loopback. For all transports the unsent bytes counter is incremented in virtio_transport_get_credit. In virtio_transport (G2H) and in vhost-vsock (H2G) the counter is decremented when the skbuff is consumed. In vsock_loopback the same skbuff is passed from the transmitter to the receiver, so the counter is decremented before queuing the skbuff to the receiver. Signed-off-by: Luigi Leonardi <luigi.leonardi@outlook.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-07-26 | vhost-vdpa: switch to use vmf_insert_pfn() in the fault handler | Jason Wang
remap_pfn_range() should not be called in the fault handler as it may change the vma->flags which may trigger a lockdep warning since the vma write lock is not held. Actually there's no need to modify the vma->flags as it has been set in the mmap(). So this patch switches to use vmf_insert_pfn() instead. Reported-by: Dragos Tatulea <dtatulea@nvidia.com> Tested-by: Dragos Tatulea <dtatulea@nvidia.com> Fixes: ddd89d0a059d ("vhost_vdpa: support doorbell mapping via mmap") Cc: stable@vger.kernel.org Signed-off-by: Jason Wang <jasowang@redhat.com> Message-Id: <20240701033159.18133-1-jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
2024-07-19 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio updates from Michael Tsirkin: "Several new features here: - Virtio find vqs API has been reworked (required to fix the scalability issue we have with adminq, which I hope to merge later in the cycle) - vDPA driver for Marvell OCTEON - virtio fs performance improvement - mlx5 migration speedups Fixes, cleanups all over the place" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (56 commits) virtio: rename virtio_find_vqs_info() to virtio_find_vqs() virtio: remove unused virtio_find_vqs() and virtio_find_vqs_ctx() helpers virtio: convert the rest virtio_find_vqs() users to virtio_find_vqs_info() virtio_balloon: convert to use virtio_find_vqs_info() virtiofs: convert to use virtio_find_vqs_info() scsi: virtio_scsi: convert to use virtio_find_vqs_info() virtio_net: convert to use virtio_find_vqs_info() virtio_crypto: convert to use virtio_find_vqs_info() virtio_console: convert to use virtio_find_vqs_info() virtio_blk: convert to use virtio_find_vqs_info() virtio: rename find_vqs_info() op to find_vqs() virtio: remove the original find_vqs() op virtio: call virtio_find_vqs_info() from virtio_find_single_vq() directly virtio: convert find_vqs() op implementations to find_vqs_info() virtio_pci: convert vp_*find_vqs() ops to find_vqs_info() virtio: introduce virtio_queue_info struct and find_vqs_info() config op virtio: make virtio_find_single_vq() call virtio_find_vqs() virtio: make virtio_find_vqs() call virtio_find_vqs_ctx() caif_virtio: use virtio_find_single_vq() for single virtqueue finding vdpa/mlx5: Don't enable non-active VQs in .set_vq_ready() ...
2024-07-09 | vringh: add MODULE_DESCRIPTION() | Jeff Johnson
Fix the allmodconfig 'make W=1' issue: WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/vhost/vringh.o Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com> Message-Id: <20240516-md-vringh-v1-1-31bf37779a5a@quicinc.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
2024-07-09 | vhost: move smp_rmb() into vhost_get_avail_idx() | Michael S. Tsirkin
All callers of vhost_get_avail_idx() use smp_rmb() to order the available ring entry read and avail_idx read. Make vhost_get_avail_idx() call smp_rmb() itself whenever the avail_idx is accessed. This way, the callers don't need to worry about the memory barrier. As a side benefit, we also validate the index on all paths now, which will hopefully help prevent/catch earlier future bugs. Note that current code is inconsistent in how the errors are handled. They are treated as an empty ring in some places, but as non-empty ring in other places. This patch doesn't attempt to change the existing behaviour. No functional change intended. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Acked-by: Will Deacon <will@kernel.org> Message-Id: <20240429232748.642356-1-gshan@redhat.com>
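A simplified sketch of the idea; the real vhost_get_avail_idx() in drivers/vhost/vhost.c has different error handling and also validates the index, so treat this as illustrative only:

  static int get_avail_idx_sketch(struct vhost_virtqueue *vq)
  {
      __virtio16 idx;

      if (vhost_get_avail(vq, idx, &vq->avail->idx))
          return -EFAULT;

      vq->avail_idx = vhost16_to_cpu(vq, idx);

      /* pair the avail_idx read with the barrier here, so callers no
       * longer need their own smp_rmb() before reading ring entries */
      smp_rmb();
      return 0;
  }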
2024-07-04 | vhost/vsock: always initialize seqpacket_allow | Michael S. Tsirkin
There are two issues around seqpacket_allow:

1. seqpacket_allow is not initialized when socket is created. Thus if features are never set, it will be read uninitialized.

2. if VIRTIO_VSOCK_F_SEQPACKET is set and then cleared, then seqpacket_allow will not be cleared appropriately (existing apps I know about don't usually do this but it's legal and there's no way to be sure no one relies on this).

To fix:

- initialize seqpacket_allow after allocation

- set it unconditionally in set_features

Reported-by: syzbot+6c21aeb59d0e82eb2782@syzkaller.appspotmail.com
Reported-by: Jeongjun Park <aha310510@gmail.com>
Fixes: ced7b713711f ("vhost/vsock: support SEQPACKET for transport")
Tested-by: Arseny Krasnov <arseny.krasnov@kaspersky.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240422100010-mutt-send-email-mst@kernel.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
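A hedged sketch of the two-part fix, using the field and feature names described above (surrounding functions omitted):

  /* 1. creation path: start from a defined value */
  vsock->seqpacket_allow = false;

  /* 2. set_features path: derive the flag unconditionally from the
   *    negotiated features, so clearing VIRTIO_VSOCK_F_SEQPACKET also
   *    clears seqpacket_allow */
  vsock->seqpacket_allow = features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET);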
2024-07-04 | vhost-vdpa: Use iommu_paging_domain_alloc() | Lu Baolu
Replace iommu_domain_alloc() with iommu_paging_domain_alloc(). Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240610085555.88197-5-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
2024-05-23 | Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | Linus Torvalds
Pull virtio updates from Michael Tsirkin: "Several new features here: - virtio-net is finally supported in vduse - virtio (balloon and mem) interaction with suspend is improved - vhost-scsi now handles signals better/faster And fixes, cleanups all over the place" * tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (48 commits) virtio-pci: Check if is_avq is NULL virtio: delete vq in vp_find_vqs_msix() when request_irq() fails MAINTAINERS: add Eugenio Pérez as reviewer vhost-vdpa: Remove usage of the deprecated ida_simple_xx() API vp_vdpa: don't allocate unused msix vectors sound: virtio: drop owner assignment fuse: virtio: drop owner assignment scsi: virtio: drop owner assignment rpmsg: virtio: drop owner assignment nvdimm: virtio_pmem: drop owner assignment wifi: mac80211_hwsim: drop owner assignment vsock/virtio: drop owner assignment net: 9p: virtio: drop owner assignment net: virtio: drop owner assignment net: caif: virtio: drop owner assignment misc: nsm: drop owner assignment iommu: virtio: drop owner assignment drm/virtio: drop owner assignment gpio: virtio: drop owner assignment firmware: arm_scmi: virtio: drop owner assignment ...
2024-05-22 | vhost-vdpa: Remove usage of the deprecated ida_simple_xx() API | Christophe JAILLET
ida_alloc() and ida_free() should be preferred to the deprecated ida_simple_get() and ida_simple_remove(). Note that the upper limit of ida_simple_get() is exclusive, but the one of ida_alloc_max() is inclusive. So a -1 has been added when needed. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Simon Horman <horms@kernel.org> Message-Id: <67c2edf49788c27d5f7a49fc701520b9fcf739b5.1713088999.git.christophe.jaillet@wanadoo.fr> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com>
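A sketch of the conversion and the inclusive/exclusive adjustment; the names below are placeholders, not the exact vhost-vdpa hunk:

  /* before: ida_simple_get()'s 'end' argument is exclusive */
  dev_id = ida_simple_get(&vhost_vdpa_ida, 0, limit, GFP_KERNEL);

  /* after: ida_alloc_max()'s 'max' is inclusive, hence the -1 */
  dev_id = ida_alloc_max(&vhost_vdpa_ida, limit - 1, GFP_KERNEL);

  /* teardown: ida_free() replaces ida_simple_remove() */
  ida_free(&vhost_vdpa_ida, dev_id);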
2024-05-22 | vhost_task: Handle SIGKILL by flushing work and exiting | Mike Christie
Instead of lingering until the device is closed, this has us handle SIGKILL by:

1. marking the worker as killed so we no longer try to use it with new virtqueues and new flush operations.

2. setting the virtqueue to worker mapping so no new works are queued.

3. running all the exiting works.

Suggested-by: Edward Adam Davis <eadavis@qq.com>
Reported-and-tested-by: syzbot+98edc2df894917b3431f@syzkaller.appspotmail.com
Message-Id: <tencent_546DA49414E876EEBECF2C78D26D242EE50A@qq.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20240316004707.45557-9-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost: Release worker mutex during flushes | Mike Christie
In the next patches where the worker can be killed while in use, we need to be able to take the worker mutex and kill queued works for new IO and flushes, and set some new flags to prevent new __vhost_vq_attach_worker calls from swapping in/out killed workers. If we are holding the worker mutex during a flush and the flush's work is still in the queue, the worker code that will handle the SIGKILL cleanup won't be able to take the mutex and perform its cleanup. So this patch has us drop the worker mutex while waiting for the flush to complete. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-8-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost: Use virtqueue mutex for swapping worker | Mike Christie
__vhost_vq_attach_worker uses the vhost_dev mutex to serialize the swapping of a virtqueue's worker. This was done for simplicity because we are already holding that mutex. In the next patches where the worker can be killed while in use, we need finer grained locking because some drivers will hold the vhost_dev mutex while flushing. However in the SIGKILL handler in the next patches, we will need to be able to swap workers (set current one to NULL), kill queued works and stop new flushes while flushes are in progress. To prepare us, this has us use the virtqueue mutex for swapping workers instead of the vhost_dev one. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-7-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost_scsi: Handle vhost_vq_work_queue failures for TMFs | Mike Christie
vhost_vq_work_queue will never fail when queueing the TMF's response handling because a guest can only send us TMFs when the device is fully setup so there is always a worker at that time. In the next patches we will modify the worker code so it handles SIGKILL by exiting before outstanding commands/TMFs have sent their responses. In that case vhost_vq_work_queue can fail when we try to send a response. This has us just free the TMF's resources since at this time the guest won't be able to get a response even if we could send it. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-6-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost: Remove vhost_vq_flush | Mike Christie
vhost_vq_flush is no longer used so remove it. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-5-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost-scsi: Use system wq to flush dev for TMFs | Mike Christie
We flush all the workers that are not also used by the ctl vq to make sure that responses queued by LIO before the TMF response are sent before the TMF response. This requires a special vhost_vq_flush function which, in the next patches where we handle SIGKILL killing workers while in use, will require extra locking/complexity. To avoid that, this patch has us flush the entire device from the system work queue, then queue up sending the response from there. This is a little less optimal since we now flush all workers but this will be ok since commands have already timed out and perf is not a concern. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-4-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost-scsi: Handle vhost_vq_work_queue failures for cmds | Mike Christie
In the next patches we will support the vhost_task being killed while in use. The problem for vhost-scsi is that we can't free some structs until we get responses for commands we have submitted to the target layer, and we currently process the responses from the vhost_task. This has us just drop the responses and free the command's resources. When all commands have completed then operations like flush will be woken up and we can complete device release and endpoint cleanup. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-3-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-05-22 | vhost-scsi: Handle vhost_vq_work_queue failures for events | Mike Christie
Currently, we can try to queue an event's work before the vhost_task is created. When this happens we just drop it in vhost_scsi_do_plug before even calling vhost_vq_work_queue. During a device shutdown we do the same thing after vhost_scsi_clear_endpoint has cleared the backends. In the next patches we will be able to kill the vhost_task before we have cleared the endpoint. In that case, vhost_vq_work_queue can fail and we will leak the event's memory. This handles the failure by just freeing the event. This is safe to do, because vhost_vq_work_queue will only return failure for us when the vhost_task is killed and so userspace will not be able to handle events if we sent them. Signed-off-by: Mike Christie <michael.christie@oracle.com> Message-Id: <20240316004707.45557-2-michael.christie@oracle.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-04-22 | net: extend ubuf_info callback to ops structure | Pavel Begunkov
We'll need to associate additional callbacks with ubuf_info, so introduce a structure holding ubuf_info callbacks. Apart from smarter io_uring notification management introduced in the next patches, it can be used to generalise msg_zerocopy_put_abort() and also store ->sg_from_iter, which is currently passed in struct msghdr. Reviewed-by: Jens Axboe <axboe@kernel.dk> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/all/a62015541de49c0e2a8a0377a1d5d0a5aeb07016.1713369317.git.asml.silence@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>