author     Alex Mastro <amastro@fb.com>        2025-10-28 09:15:02 -0700
committer  Alex Williamson <alex@shazbot.org>  2025-10-28 15:54:41 -0600
commit     ef270ec44637d464126bd4ade483c4a1887e06bc (patch)
tree       4d953a1c92a83ffe9cdb14a5ff1b27d0c7459e1d /tools/docs/parse-headers.py
parent     1196f1f897d4ee64d8844e8cfa97c8f93e4d158c (diff)
vfio/type1: handle DMA map/unmap up to the addressable limit
Before this commit, it was possible to create end-of-address-space mappings, but unmapping them via VFIO_IOMMU_UNMAP_DMA, replaying them for newly added iommu domains, and querying their dirty pages via VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP were broken by comparisons against (iova + size) expressions, which overflow to zero.

Additionally, there appears to be a page pinning leak in the vfio_iommu_type1_release() path, since the loop body in vfio_unmap_unpin() where the unmap_unpin_*() helpers are called is never entered when (iova + size) overflows to zero.

This commit handles DMA map/unmap operations up to the addressable limit by comparing against inclusive end-of-range limits, and by changing iteration to perform relative traversals across range sizes rather than absolute traversals across addresses.

vfio_link_dma() inserts a zero-sized vfio_dma into the rb-tree, and is only used for that purpose, so discard the size from consideration for the insertion point.

Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Fixes: 73fa0d10d077 ("vfio: Type1 IOMMU implementation")
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Alex Mastro <amastro@fb.com>
Link: https://lore.kernel.org/r/20251028-fix-unmap-v6-3-2542b96bcc8e@fb.com
Signed-off-by: Alex Williamson <alex@shazbot.org>
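A minimal sketch of the failure mode and the fix, as illustrative userspace C assuming a 64-bit dma_addr_t; the helper names and loop shapes below are stand-ins, not code from vfio_iommu_type1.c:

    /*
     * Sketch only: shows why exclusive end comparisons of the form
     * (start + size) wrap to zero for ranges that end at the top of
     * the address space, and why inclusive limits and relative
     * traversals do not.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t dma_addr_t;

    /* Exclusive-end comparison: (start + size) wraps to 0 at the limit. */
    static bool in_range_exclusive(dma_addr_t start, dma_addr_t size,
                                   dma_addr_t addr)
    {
            return addr >= start && addr < start + size;
    }

    /* Inclusive-end comparison: the last byte never wraps for size >= 1. */
    static bool in_range_inclusive(dma_addr_t start, dma_addr_t size,
                                   dma_addr_t addr)
    {
            return addr >= start && addr <= start + size - 1;
    }

    int main(void)
    {
            /* A mapping of the last 4 KiB page of the address space. */
            dma_addr_t iova = UINT64_MAX - 0xfff, size = 0x1000;
            unsigned long pages;

            printf("exclusive: %d\n", in_range_exclusive(iova, size, iova));
            printf("inclusive: %d\n", in_range_inclusive(iova, size, iova));

            /* Absolute traversal: iova + size == 0, so the body never runs. */
            pages = 0;
            for (dma_addr_t addr = iova; addr < iova + size; addr += 0x1000)
                    pages++;
            printf("absolute traversal visited %lu page(s)\n", pages);

            /* Relative traversal: the condition is on size, which cannot wrap. */
            pages = 0;
            for (dma_addr_t off = 0; off < size; off += 0x1000)
                    pages++;
            printf("relative traversal visited %lu page(s)\n", pages);
            return 0;
    }

Running this prints "exclusive: 0", "inclusive: 1", and shows the absolute traversal visiting zero pages while the relative traversal visits one, mirroring the unmap/replay/dirty-tracking failures and the pinning leak described above.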
Diffstat (limited to 'tools/docs/parse-headers.py')
0 files changed, 0 insertions, 0 deletions