path: root/drivers/iommu
2023-10-10  iommufd/selftest: Add domain_alloc_user() support in iommu mock  (Yi Liu)
Add mock_domain_alloc_user() and a new test case for IOMMU_HWPT_ALLOC_NEST_PARENT.

Link: https://lore.kernel.org/r/20230928071528.26258-6-yi.l.liu@intel.com
Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-10-10  iommufd: Support allocating nested parent domain  (Yi Liu)
Extend IOMMU_HWPT_ALLOC to allocate domains to be used as the parent (stage-2) in nested translation. Add IOMMU_HWPT_ALLOC_NEST_PARENT to the uAPI.

Link: https://lore.kernel.org/r/20230928071528.26258-5-yi.l.liu@intel.com
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
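A minimal sketch of how userspace might request a nesting-parent hw_pagetable with the new flag. The field layout follows the iommufd uAPI of this era (include/uapi/linux/iommufd.h); treat the exact struct members as an assumption rather than a guaranteed interface:

```c
#include <linux/iommufd.h>
#include <sys/ioctl.h>

/* Sketch: allocate a stage-2 (nesting parent) hw_pagetable.
 * Assumes iommufd_fd, dev_id and ioas_id were set up beforehand. */
static int alloc_nest_parent(int iommufd_fd, __u32 dev_id, __u32 ioas_id,
			     __u32 *out_hwpt_id)
{
	struct iommu_hwpt_alloc cmd = {
		.size = sizeof(cmd),
		.flags = IOMMU_HWPT_ALLOC_NEST_PARENT,	/* new uAPI flag */
		.dev_id = dev_id,
		.pt_id = ioas_id,
	};
	int rc = ioctl(iommufd_fd, IOMMU_HWPT_ALLOC, &cmd);

	if (rc)
		return rc;
	*out_hwpt_id = cmd.out_hwpt_id;
	return 0;
}
```

A nested (stage-1) domain would later be allocated with this HWPT as its parent pt_id.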
2023-10-10  iommufd: Flow user flags for domain allocation to domain_alloc_user()  (Yi Liu)
Extend iommufd_hw_pagetable_alloc() to accept user flags; the uAPI will provide the flags.

Link: https://lore.kernel.org/r/20230928071528.26258-4-yi.l.liu@intel.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-10-10  iommufd: Use the domain_alloc_user() op for domain allocation  (Yi Liu)
Make IOMMUFD use iommu_domain_alloc_user() by default for iommu_domain creation. IOMMUFD needs to support iommu_domain allocation with parameters from userspace for nested support, and a driver is expected to implement everything under this op. If the iommu driver doesn't provide a domain_alloc_user callback, IOMMUFD falls back to iommu_domain_alloc() with the UNMANAGED type if possible.

Link: https://lore.kernel.org/r/20230928071528.26258-3-yi.l.liu@intel.com
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
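A sketch of the allocation policy described above, paraphrased from the commit text rather than copied from the actual iommufd code (the function name is illustrative):

```c
/* Sketch: prefer the driver's user-parameter op, else fall back
 * to a plain UNMANAGED domain that cannot honor any flags. */
static struct iommu_domain *
hwpt_alloc_domain(struct device *dev, u32 flags)
{
	const struct iommu_ops *ops = dev_iommu_ops(dev);

	if (ops->domain_alloc_user)
		/* Driver handles everything, including user flags */
		return ops->domain_alloc_user(dev, flags);

	/* No user-parameter support in this driver */
	if (flags)
		return ERR_PTR(-EOPNOTSUPP);
	return iommu_domain_alloc(dev->bus);
}
```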
2023-10-06  Revert "iommu: Fix false ownership failure on AMD systems with PASID activated"  (Vasant Hegde)
This reverts commit 2380f1e8195ef612deea1dc7a3d611c5d2b9b56a. The previous patch removed the AMD iommu_v2 module, so it is now safe to revert this workaround.

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20231006095706.5694-6-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-06  iommu/amd: Remove unused EXPORT_SYMBOLS  (Vasant Hegde)
Drop EXPORT_SYMBOL for the functions that are not used by any module.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20231006095706.5694-5-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-06  iommu/amd: Remove amd_iommu_device_info()  (Vasant Hegde)
No one is using this function, so remove it. Also move the PCI device feature detection flags to amd_iommu_types.h, as they are only used inside the AMD IOMMU driver.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20231006095706.5694-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-06  iommu/amd: Remove PPR support  (Vasant Hegde)
Remove the PPR handler and notifier related functions, as they are no longer used. Note that PPR interrupt handler support is retained, as it will be re-used when IOPF support is introduced.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20231006095706.5694-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-06  iommu/amd: Remove iommu_v2 module  (Vasant Hegde)
The AMD GPU driver, which was the only in-kernel user of the iommu_v2 module, has removed its dependency on it. We are also working on adding SVA support to the AMD IOMMU driver; device drivers are expected to use the common SVA framework to enable device PASID/PRI features. Removing the iommu_v2 module first and then adding SVA simplifies the development. Hence remove the iommu_v2 module.

Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20231006095706.5694-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-05  iommu: Fix return code in iommu_group_alloc_default_domain()  (Jason Gunthorpe)
This function returns NULL on errors, not ERR_PTR.

Fixes: 1c68cbc64fe6 ("iommu: Add IOMMU_DOMAIN_PLATFORM")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Link: https://lore.kernel.org/r/8fb75157-6c81-4a9c-9992-d73d49902fa8@moroto.mountain
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/0-v2-ee2bae9af0f2+96-iommu_ga_err_ptr_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-05  iommu: Do not use IOMMU_DOMAIN_DMA if CONFIG_IOMMU_DMA is not enabled  (Jason Gunthorpe)
msm_iommu platforms select neither CONFIG_IOMMU_DMA nor CONFIG_ARM_DMA_USE_IOMMU, so they create an IOMMU_DOMAIN_DMA domain by default and never populate it. This acts like a BLOCKED domain and breaks the GPU driver on the platform. Detect this and force use of IDENTITY instead.

Fixes: 98ac73f99bc4 ("iommu: Require a default_domain for all iommu drivers")
Reported-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/linux-iommu/CAA8EJprz7VVmBG68U9zLuqPd0UdSRHYoLDJSP6tCj6H6qanuTQ@mail.gmail.com/
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/0-v1-20700abdf239+19c-iommu_no_dma_iommu_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-02  iommu/dma: Use a large flush queue and timeout for shadow_on_flush  (Niklas Schnelle)
Flush queues currently use a fixed compile-time size of 256 entries. This being a power of 2 allows the compiler to use shift and mask instead of more expensive modulo operations. With per-CPU flush queues, larger queue sizes would hit per-CPU allocation limits; with a single flush queue, however, these limits do not apply. Also, since single queues are particularly suitable for virtualized environments with expensive IOTLB flushes, they especially benefit from larger queues and thus fewer flushes. To this end, re-order struct iova_fq so we can use a dynamic array, and introduce the flush queue size and timeouts as new options in the iommu_dma_options struct. So as not to lose the shift and mask optimization, use a power of 2 for the length and use explicit shift and mask instead of letting the compiler optimize this. A large queue size and 1 second timeout are then set for the shadow-on-flush case used by s390 paged memory guests. This brings performance on par with the previous s390-specific DMA API implementation.

Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-6-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
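A rough illustration of the shift-and-mask point: once the queue length is a runtime value rather than a compile-time constant, the compiler can no longer turn `%` into a mask, so the mask is kept explicitly. Names and sizes below are illustrative, not the actual dma-iommu code:

```c
/* Sketch of power-of-2 ring indexing with an explicit mask. */
#define IOVA_FQ_SIZE_SHIFT	15			/* e.g. 32768 entries */
#define IOVA_FQ_SIZE		(1UL << IOVA_FQ_SIZE_SHIFT)
#define IOVA_FQ_MASK		(IOVA_FQ_SIZE - 1)

static inline unsigned int fq_ring_next(unsigned int idx)
{
	/* Cheap wrap-around: a mask instead of '% fq_size', which
	 * would be a costly division once the size is no longer a
	 * compile-time constant the compiler can optimize away. */
	return (idx + 1) & IOVA_FQ_MASK;
}
```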
2023-10-02  iommu/dma: Allow a single FQ in addition to per-CPU FQs  (Niklas Schnelle)
In some virtualized environments, including s390 paged memory guests, IOTLB flushes are used to update IOMMU shadow tables. Due to this, they are much more expensive than in typical bare-metal environments or non-paged s390 guests. In addition, they may parallelize poorly in virtualized environments. This changes the trade-off for flushing IOVAs such that minimizing the number of IOTLB flushes trumps any benefit of cheaper queuing operations or increased parallelism. In this scenario per-CPU flush queues pose several problems. Firstly, per-CPU memory is often quite limited, prohibiting larger queues. Secondly, collecting IOVAs per-CPU but flushing via a global timeout reduces the number of IOVAs flushed per timeout, especially on s390 where PCI interrupts may not be bound to a specific CPU. Let's introduce a single flush queue mode that reuses the same queue logic but only allocates a single global queue. This mode is selected by dma-iommu if a newly introduced .shadow_on_flush flag is set in struct dev_iommu. As a first user, the s390 IOMMU driver sets this flag during probe_device. With the unchanged small FQ size and timeouts this setting is worse than per-CPU queues, but a follow-up patch will make the FQ size and timeout variable. Together this allows the common IOVA flushing code to more closely resemble the global flush behavior used by s390's previous internal DMA API implementation.

Link: https://lore.kernel.org/all/9a466109-01c5-96b0-bf03-304123f435ee@arm.com/
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-5-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-02  iommu/s390: Disable deferred flush for ISM devices  (Niklas Schnelle)
ISM devices are virtual PCI devices used for cross-LPAR communication. Unlike real PCI devices, ISM devices do not use the hardware IOMMU but inspect IOMMU translation tables directly on IOTLB flush (s390 RPCIT instruction). ISM devices keep their DMA allocations static and only very rarely DMA unmap at all. For each IOTLB flush that occurs after an unmap, the ISM devices will however inspect the area of the IOVA space indicated by the flush. This means that for the global IOTLB flushes used by the flush queue mechanism, the entire IOVA space would be inspected. In principle this would be fine, albeit potentially unnecessarily slow; it turns out, however, that ISM devices are sensitive to seeing IOVA addresses that are currently in use in the IOVA range being flushed. Seeing such in-use IOVA addresses will cause the ISM device to enter an error state and become unusable. Fix this by claiming IOMMU_CAP_DEFERRED_FLUSH only for non-ISM devices. This makes sure IOTLB flushes only cover IOVAs that have been unmapped, and also restricts the range of the IOTLB flush, potentially reducing latency spikes.

Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-4-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
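A sketch of what per-device capability gating in the driver's .capable op might look like; the `is_ism_device()` helper is a hypothetical placeholder, and the exact s390 logic differs:

```c
/* Sketch: report IOMMU_CAP_DEFERRED_FLUSH only for non-ISM devices,
 * so the core never puts ISM devices behind a flush queue. */
static bool s390_iommu_capable_sketch(struct device *dev, enum iommu_cap cap)
{
	switch (cap) {
	case IOMMU_CAP_DEFERRED_FLUSH:
		/* ISM devices must never see batched (deferred) flushes */
		return !is_ism_device(dev);	/* hypothetical helper */
	default:
		return false;
	}
}
```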
2023-10-02  s390/pci: Use dma-iommu layer  (Niklas Schnelle)
While s390 already has a standard IOMMU driver, and previous changes have added I/O TLB flushing operations, this driver is currently only used for user-space PCI access such as vfio-pci. For the DMA API, s390 instead uses its own implementation in arch/s390/pci/pci_dma.c, which drives the same hardware and shares some code, but requires a complex and fragile hand-over between DMA API and IOMMU API use of a device and, despite the code sharing, still leads to significant duplication and maintenance effort. Let's use the common-code DMA API implementation from drivers/iommu/dma-iommu.c instead, allowing us to get rid of arch/s390/pci/pci_dma.c.

Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-3-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-10-02  iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return  (Niklas Schnelle)
On s390, when using a paging hypervisor, .iotlb_sync_map is used to sync mappings by letting the hypervisor inspect the synced IOVA range and update a shadow table. This however means that .iotlb_sync_map can fail, as the hypervisor may run out of resources while doing the sync. This can be due to the hypervisor being unable to pin guest pages, a limit on mapped addresses such as vfio_iommu_type1.dma_entry_limit, or a lack of other resources. Either way, such a failure to sync a mapping should result in a DMA_MAPPING_ERROR. Especially when running with batched IOTLB flushes for unmap, it may be that some IOVAs have already been invalidated but not yet synced via .iotlb_sync_map. Thus, if the hypervisor indicates running out of resources, first do a global flush, allowing the hypervisor to free resources associated with these mappings, then retry creating the new mappings, and only if that also fails report the error to callers.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com> # sun50i
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Link: https://lore.kernel.org/r/20230928-dma_iommu-v13-1-9e5fc4dacc36@linux.ibm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
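A sketch of the retry policy described above, paraphrased from the commit text rather than copied from the actual iommu_map() path (the wrapper name is illustrative):

```c
/* Sketch: retry .iotlb_sync_map once after a global flush, so the
 * hypervisor can reclaim resources tied to already-invalidated IOVAs. */
static int sync_map_with_retry(struct iommu_domain *domain,
			       unsigned long iova, size_t size)
{
	int ret = domain->ops->iotlb_sync_map(domain, iova, size); /* may now fail */

	if (ret == -ENOMEM) {
		/* Free hypervisor resources for unmapped-but-unsynced
		 * IOVAs, then try the sync once more. */
		iommu_flush_iotlb_all(domain);
		ret = domain->ops->iotlb_sync_map(domain, iova, size);
	}
	return ret;	/* still -ENOMEM -> DMA_MAPPING_ERROR upstream */
}
```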
2023-09-25  iommu/vt-d: Avoid memory allocation in iommu_suspend()  (Zhang Rui)
The iommu_suspend() syscore suspend callback is invoked with IRQs disabled. Allocating memory with the GFP_KERNEL flag may re-enable IRQs during the suspend callback, which can cause intermittent suspend/hibernation problems with the following kernel traces:

Calling iommu_suspend+0x0/0x1d0
------------[ cut here ]------------
WARNING: CPU: 0 PID: 15 at kernel/time/timekeeping.c:868 ktime_get+0x9b/0xb0
...
CPU: 0 PID: 15 Comm: rcu_preempt Tainted: G     U            E      6.3-intel #r1
RIP: 0010:ktime_get+0x9b/0xb0
...
Call Trace:
 <IRQ>
 tick_sched_timer+0x22/0x90
 ? __pfx_tick_sched_timer+0x10/0x10
 __hrtimer_run_queues+0x111/0x2b0
 hrtimer_interrupt+0xfa/0x230
 __sysvec_apic_timer_interrupt+0x63/0x140
 sysvec_apic_timer_interrupt+0x7b/0xa0
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1f/0x30
...
------------[ cut here ]------------
Interrupts enabled after iommu_suspend+0x0/0x1d0
WARNING: CPU: 0 PID: 27420 at drivers/base/syscore.c:68 syscore_suspend+0x147/0x270
CPU: 0 PID: 27420 Comm: rtcwake Tainted: G     U   W        E      6.3-intel #r1
RIP: 0010:syscore_suspend+0x147/0x270
...
Call Trace:
 <TASK>
 hibernation_snapshot+0x25b/0x670
 hibernate+0xcd/0x390
 state_store+0xcf/0xe0
 kobj_attr_store+0x13/0x30
 sysfs_kf_write+0x3f/0x50
 kernfs_fop_write_iter+0x128/0x200
 vfs_write+0x1fd/0x3c0
 ksys_write+0x6f/0xf0
 __x64_sys_write+0x1d/0x30
 do_syscall_64+0x3b/0x90
 entry_SYSCALL_64_after_hwframe+0x72/0xdc

Given that only 4 words of memory are needed, avoid the memory allocation in iommu_suspend().

CC: stable@kernel.org
Fixes: 33e07157105e ("iommu/vt-d: Avoid GFP_ATOMIC where it is not needed")
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Tested-by: Ooi, Chin Hao <chin.hao.ooi@intel.com>
Link: https://lore.kernel.org/r/20230921093956.234692-1-rui.zhang@intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20230925120417.55977-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
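An illustrative sketch of the shape of the fix: a fixed-size buffer replaces the kcalloc() call, since GFP_KERNEL allocation can briefly re-enable IRQs inside a syscore callback. The register macros follow the VT-d driver's fault-event registers; the surrounding function and storage layout here are assumptions:

```c
/* Sketch: save the four fault-event registers into a fixed array
 * instead of allocating one with GFP_KERNEL while IRQs are off. */
static u32 iommu_state_sketch[4];	/* only 4 words need saving */

static void save_fault_regs_sketch(void __iomem *reg)
{
	iommu_state_sketch[0] = readl(reg + DMAR_FECTL_REG);
	iommu_state_sketch[1] = readl(reg + DMAR_FEDATA_REG);
	iommu_state_sketch[2] = readl(reg + DMAR_FEADDR_REG);
	iommu_state_sketch[3] = readl(reg + DMAR_FEUADDR_REG);
}
```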
2023-09-25  iommu/tegra-smmu: Drop unnecessary error check for debugfs_create_dir()  (Jinjie Ruan)
The debugfs_create_dir() function returns error pointers; it never returns NULL. As Baolu suggested, this patch removes the error checking for debugfs_create_dir() in tegra-smmu.c. This is because the debugfs kernel API is developed in a way that the caller can safely ignore errors that occur during the creation of debugfs nodes: the debugfs APIs have an IS_ERR() check in start_creating() which handles this gracefully. So these checks are unnecessary.

Fixes: d1313e7896e9 ("iommu/tegra-smmu: Add debugfs support")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Suggested-by: Baolu Lu <baolu.lu@linux.intel.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20230901073056.1364755-1-ruanjinjie@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
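A small sketch of the pattern this enables (names are illustrative, not the tegra-smmu code): debugfs creation results can simply be passed along, because a failed dentry handed back as a parent degrades gracefully inside the debugfs API itself.

```c
#include <linux/debugfs.h>

static u32 some_counter;	/* illustrative value to expose */

/* Sketch: no IS_ERR()/NULL check on the directory is needed. */
static void smmu_debugfs_init_sketch(struct dentry **root)
{
	*root = debugfs_create_dir("smmu", NULL);
	/* Even if *root is an error pointer, the next call checks it
	 * internally and becomes a harmless no-op. */
	debugfs_create_u32("counter", 0444, *root, &some_counter);
}
```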
2023-09-25  iommu: Remove duplicate include  (Jiapeng Chong)
./drivers/iommu/iommu.c: iommu-priv.h is included more than once.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=6186
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230818092620.91748-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Initialize iommu_device->max_pasids  (Vasant Hegde)
Commit 1adf3cc20d69 ("iommu: Add max_pasids field in struct iommu_device") introduced struct iommu_device.max_pasids to track the max PASIDs supported by each IOMMU. Initialize this field for AMD IOMMU; the IOMMU core will use this value to set the max PASIDs per device (see __iommu_probe_device()). Also remove the unused global 'amd_iommu_max_pasid' variable.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-15-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Enable device ATS/PASID/PRI capabilities independently  (Vasant Hegde)
Introduce helper functions to enable/disable device ATS/PASID/PRI capabilities independently, along with new pasid_enabled and pri_enabled variables in struct iommu_dev_data to track their state, which allows attach_device() and detach_device() to be simplified.

Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-14-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Introduce iommu_dev_data.flags to track device capabilities  (Vasant Hegde)
Currently we use struct iommu_dev_data.iommu_v2 to keep track of the device's ATS, PRI, and PASID capabilities. But these capabilities can be enabled independently (except that PRI requires ATS support). Hence, replace the iommu_v2 variable with a flags variable, which keeps track of the device capabilities. Since commit 9bf49e36d718 ("PCI/ATS: Handle sharing of PF PRI Capability with all VFs"), device PRI/PASID is shared between a PF and any associated VFs. Hence use pci_pri_supported() and pci_pasid_features() instead of pci_find_ext_capability() to check device PRI/PASID support.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-13-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
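A sketch of the capability-flags approach, assuming illustrative flag names (the PCI helpers pci_pri_supported() and pci_pasid_features() are real; the function and flag names around them are not the driver's):

```c
#include <linux/pci.h>
#include <linux/pci-ats.h>

#define DEV_FLAG_ATS_SUP	BIT(0)
#define DEV_FLAG_PRI_SUP	BIT(1)
#define DEV_FLAG_PASID_SUP	BIT(2)

/* Sketch: detect each capability independently into one flags word. */
static u32 detect_dev_caps(struct pci_dev *pdev)
{
	u32 flags = 0;

	if (pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS))
		flags |= DEV_FLAG_ATS_SUP;
	/* PRI/PASID may be shared from the PF, so use the helpers that
	 * understand VF sharing instead of raw capability reads. */
	if (pci_pri_supported(pdev))
		flags |= DEV_FLAG_PRI_SUP;
	if (pci_pasid_features(pdev) >= 0)	/* negative: no PASID cap */
		flags |= DEV_FLAG_PASID_SUP;
	return flags;
}
```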
2023-09-25  iommu/amd: Introduce iommu_dev_data.ppr  (Suravee Suthikulpanit)
For AMD IOMMU, the PPR feature is needed to support IO page faults (IOPF). PPR is enabled per PCI endpoint device and is configured by the PPR bit in the IOMMU device table entry (i.e. DTE[PPR]). Introduce struct iommu_dev_data.ppr to track the PPR setting for each device. iommu_dev_data.ppr will be set only when the IOMMU supports PPR; hence remove the redundant feature support check in set_dte_entry().

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-12-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Rename ats related variables  (Vasant Hegde)
Remove the nested structure and flatten it into 'ats_{enable/qdep}'. Also convert 'dev_data.pri_tlp' to a bit field. No functional changes intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-11-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Modify logic for checking GT and PPR features  (Suravee Suthikulpanit)
In order to support the v2 page table, the IOMMU driver needs to check whether the hardware supports the Guest Translation (GT) and Peripheral Page Request (PPR) features. Currently, the IOMMU driver uses a global (amd_iommu_v2_present) and a per-iommu (struct amd_iommu.is_iommu_v2) variable to track the features. These variables are redundant since we could simply check the global EFR mask. Therefore, replace them with a helper function with an appropriate name.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-10-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
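A minimal sketch of the kind of helper this describes, assuming the sanitized global EFR value and FEATURE_* bit masks from amd_iommu_types.h (the exact helper bodies in the driver may differ):

```c
/* Sketch: test the global, sanitized EFR mask directly instead of
 * carrying separate "v2 present" booleans around. */
extern u64 amd_iommu_efr;	/* common across all IOMMU instances */

static inline bool check_feature(u64 mask)
{
	return (amd_iommu_efr & mask) == mask;
}

/* e.g. the GT + PPR gate needed for the v2 page table: */
static inline bool amd_iommu_gt_ppr_supported(void)
{
	return check_feature(FEATURE_GT) && check_feature(FEATURE_PPR);
}
```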
2023-09-25  iommu/amd: Consolidate feature detection and reporting logic  (Suravee Suthikulpanit)
Currently, the IOMMU driver assumes capabilities on all IOMMU instances to be homogeneous. During early_amd_iommu_init(), the driver probes all IVHD blocks and does a sanity check to make sure that only features common among all IOMMU instances are supported. This is tracked in the global amd_iommu_efr and amd_iommu_efr2, which should be used whenever the driver needs to check hardware capabilities. Therefore, introduce check_feature() and check_feature2(), and modify the driver to adopt the new helper functions. In addition, clean up print_iommu_info() to avoid reporting redundant EFR/EFR2 for each IOMMU instance.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-9-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Miscellaneous clean up when freeing a domain  (Suravee Suthikulpanit)
* Use the protection_domain_free() helper function to free domains. The function has been modified to also free the memory used for the v1 and v2 page tables. Also clear the gcr3 table in the v2 page table free path.
* Refactor code into cleanup_domain() for reusability. Change BUG_ON to WARN_ON in the cleanup path.
* Read the protection domain dev_cnt only while the domain is locked.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-8-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Do not set amd_iommu_pgtable in pass-through mode  (Vasant Hegde)
Since the AMD IOMMU page table is not used in pass-through mode, switching to the v1 page table is not required. Therefore, remove the redundant amd_iommu_pgtable update and misleading warning message.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-7-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Introduce helper functions for managing GCR3 table  (Suravee Suthikulpanit)
Refactor domain_enable_v2() into helper functions for managing the GCR3 table (i.e. setup_gcr3_table() and get_gcr3_levels()), which will be used in subsequent patches. Also re-arrange code and remove the forward declaration.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-6-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Refactor protection domain allocation code  (Vasant Hegde)
Replace if-else with a switch-case statement to accommodate the increasing number of domain types. No functional changes intended.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-5-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
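A schematic of the conversion; the helper names are hypothetical placeholders, not the actual AMD driver functions:

```c
/* Sketch: dispatch on domain type with a switch instead of an
 * if-else chain that grows with every new type. */
static struct protection_domain *
protection_domain_alloc_sketch(unsigned int type)
{
	switch (type) {
	case IOMMU_DOMAIN_IDENTITY:
		return alloc_passthrough_domain_sketch();	/* hypothetical */
	case IOMMU_DOMAIN_DMA:
	case IOMMU_DOMAIN_UNMANAGED:
		return alloc_paging_domain_sketch();		/* hypothetical */
	default:
		return NULL;
	}
}
```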
2023-09-25  iommu/amd: Consolidate logic to allocate protection domain  (Suravee Suthikulpanit)
Move the logic into the common caller function to simplify the code. No functional changes intended.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Consolidate timeout pre-define to amd_iommu_types.h  (Suravee Suthikulpanit)
Consolidate the timeout pre-define into amd_iommu_types.h to allow inclusion in other files in subsequent patches.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/amd: Remove unused amd_io_pgtable.pt_root variable  (Suravee Suthikulpanit)
It has been unused since commit 6eedb59c18a3 ("iommu/amd: Remove amd_iommu_domain_get_pgtable").

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20230921092147.5930-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/iova: Manage the depot list size  (Robin Murphy)
Automatically scaling the depot up to suit the peak capacity of a workload is all well and good, but it would be nice to have a way to scale it back down again if the workload changes. To that end, add background reclaim that will gradually free surplus magazines if the depot size remains above a reasonable threshold for long enough.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/03170665c56d89c6ce6081246b47f68d4e483308.1694535580.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/iova: Make the rcache depot scale better  (Robin Murphy)
The algorithm in the original paper specifies the storage of full magazines in the depot as an unbounded list rather than a fixed-size array. It turns out to be pretty straightforward to do this in our implementation with no significant loss of efficiency. This allows the depot to scale up to the working set sizes of larger systems, while also potentially saving some memory on smaller ones too. Since this involves touching struct iova_magazine with the requisite care, we may as well reinforce the comment with a proper assertion too.

Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/f597aa72fc3e1d315bc4574af0ce0ebe5c31cd22.1694535580.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu: Improve map/unmap sanity checks  (Robin Murphy)
The current checks for the __IOMMU_DOMAIN_PAGING capability seem a bit stifled, since it is quite likely now that a non-paging domain won't have a pgsize_bitmap and/or mapping ops, and thus get caught by the earlier condition anyway. Swap them around to test the more fundamental condition first, then we can reasonably also upgrade the other to a WARN_ON, since if a driver does ever expose a paging domain without the means to actually page, it's clearly very broken.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/524db1ec0139c964d26928a6a264945aa66d010c.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
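A sketch of the reordered checks, paraphrased from the commit text rather than copied from iommu.c:

```c
/* Sketch: test the fundamental condition first, then treat a paging
 * domain that cannot actually page as a driver bug. */
static int iommu_map_sanity_sketch(struct iommu_domain *domain)
{
	/* Is this a paging domain at all? */
	if (!(domain->type & __IOMMU_DOMAIN_PAGING))
		return -EINVAL;

	/* A paging domain without a pgsize_bitmap is clearly broken */
	if (WARN_ON(!domain->pgsize_bitmap))
		return -ENODEV;

	return 0;
}
```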
2023-09-25  iommu: Retire map/unmap ops  (Robin Murphy)
With everyone now implementing the new interfaces, clean up the last remnants of the old map/unmap ops and simplify the calling logic again.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/d2afdf13b2fbf537713c3ec642dfd49d16dd9e6a.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/tegra-smmu: Update to {map,unmap}_pages  (Robin Murphy)
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/338c520ed947d6d5b9d0509ccb4588908bd9ce1e.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/sun50i: Update to {map,unmap}_pages  (Robin Murphy)
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/395995e5097803f9a65f2fb79e0732d41c2b8a84.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/rockchip: Update to {map,unmap}_pages  (Robin Murphy)
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/ccc21bf7d1d0da8989d4d517a13d0846d6b71a38.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/omap: Update to {map,unmap}_pages  (Robin Murphy)
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/7bad94ffccd4cba32bded72e0860974012881e24.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/exynos: Update to {map,unmap}_pages  (Robin Murphy)
Trivially update map/unmap to the new interface, which is quite happy for drivers to still process just one page per call.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/579176033e92d49ec9fc9f3d33d7b9d4c474f0b4.1694525662.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/omap: Convert to generic_single_device_group()  (Jason Gunthorpe)
Use the new helper. For some reason omap will probe its driver even if it doesn't load an iommu driver. Keep this working by keeping a bool to track if the iommu driver was started.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/7-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/ipmmu-vmsa: Convert to generic_single_device_group()  (Jason Gunthorpe)
Use the new helper. This driver is kind of weird since in ARM mode it pretends it has per-device groups, but ARM64 mode does not.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/6-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/rockchip: Convert to generic_single_device_group()  (Jason Gunthorpe)
Use the new helper.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/5-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/sprd: Convert to generic_single_device_group()  (Jason Gunthorpe)
Use the new helper.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/4-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu/sun50i: Convert to generic_single_device_group()  (Jason Gunthorpe)
Use the new helper.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Link: https://lore.kernel.org/r/3-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu: Add generic_single_device_group()  (Jason Gunthorpe)
This implements the common pattern seen in drivers of a single iommu_group for the entire iommu driver instance. Implement this in core code so the drivers that want this can select it from their ops.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/2-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
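As the follow-up conversions above suggest, adoption by a driver reduces to pointing its .device_group op at the core helper. A minimal sketch (the ops struct is illustrative; the helper name matches the commit):

```c
/* Sketch: every device probed by this driver instance lands in
 * one shared iommu_group, managed by core code. */
static const struct iommu_ops sketch_ops = {
	/* ... other ops ... */
	.device_group = generic_single_device_group,
};
```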
2023-09-25  iommu: Remove useless group refcounting  (Jason Gunthorpe)
Several functions obtain the group reference and then release it before returning. This gives the impression that the refcount is protecting something for the duration of the function. In truth all of these functions are called in places that know a device driver is probed to the device, and our locking rules already require that dev->iommu_group cannot change while a driver is attached to the struct device. If this was not the case then this code is already at risk of triggering UAF, as it is racy if the dev->iommu_group is concurrently going to NULL/free. refcount debugging will throw a WARN if kobject_get() is called on a 0 refcount object to highlight the bug. Remove the confusing refcounting and leave behind a comment about the restriction.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/1-v1-c869a95191f2+5e8-iommu_single_grp_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-09-25  iommu: Convert remaining simple drivers to domain_alloc_paging()  (Jason Gunthorpe)
These drivers don't support IOMMU_DOMAIN_DMA, so this commit effectively allows them to support that mode. The prior work to require default_domains makes this safe because every one of these drivers is either compilation incompatible with dma-iommu.c, or already establishing a default_domain. In both cases alloc_domain() will never be called with IOMMU_DOMAIN_DMA for these drivers, so it is safe to drop the test. Removing these tests clarifies that the domain allocation path is only about the functionality of a paging domain and has nothing to do with policy of how the paging domain is used for UNMANAGED/DMA/DMA_FQ.

Tested-by: Niklas Schnelle <schnelle@linux.ibm.com>
Tested-by: Steven Price <steven.price@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/24-v8-81230027b2fa+9d-iommu_all_defdom_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>