path: root/drivers/iommu
2023-12-13  iommu/arm-smmu: Convert to a global static identity domain (Jason Gunthorpe)

Create a global static identity domain with its own arm_smmu_attach_dev_identity() that simply calls arm_smmu_master_install_s2crs() with the identity parameters, giving the identity attach path its own unique implementation. Remove ARM_SMMU_DOMAIN_BYPASS and all checks of IOMMU_DOMAIN_IDENTITY.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/2-v2-c86cc8c2230e+160bb-smmu_newapi_jgg@nvidia.com
[will: Move duplicated autosuspend logic into a helper function]
Signed-off-by: Will Deacon <will@kernel.org>
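A minimal sketch of the resulting shape (illustrative only; arm_smmu_master_install_s2crs() is named by the commit, but the exact parameters and field names here are assumptions):

    static int arm_smmu_attach_dev_identity(struct iommu_domain *domain,
                                            struct device *dev)
    {
            struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);

            /* Program bypass S2CRs directly; no arm_smmu_domain is needed */
            arm_smmu_master_install_s2crs(cfg, S2CR_TYPE_BYPASS, 0);
            return 0;
    }

    static const struct iommu_domain_ops arm_smmu_identity_ops = {
            .attach_dev = arm_smmu_attach_dev_identity,
    };

    static struct iommu_domain arm_smmu_identity_domain = {
            .type = IOMMU_DOMAIN_IDENTITY,
            .ops = &arm_smmu_identity_ops,
    };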
2023-12-13  iommu/arm-smmu: Reorganize arm_smmu_domain_add_master() (Jason Gunthorpe)

Make arm_smmu_domain_add_master() take the s2cr configuration as a parameter instead of deriving it from the smmu_domain. It always returns zero, so make it return void. Since it no longer really has anything to do with a domain, rename it to arm_smmu_master_install_s2crs(). This is done so the next two patches can reuse this code without forcing the creation of a struct arm_smmu_domain.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/1-v2-c86cc8c2230e+160bb-smmu_newapi_jgg@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-13  iommu/arm-smmu-v3: Remove ARM_SMMU_DOMAIN_NESTED (Jason Gunthorpe)

Currently this is exactly the same as ARM_SMMU_DOMAIN_S2, so just remove it. The ongoing work to add nesting support through iommufd will do something a little different.

Reviewed-by: Moritz Fischer <mdf@kernel.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-13  iommu/arm-smmu-v3: Master cannot be NULL in arm_smmu_write_strtab_ent() (Jason Gunthorpe)

The only caller is arm_smmu_install_ste_for_dev(), which never has a NULL master. Remove the confusing if.

Reviewed-by: Moritz Fischer <mdf@kernel.org>
Reviewed-by: Michael Shavit <mshavit@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-13  iommu/arm-smmu-v3: Add a type for the STE (Jason Gunthorpe)

Instead of passing a naked __le64 * around to represent a STE, wrap it in a "struct arm_smmu_ste" with an array of the correct size. This makes it much clearer which functions will comprise the "STE API".

Reviewed-by: Moritz Fischer <mdf@kernel.org>
Reviewed-by: Michael Shavit <mshavit@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Will Deacon <will@kernel.org>
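A minimal sketch of the wrapper (STRTAB_STE_DWORDS is the driver's existing 8-dword STE size constant):

    struct arm_smmu_ste {
            __le64 data[STRTAB_STE_DWORDS];
    };

    /* STE API functions now take the wrapped type instead of __le64 *: */
    static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master,
                                          u32 sid, struct arm_smmu_ste *dst);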
2023-12-12  iommu/arm-smmu-v3: disable stall for quiet_cd (Wenkai Lin)

In the stall model, invalid transactions are expected to be stalled and aborted by the IOPF handler. However, when killing a test case with a huge amount of data, the accelerator stream cannot stop until all data is consumed, even if the page fault handler reports errors. As a result, the kill may take a long time, about 10 seconds, with numerous IOPF interrupts. So disable stall for quiet_cd in the non-force stall model, since the force stall model (STALL_MODEL == 0b10) requires CD.S to be 1.

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Suggested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20231206005727.46150-1-zhangfei.gao@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-12  iommu/qcom: restore IOMMU state if needed (Vladimir Lypak)

If the IOMMU has a power domain, then some state will be lost in qcom_iommu_suspend, and TZ will reset the device if we don't call qcom_scm_restore_sec_cfg before accessing it again.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
[luca@z3ntu.xyz: reword commit message a bit]
Signed-off-by: Luca Weiss <luca@z3ntu.xyz>
Link: https://lore.kernel.org/r/20231011-msm8953-iommu-restore-v1-1-48a0c93809a2@z3ntu.xyz
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-12  iommu/arm-smmu-qcom: Add QCM2290 MDSS compatible (Konrad Dybcio)

Add the QCM2290 MDSS compatible to the clients compatible list, as it also needs the workarounds.

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Link: https://lore.kernel.org/r/20231125-topic-rb1_feat-v3-5-4cbb567743bb@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-12  iommu/arm-smmu-qcom: Add missing GMU entry to match table (Rob Clark)

In some cases the firmware expects cbndx 1 to be assigned to the GMU, so we also want the default domain for the GMU to be an identity domain. This way it does not get a context bank assigned. Without this, both of_dma_configure() and drm/msm's iommu_domain_attach() will trigger allocating and configuring a context bank. So the GMU ends up attached to both cbndx 1 and later cbndx 2. This arrangement seemingly confounds and surprises the firmware if the GPU later triggers a translation fault, resulting (on sc8280xp / Lenovo X13s, at least) in the SMMU getting wedged and the GPU stuck without memory access.

Cc: stable@vger.kernel.org
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Johan Hovold <johan+linaro@kernel.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20231210180655.75542-1-robdclark@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2023-12-12  iommu: Mark dev_iommu_priv_set() with a lockdep (Jason Gunthorpe)

A perfect driver would only call dev_iommu_priv_set() from its probe callback. We've made it functionally correct to call it from of_xlate by adding a lock around that call. Add a lockdep assertion that iommu_probe_device_lock is held, to discourage misuse. Exclude PPC kernels with CONFIG_FSL_PAMU turned on, because FSL_PAMU uses a global static for its priv and abuses priv for its domain. Remove the pointless stores of NULL; all of these are on paths where the core code will free dev->iommu after the op returns.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/5-v2-16e4def25ebb+820-iommu_fwspec_p1_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
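The resulting helper looks roughly like this (a sketch; the FSL_PAMU carve-out follows the exclusion described above):

    void dev_iommu_priv_set(struct device *dev, void *priv)
    {
            /* FSL_PAMU abuses priv and cannot honor the locking rule */
            if (!IS_ENABLED(CONFIG_FSL_PAMU))
                    lockdep_assert_held(&iommu_probe_device_lock);

            dev->iommu->priv = priv;
    }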
2023-12-12  iommu: Mark dev_iommu_get() with lockdep (Jason Gunthorpe)

Allocation of dev->iommu must be done under the iommu_probe_device_lock. Mark this with lockdep to discourage future mistakes.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Tested-by: Hector Martin <marcan@marcan.st>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Moritz Fischer <moritzf@google.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/4-v2-16e4def25ebb+820-iommu_fwspec_p1_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-12  iommu/of: Use -ENODEV consistently in of_iommu_configure() (Jason Gunthorpe)

Instead of returning 1 and trying to handle positive error codes, just stick to the convention of returning -ENODEV. Remove references to ops from of_iommu_configure(); a NULL ops will already generate an error code. There is no reason to check dev->bus: if err == 0 at this point, then the called configure functions thought there was an iommu and we should try to probe it. Remove it.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Moritz Fischer <moritzf@google.com>
Tested-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/3-v2-16e4def25ebb+820-iommu_fwspec_p1_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-12  iommu/of: Do not return struct iommu_ops from of_iommu_configure() (Jason Gunthorpe)

Nothing needs this pointer. Return a normal error code with the usual IOMMU semantic that -ENODEV means 'there is no IOMMU driver'.

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Acked-by: Rob Herring <robh@kernel.org>
Tested-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/2-v2-16e4def25ebb+820-iommu_fwspec_p1_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-12  iommu: Set owner token to SVA domain (Lu Baolu)

Commit a9c362db3920 ("iommu: Validate that devices match domains") added an owner token to the iommu_domain. This token is checked during domain attachment to RID or PASID through the generic iommu interfaces. The SVA domains are attached to PASIDs through those iommu interfaces. Therefore, they require the owner token to be set during allocation. Otherwise, they fail to attach. Set the owner token for SVA domains.

Fixes: a9c362db3920 ("iommu: Validate that devices match domains")
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231208015314.320663-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
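A sketch of where the token ends up being set (illustrative; the SVA allocation flow is condensed, and the surrounding details are assumptions):

    struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
                                                struct mm_struct *mm)
    {
            const struct iommu_ops *ops = dev_iommu_ops(dev);
            struct iommu_domain *domain;

            domain = ops->domain_alloc(IOMMU_DOMAIN_SVA);
            if (!domain)
                    return NULL;

            domain->type = IOMMU_DOMAIN_SVA;
            domain->owner = ops;    /* the fix: token checked at attach time */
            mmgrab(mm);
            domain->mm = mm;
            return domain;
    }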
2023-12-12  iommu: Support mm PASID 1:n with sva domains (Tina Zhang)

Each mm bound to devices gets a PASID and corresponding sva domains allocated in iommu_sva_bind_device(), which are referenced by the iommu_mm field of the mm. The PASID is released in __mmdrop(), while a sva domain is released when no one is using it (the reference count is decremented in iommu_sva_unbind_device()). However, although sva domains and their PASID are separate objects whose life cycles could be handled independently, an enqcmd use case may require releasing the PASID when releasing the mm (i.e., once a PASID is allocated for a mm, it will be used permanently by the mm and won't be released until the end of the mm) and only allows dropping the PASID after the sva domains are released. To this end, mmgrab() is called in iommu_sva_domain_alloc() to increment the mm reference count, and mmdrop() is invoked in iommu_domain_free() to decrement it. Since the required info about the PASID and sva domains is kept in struct iommu_mm_data of a mm, use the mm->iommu_mm field instead of the old pasid field in the mm struct. The sva domain list is protected by iommu_sva_lock. Besides, this patch removes mm_pasid_init(), as with the introduced iommu_mm structure, initializing the mm pasid in mm_init() is unnecessary.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231027000525.1278806-6-tina.zhang@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
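The per-mm state described above boils down to something like this sketch (field names per the commit description):

    struct iommu_mm_data {
            u32                     pasid;          /* freed in __mmdrop() */
            struct list_head        sva_domains;    /* protected by iommu_sva_lock */
    };

    /* Each sva domain pins the mm so the PASID stays valid for its lifetime:
     *   iommu_sva_domain_alloc():  mmgrab(mm);
     *   iommu_domain_free():       mmdrop(domain->mm);
     */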
2023-12-12  iommu: Add mm_get_enqcmd_pasid() helper function (Tina Zhang)

mm_get_enqcmd_pasid() should be used by architecture code, and code closely related to it, to learn the PASID value that the x86 ENQCMD operation should use for the mm. For the moment SMMUv3 uses this without any connection to ENQCMD; it will be cleaned up similar to how the prior patch made VT-d use the PASID argument of set_dev_pasid(). The motivation is to replace mm->pasid with an iommu private data structure that is introduced in a later patch.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231027000525.1278806-4-tina.zhang@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
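At this point in the series the helper is a trivial accessor (the later patch mentioned above reroutes it through the iommu private data):

    static inline u32 mm_get_enqcmd_pasid(struct mm_struct *mm)
    {
            return mm->pasid;
    }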
2023-12-12  iommu/vt-d: Remove mm->pasid in intel_sva_bind_mm() (Tina Zhang)

The pasid is passed in as a parameter through the .set_dev_pasid() callback. Thus, intel_sva_bind_mm() can directly use it instead of retrieving the pasid value from mm->pasid.

Suggested-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231027000525.1278806-3-tina.zhang@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-12  iommu: Change kconfig around IOMMU_SVA (Jason Gunthorpe)

Linus suggested that the kconfig here is confusing:
https://lore.kernel.org/all/CAHk-=wgUiAtiszwseM1p2fCJ+sC4XWQ+YN4TanFhUgvUqjr9Xw@mail.gmail.com/

Let's break it into three kconfigs controlling distinct things:

 - CONFIG_IOMMU_MM_DATA controls if the mm_struct has the additional fields for the IOMMU. Currently only PASID, but later patches store a struct iommu_mm_data *
 - CONFIG_ARCH_HAS_CPU_PASID controls if the arch needs the scheduling bit for keeping track of the ENQCMD instruction. x86 will select this if IOMMU_SVA is enabled
 - IOMMU_SVA controls if the IOMMU core compiles in the SVA support code for iommu driver use and the IOMMU exported API

This way ARM will not enable CONFIG_ARCH_HAS_CPU_PASID. A Kconfig sketch of the dependency shape follows below.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231027000525.1278806-2-tina.zhang@intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
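A hedged Kconfig sketch of the resulting dependency shape (the exact select wiring on the x86 side is an assumption based on the description above):

    config IOMMU_MM_DATA
            bool

    config ARCH_HAS_CPU_PASID
            bool
            select IOMMU_MM_DATA

    config IOMMU_SVA
            bool

    # arch/x86/Kconfig, roughly:
    #   select ARCH_HAS_CPU_PASID if IOMMU_SVA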
2023-12-11  iommu/apple-dart: Fix spelling mistake "grups" -> "groups" (Colin Ian King)

There is a spelling mistake in a dev_err message. Fix it.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Link: https://lore.kernel.org/r/20231209231240.4056082-1-colin.i.king@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd/pgtbl_v2: Invalidate updated page ranges only (Vasant Hegde)

Enhance __domain_flush_pages() to detect the domain page table mode and use that info to build invalidation commands, so that amd_iommu_domain_flush_pages() can be used to invalidate the v2 page table. Also pass the PASID and gn variables to device_flush_iotlb() so that it can build IOTLB invalidation commands for both v1 and v2 page tables.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-10-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Make domain_flush_pages as global function (Vasant Hegde)

- Rename domain_flush_pages() -> amd_iommu_domain_flush_pages() and make it a global function.
- Rename amd_iommu_domain_flush_tlb_pde() -> amd_iommu_domain_flush_all() and make it static.
- Convert the v1 page table (io_pgtable.c) to use amd_iommu_domain_flush_pages().

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-9-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
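Together with the consolidation in the next entry, the exported helper ends up roughly as this sketch:

    /* Flush the given range on the domain and wait for completion */
    void amd_iommu_domain_flush_pages(struct protection_domain *domain,
                                      u64 address, size_t size)
    {
            __domain_flush_pages(domain, address, size);
            amd_iommu_domain_flush_complete(domain);
    }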
2023-12-11  iommu/amd: Consolidate amd_iommu_domain_flush_complete() call (Vasant Hegde)

Call amd_iommu_domain_flush_complete() from domain_flush_pages(). That way we can remove the explicit calls of amd_iommu_domain_flush_complete() from various places.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-8-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Refactor device iotlb invalidation code (Vasant Hegde)

build_inv_iotlb_pages() and build_inv_iotlb_pasid() pretty much duplicate the code. Enhance build_inv_iotlb_pages() to invalidate the guest IOTLB as well, and remove the build_inv_iotlb_pasid() function.

Suggested-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-7-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Refactor IOMMU tlb invalidation code (Vasant Hegde)

build_inv_iommu_pages() and build_inv_iommu_pasid() pretty much duplicate the code. Hence enhance build_inv_iommu_pages() to invalidate guest pages as well, and remove build_inv_iommu_pasid().

Suggested-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-6-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Add support to invalidate multiple guest pages (Vasant Hegde)

The current interface supports invalidating a single page or the entire guest translation information for a single process address space. The IOMMU CMD_INV_IOMMU_PAGES and CMD_INV_IOTLB_PAGES commands support invalidating a range of pages. Add support to invalidate multiple pages. This is a preparatory patch before consolidating the host and guest invalidation code into a single function; the following patches will consolidate the TLB invalidation code.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-5-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
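A simplified sketch of the range encoding used by these commands, patterned on the driver's address-building logic (illustrative; the real code also handles the flush-all case):

    /* Encode [address, address+size) for INVALIDATE_*_PAGES: fill every
     * address bit below the highest bit that differs across the range,
     * then set the size bit so hardware treats the address as a range. */
    static u64 build_range_address(u64 address, size_t size)
    {
            u64 end = address + size - 1;
            u64 msb_diff;

            if (size <= PAGE_SIZE)
                    return address & PAGE_MASK;     /* single page, S=0 */

            msb_diff = fls64(end ^ address) - 1;    /* highest differing bit */
            address |= (1ULL << msb_diff) - 1;      /* fill low bits */
            address &= PAGE_MASK;                   /* keep bits 63:12 */
            return address | CMD_INV_IOMMU_PAGES_SIZE_MASK;
    }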
2023-12-11  iommu/amd: Remove redundant passing of PDE bit (Vasant Hegde)

The current code always sets the PDE bit in the INVALIDATE_IOMMU_PAGES command. Hence get rid of the 'pde' variable across functions. We can re-introduce this bit whenever it is needed.

Suggested-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Remove redundant domain flush from attach_device() (Vasant Hegde)

A domain flush was introduced in the attach_device() path to handle the kdump scenario. Later, the init code was enhanced to handle kdump as well, where it also takes care of flushing everything including the TLB (see early_enable_iommus()). Hence remove the redundant flush from the attach_device() function.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-3-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Rename iommu_flush_all_caches() -> amd_iommu_flush_all_caches() (Vasant Hegde)

Rename the function in line with the driver naming convention. No functional changes.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20231122090215.6191-2-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-11  iommu/amd: Do not flush IRTE when only updating isRun and destination fields (Suravee Suthikulpanit)

According to the recent update in the AMD IOMMU spec [1], the IsRun and Destination fields of the Interrupt Remapping Table Entry (IRTE) are not cached by the IOMMU hardware. Therefore, do not issue the INVALIDATE_INTERRUPT_TABLE command when updating IRTE[IsRun] and IRTE[Destination] when IRTE[GuestMode]=1, which should help improve IOMMU AVIC/x2AVIC performance.

References:
[1] AMD IOMMU Spec Revision (Rev 3.08-PUB)
    https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/specifications/48882_IOMMU.pdf

Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Tested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Link: https://lore.kernel.org/r/20231017144236.8287-1-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-12-05  Merge tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd (Linus Torvalds)

Pull iommufd fixes from Jason Gunthorpe:

 - A small fix for the dirty tracking self test to fail correctly if the code is buggy
 - Fix a tricky syzkaller race UAF with object reference counting

* tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd:
  iommufd: Do not UAF during iommufd_put_object()
  iommufd: Add iommufd_ctx to iommufd_put_object()
  iommufd/selftest: Fix _test_mock_dirty_bitmaps()
2023-12-01  iommu: Fix printk arg in of_iommu_get_resv_regions() (Daniel Mentz)

The variable phys is defined as (struct resource *), which aligns with the printk format specifier %pr. Taking the address of it results in a value of type (struct resource **), which is incompatible with the format specifier %pr. Therefore, remove the address-of operator (&).

Fixes: a5bf3cfce8cb ("iommu: Implement of_iommu_get_resv_regions()")
Signed-off-by: Daniel Mentz <danielmentz@google.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Link: https://lore.kernel.org/r/20231108062226.928985-1-danielmentz@google.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
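The bug in miniature (%pr consumes a struct resource *, so taking the variable's address passes the wrong type):

    static void print_resv_region(struct resource *phys)
    {
            pr_debug("region: %pr\n", phys);        /* correct */
            /* pr_debug("region: %pr\n", &phys);       wrong: resource ** */
    }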
2023-11-29  iommufd: Do not UAF during iommufd_put_object() (Jason Gunthorpe)

The mixture of kernel and user space lifecycle objects continues to be complicated inside iommufd. The obj->destroy_rwsem is used to bring order to the kernel driver destruction sequence but it cannot be sequenced right with the other refcounts, so we end up possibly UAF'ing:

  BUG: KASAN: slab-use-after-free in __up_read+0x627/0x750 kernel/locking/rwsem.c:1342
  Read of size 8 at addr ffff888073cde868 by task syz-executor934/6535
  CPU: 1 PID: 6535 Comm: syz-executor934 Not tainted 6.6.0-rc7-syzkaller-00195-g2af9b20dbb39 #0
  Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
  Call Trace:
   <TASK>
   __dump_stack lib/dump_stack.c:88 [inline]
   dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
   print_address_description mm/kasan/report.c:364 [inline]
   print_report+0xc4/0x620 mm/kasan/report.c:475
   kasan_report+0xda/0x110 mm/kasan/report.c:588
   __up_read+0x627/0x750 kernel/locking/rwsem.c:1342
   iommufd_put_object drivers/iommu/iommufd/iommufd_private.h:149 [inline]
   iommufd_vfio_ioas+0x46c/0x580 drivers/iommu/iommufd/vfio_compat.c:146
   iommufd_fops_ioctl+0x347/0x4d0 drivers/iommu/iommufd/main.c:398
   vfs_ioctl fs/ioctl.c:51 [inline]
   __do_sys_ioctl fs/ioctl.c:871 [inline]
   __se_sys_ioctl fs/ioctl.c:857 [inline]
   __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
   do_syscall_x64 arch/x86/entry/common.c:50 [inline]
   do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
   entry_SYSCALL_64_after_hwframe+0x63/0xcd

There are two races here, the more obvious one:

     CPU 0                            CPU 1
  iommufd_put_object()
                                   iommufd_destroy()
   refcount_dec(&obj->users)
                                    iommufd_object_remove()
                                    kfree()
   up_read(&obj->destroy_rwsem)    // Boom

And there is also perhaps some possibility that the rwsem could hit an issue:

     CPU 0                                          CPU 1
  iommufd_put_object()
                                                 iommufd_object_destroy_user()
   refcount_dec(&obj->users);
                                                  down_write(&obj->destroy_rwsem)
   up_read(&obj->destroy_rwsem);
    atomic_long_or(RWSEM_FLAG_WAITERS, &sem->count);
                                                   tmp = atomic_long_add_return_release()
                                                   rwsem_try_write_lock()
                                                   iommufd_object_remove()
                                                   up_write(&obj->destroy_rwsem)
                                                   kfree()
    clear_nonspinnable()                         // Boom

Fix this by reorganizing this again so that two refcounts are used to keep track of things, with a rule that users == 0 && shortterm_users == 0 means no other threads have that memory. Put a wait_queue in the iommufd_ctx object that is triggered when any sub object reaches a 0 shortterm_users. This allows the same "wait for userspace ioctls to finish" behavior that the rwsem was providing.

This is weaker still than the prior versions:

 - There is no bias on shortterm_users, so if some thread is waiting to destroy, other threads can continue to get new read sides
 - If destruction fails, eg because of an active in-kernel user, then shortterm_users will have cycled to zero momentarily, blocking new users
 - If userspace races destroy with other userspace operations they continue to get an EBUSY since we still can't intermix looking up an ID and sleeping for its unref

In all cases these are things that userspace brings on itself; correct programs will not hit them.

Fixes: 99f98a7c0d69 ("iommufd: IOMMUFD_DESTROY should not increase the refcount")
Link: https://lore.kernel.org/all/2-v2-ca9e00171c5b+123-iommufd_syz4_jgg@nvidia.com/
Reported-by: syzbot+d31adfb277377ef8fcba@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/r/00000000000055ef9a0609336580@google.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
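The reworked put path ends up roughly as below (a sketch; the destroy side's waiting on ictx->destroy_wait is elided):

    static inline void iommufd_put_object(struct iommufd_ctx *ictx,
                                          struct iommufd_object *obj)
    {
            /* Drop users first; obj may be freed by a racing destroy the
             * moment both counts can reach zero. */
            refcount_dec(&obj->users);

            /* Waking the ctx-wide queue replaces the old rwsem's "wait
             * for ioctls to finish" behavior. */
            if (refcount_dec_and_test(&obj->shortterm_users))
                    wake_up_interruptible_all(&ictx->destroy_wait);
    }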
2023-11-29  iommufd: Add iommufd_ctx to iommufd_put_object() (Jason Gunthorpe)

Will be used in the next patch.

Link: https://lore.kernel.org/r/1-v2-ca9e00171c5b+123-iommufd_syz4_jgg@nvidia.com/
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2023-11-27  Merge branch 'iommu/fixes' into core (Joerg Roedel)
2023-11-27  iommu/apple-dart: Use readl instead of readl_relaxed for consistency (Sven Peter)

While the readl_relaxed in apple_dart_suspend is correct, the rest of the driver uses the non-relaxed variants everywhere, and the single readl_relaxed is inconsistent and possibly confusing.

Signed-off-by: Sven Peter <sven@svenpeter.dev>
Acked-by: Hector Martin <marcan@marcan.st>
Reviewed-by: Neal Gompa <neal@gompa.dev>
Link: https://lore.kernel.org/r/20231126162009.17934-1-sven@svenpeter.dev
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/apple-dart: Add support for t8103 USB4 DART (Sven Peter)

This variant of the regular t8103 DART is used for the two USB4/Thunderbolt PCIe controllers. It supports 64 instead of 16 streams, which requires a slightly different MMIO layout.

Acked-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Sven Peter <sven@svenpeter.dev>
Link: https://lore.kernel.org/r/20231126151701.16534-4-sven@svenpeter.dev
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/apple-dart: Write to all DART_T8020_STREAM_SELECT (Sven Peter)

We're about to add support for a DART variant that uses more than 16 streams and requires writing to two separate stream select registers when issuing TLB flushes.

Acked-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Sven Peter <sven@svenpeter.dev>
Link: https://lore.kernel.org/r/20231126151701.16534-3-sven@svenpeter.dev
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu: Extend LPAE page table format to support custom allocators (Boris Brezillon)

We need that in order to implement the VM_BIND ioctl in the GPU driver targeting new Mali GPUs. VM_BIND is about executing MMU map/unmap requests asynchronously, possibly after waiting for external dependencies encoded as dma_fences. We intend to use the drm_sched framework to automate the dependency tracking and VM job dequeuing logic, but this comes with its own set of constraints, one of them being the fact we are not allowed to allocate memory in drm_gpu_scheduler_ops::run_job(), to avoid this sort of deadlock:

 - A VM_BIND map job needs to allocate a page table to map some memory to the VM. No memory is available, so kswapd is kicked
 - The GPU driver shrinker backend ends up waiting on the fence attached to the VM map job, or any other job fence depending on this VM operation

With custom allocators, we will be able to pre-reserve enough pages to guarantee the map/unmap operations we queued will take place without going through the system allocator. But we can also optimize allocation/reservation by not freeing pages immediately, so any upcoming page table allocation requests can be serviced by some free page table pool kept at the driver level. It might also be valuable for other aspects of GPU and similar use-cases, like fine-grained memory accounting and resource limiting.

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20231124142434.1577550-3-boris.brezillon@collabora.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu: Allow passing custom allocators to pgtable drivers (Boris Brezillon)

This will be useful for GPU drivers who want to keep page tables in a pool so they can:

 - keep freed page tables in a free pool and speed-up upcoming page table allocations
 - batch page table allocation instead of allocating one page at a time
 - pre-reserve pages for page tables needed for map/unmap operations, to ensure map/unmap operations don't try to allocate memory in paths where they're not allowed to block or fail

It might also be valuable for other aspects of GPU and similar use-cases, like fine-grained memory accounting and resource limiting. We will extend the Arm LPAE format to support custom allocators in a separate commit.

Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20231124142434.1577550-2-boris.brezillon@collabora.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
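A sketch of how a GPU driver might wire its pool into the page-table config (the .alloc/.free callback signatures follow the commit; the gpu_* names are placeholders):

    struct io_pgtable_cfg cfg = {
            .pgsize_bitmap  = SZ_4K | SZ_2M,
            .ias            = 48,
            .oas            = 40,
            .coherent_walk  = true,
            .tlb            = &gpu_flush_ops,
            /* Optional hooks: page-table pages come from a driver pool */
            .alloc          = gpu_pt_alloc, /* void *(void *cookie, size_t size, gfp_t gfp) */
            .free           = gpu_pt_free,  /* void (void *cookie, void *pages, size_t size) */
    };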
2023-11-27  iommu/vt-d: Set variable intel_dirty_ops to static (Kunwu Chan)

Fix the following warning:

  drivers/iommu/intel/iommu.c:302:30: warning: symbol 'intel_dirty_ops' was not declared. Should it be static?

This variable is only used in its defining file, so it should be static.

Fixes: f35f22cc760e ("iommu/vt-d: Access/Dirty bit support for SS domains")
Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Link: https://lore.kernel.org/r/20231120101025.1103404-1-chentao@kylinos.cn
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/vt-d: Fix incorrect cache invalidation for mm notification (Lu Baolu)

Commit 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs") moved the secondary TLB invalidations into the TLB invalidation functions to ensure that all secondary TLB invalidations happen at the same time as the CPU invalidation, and added a flush-all type of secondary TLB invalidation for the batched mode, where a range of [0, -1UL) is used to indicate that the range extends to the end of the address space. However, using an end address of -1UL caused an overflow in the Intel IOMMU driver, where the end address was rounded up to the next page. As a result, both the IOTLB and the device ATC were not invalidated correctly. Add a flush-all helper function and call it when the invalidation range is from 0 to -1UL, ensuring that the entire caches are invalidated correctly.

Fixes: 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs")
Cc: stable@vger.kernel.org
Cc: Huang Ying <ying.huang@intel.com>
Cc: Alistair Popple <apopple@nvidia.com>
Tested-by: Luo Yuzhang <yuzhang.luo@intel.com> # QAT
Tested-by: Tony Zhu <tony.zhu@intel.com> # DSA
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20231117090933.75267-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
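The guard added ahead of the per-page rounding looks like this in miniature (a sketch; the helper name follows the "flush all" description above and is an assumption):

    /* A [0, -1UL) notification means "everything": do not round it
     * page-by-page, which overflows the end address to zero. */
    if (start == 0 && end == ULONG_MAX) {
            intel_flush_svm_all(svm);
            return;
    }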
2023-11-27  iommu/vt-d: Add MTL to quirk list to skip TE disabling (Abdul Halim, Mohd Syazwan)

The VT-d spec requires (10.4.4 Global Command Register, TE field) that: "Hardware implementations supporting DMA draining must drain any in-flight DMA read/write requests queued within the Root-Complex before switching address translation on or off and reflecting the status of the command through the TES field in the Global Status register." Unfortunately, some integrated graphics devices fail to do so after some kind of power state transition. As a result, the system might get stuck in iommu_disable_translation(), waiting for the completion of the TE transition. Add MTL to the quirk list for those devices and skip TE disabling if the quirk hits.

Fixes: b1012ca8dc4f ("iommu/vt-d: Skip TE disabling on quirky gfx dedicated iommu")
Cc: stable@vger.kernel.org
Signed-off-by: Abdul Halim, Mohd Syazwan <mohd.syazwan.abdul.halim@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20231116022324.30120-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/vt-d: Make context clearing consistent with context mapping (Lu Baolu)

In the iommu probe_device path, domain_context_mapping() allows setting up the context entry for a non-PCI device. However, in the iommu release_device path, domain_context_clear() only clears context entries for PCI devices. Make domain_context_clear() behave consistently with domain_context_mapping() by clearing context entries for both PCI and non-PCI devices.

Fixes: 579305f75d34 ("iommu/vt-d: Update to use PCI DMA aliases")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20231114011036.70142-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/vt-d: Disable PCI ATS in legacy passthrough mode (Lu Baolu)

When IOMMU hardware operates in legacy mode, the TT field of the context entry determines the translation type, with three supported types (Section 9.3 Context Entry):

 - DMA translation without device TLB support
 - DMA translation with device TLB support
 - Passthrough mode with translated and translation requests blocked

Device TLB support is absent when hardware is configured in passthrough mode. Disable the PCI ATS feature when the IOMMU is configured for the passthrough translation type in legacy (non-scalable) mode.

Fixes: 0faa19a1515f ("iommu/vt-d: Decouple PASID & PRI enabling from SVA")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20231114011036.70142-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/vt-d: Omit devTLB invalidation requests when TES=0 (Lu Baolu)

The latest VT-d spec indicates that when remapping hardware is disabled (TES=0 in the Global Status Register), upstream ATS Invalidation Completion requests are treated as UR (Unsupported Request). Consequently, the spec recommends in section 4.3, Handling of Device-TLB Invalidations, that software refrain from submitting any Device-TLB invalidation requests when address remapping hardware is disabled. Verify that address remapping hardware is enabled prior to submitting Device-TLB invalidation requests.

Fixes: 792fb43ce2c9 ("iommu/vt-d: Enable Intel IOMMU scalable mode by default")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20231114011036.70142-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
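The check amounts to a one-line bail-out in the devTLB invalidation path (a sketch; iommu->gcmd is the driver's software copy of the Global Command register):

    /* TES=0: device-TLB invalidations would be treated as UR */
    if (!(iommu->gcmd & DMA_GCMD_TE))
            return;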
2023-11-27  iommu/vt-d: Support enforce_cache_coherency only for empty domains (Lu Baolu)

The enforce_cache_coherency callback ensures DMA cache coherency for devices attached to the domain. Intel IOMMU supports enforced DMA cache coherency when the Snoop Control bit in the IOMMU's extended capability register is set. Support differs between legacy and scalable modes: in legacy mode, it's supported at page level by setting the SNP field in second-stage page-table entries; in scalable mode, it's supported at PASID-table granularity by setting the PGSNP field in PASID-table entries. In legacy mode, mappings created before attaching to a device have SNP fields cleared, while mappings created after the callback have them set. This means some DMAs are cache coherent while others are not. One possible fix is replaying mappings and flipping SNP bits when attaching a domain to a device, but this seems over-engineered, given that all real use cases just attach an empty domain to a device. To meet practical needs while reducing mode differences, only support enforce_cache_coherency on a domain without mappings if the SNP field is used.

Fixes: fc0051cb9590 ("iommu/vt-d: Check domain force_snooping against attached devices")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20231114011036.70142-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
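A condensed sketch of the resulting callback (locking elided; has_mappings is assumed to be a flag the fix sets once anything is mapped):

    static bool intel_iommu_enforce_cache_coherency(struct iommu_domain *domain)
    {
            struct dmar_domain *dmar_domain = to_dmar_domain(domain);

            if (!domain_support_force_snooping(dmar_domain))
                    return false;

            /* Legacy mode sets SNP per PTE: refuse if PTEs already exist,
             * since earlier mappings would stay non-snooped. */
            if (!dmar_domain->use_first_level && dmar_domain->has_mappings)
                    return false;

            domain_set_force_snooping(dmar_domain);
            dmar_domain->force_snooping = true;
            return true;
    }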
2023-11-27  iommu: Clean up open-coded ownership checks (Robin Murphy)

Some drivers already implement their own defence against the possibility of being given someone else's device. Since this is now taken care of by the core code (and via a slightly different path from the original fwspec-based idea), let's clean them up.

Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/58a9879ce3f03562bb061e6714fe6efb554c3907.1700589539.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu: Retire bus ops (Robin Murphy)

With the rest of the API internals converted, it's time to finally tackle probe_device and how we bootstrap the per-device ops association to begin with. This ends up being disappointingly straightforward, since fwspec users are already doing it in order to find their of_xlate callback, and it works out that we can easily do the equivalent for other drivers too. Then shuffle the remaining awareness of iommu_ops into the couple of core headers that still need it, and breathe a sigh of relief. Ding dong the bus ops are gone!

CC: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/a59011ef65b4b6657cb0b7a388d786b779b61305.1700589539.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu/arm-smmu: Don't register fwnode for legacy binding (Robin Murphy)

When using the legacy binding we bypass the of_xlate mechanism, so avoid registering the instance fwnodes which act as keys for that. This will help __iommu_probe_device() to retrieve the registered ops the same way as for x86 etc. when no fwspec has previously been set up by of_xlate.

Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/18b0f812a42a74dd6924aea24e68ab409d6e1b52.1700589539.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2023-11-27  iommu: Decouple iommu_domain_alloc() from bus ops (Robin Murphy)

As the final remaining piece of bus-dependent API, iommu_domain_alloc() can now take responsibility for the "one iommu_ops per bus" rule for itself. It turns out we can't safely make the internal allocation call any more group-based or device-based yet - that will have to wait until the external callers can pass the right thing - but we can at least get as far as deriving "bus ops" based on which driver is actually managing devices on the given bus, rather than whichever driver won the race to register first. This will then leave us able to convert the last of the core internals over to the IOMMU-instance model, allow multiple drivers to register and actually coexist (modulo the above limitation for unmanaged domain users in the short term), and start trying to solve the long-standing iommu_probe_device() mess.

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/6c7313009aae0e39ae2855920990ebf85af4662f.1700589539.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>