path: root/drivers/iommu
2024-09-02  iommu/vt-d: Factor out invalidation descriptor composition  (Tina Zhang)

Separate the logic for constructing IOTLB and device TLB invalidation descriptors from the qi_flush interfaces. New helpers, qi_desc(), are introduced to encapsulate this common functionality. Moving descriptor composition code to new helpers enables its reuse in the upcoming qi_batch interfaces. No functional changes are intended.

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Link: https://lore.kernel.org/r/20240815065221.50328-2-tina.zhang@intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
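As a rough sketch of the factoring, the helper below mirrors what the existing qi_flush_iotlb() already computes; the exact helper name and signature in the patch are assumptions here:

	/* Compose an IOTLB invalidation descriptor in one place so that
	 * both qi_flush_iotlb() and a future qi_batch path can reuse it. */
	static void qi_desc_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
				  unsigned int size_order, u64 type,
				  struct qi_desc *desc)
	{
		u8 dw = 0, dr = 0;

		if (cap_write_drain(iommu->cap))
			dw = 1;
		if (cap_read_drain(iommu->cap))
			dr = 1;

		desc->qw0 = QI_IOTLB_DID(did) | QI_IOTLB_DR(dr) | QI_IOTLB_DW(dw)
			| QI_IOTLB_GRAN(type >> DMA_TLB_FLUSH_GRANU_OFFSET)
			| QI_IOTLB_TYPE;
		desc->qw1 = QI_IOTLB_ADDR(addr) | QI_IOTLB_IH(0)
			| QI_IOTLB_AM(size_order);
		desc->qw2 = 0;
		desc->qw3 = 0;
	}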
2024-09-02  iommu/vt-d: Unconditionally flush device TLB for pasid table updates  (Lu Baolu)

The caching mode of an IOMMU is irrelevant to the behavior of the device TLB. Previously, commit 304b3bde24b5 ("iommu/vt-d: Remove caching mode check before device TLB flush") removed this redundant check in the domain unmap path.

Checking the caching mode before flushing the device TLB after a pasid table entry is updated is unnecessary and can lead to inconsistent behavior. Extend this consistency by removing the caching mode check in the pasid table update path.

Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20240820030208.20020-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Move PCI PASID enablement to probe path  (Lu Baolu)

Currently, PCI PASID is enabled alongside PCI ATS when an iommu domain is attached to the device and disabled when the device transitions to block translation mode. This approach is inappropriate as PCI PASID is a device feature independent of the type of the attached domain.

Enable PCI PASID during the IOMMU device probe and disable it during the release path.

Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240819051805.116936-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Fix potential lockup if qi_submit_sync called with 0 count  (Sanjay K Kumar)

If qi_submit_sync() is invoked with 0 invalidation descriptors (for instance, for DMA draining purposes), we can run into a bug where a submitting thread fails to detect the completion of invalidation_wait. Subsequently, this leads to a soft lockup. Currently, this bug has no impact on the existing users because no callers are submitting invalidations with 0 descriptors. This fix will enable future users (such as DMA drain) to call qi_submit_sync() with 0 count.

Suppose thread T1 invokes qi_submit_sync() with non-zero descriptors, while concurrently, thread T2 calls qi_submit_sync() with zero descriptors. Both threads then enter a while loop, waiting for their respective descriptors to complete. T1 detects its completion (i.e., T1's invalidation_wait status changes to QI_DONE by HW) and proceeds to call reclaim_free_desc() to reclaim all descriptors, potentially including adjacent ones of other threads that are also marked as QI_DONE. During this time, while T2 is waiting to acquire qi->q_lock, the IOMMU hardware may complete the invalidation for T2, setting its status to QI_DONE. However, if T1's execution of reclaim_free_desc() frees T2's invalidation_wait descriptor and changes its status to QI_FREE, T2 will never observe the QI_DONE status for its invalidation_wait and will remain stuck indefinitely.

This soft lockup does not occur when only non-zero descriptors are submitted. In such cases, invalidation descriptors are interspersed among wait descriptors with the status QI_IN_USE, acting as barriers. These barriers prevent the reclaim code from mistakenly freeing descriptors belonging to other submitters.

Consider the following example timeline:

	 T1                        T2
	========================================
	 ID1
	 WD1
	 while(WD1!=QI_DONE)
	 unlock
	                           lock
	 WD1=QI_DONE*              WD2
	                           while(WD2!=QI_DONE)
	                           unlock
	 lock
	 WD1==QI_DONE?
	 ID1=QI_DONE               WD2=DONE*
	 reclaim()
	 ID1=FREE
	 WD1=FREE
	 WD2=FREE
	 unlock
	                           soft lockup! T2 never sees QI_DONE in WD2

Where:
	ID = invalidation descriptor
	WD = wait descriptor
	*  = written by hardware

The root of the problem is that the descriptor status QI_DONE flag is used for two conflicting purposes:
1. signal that a descriptor is ready for reclaim (to be freed)
2. signal by the hardware that a wait descriptor is complete

The solution (in this patch) is state separation, using the QI_FREE flag for #1. Once a thread's invalidation descriptors are complete, their status is set to QI_FREE. The reclaim_free_desc() function then only frees descriptors marked as QI_FREE instead of those marked as QI_DONE. This change ensures that T2 (from the previous example) will correctly observe the completion of its invalidation_wait (marked as QI_DONE).

Signed-off-by: Sanjay K Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240728210059.1964602-1-jacob.jun.pan@linux.intel.com
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
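A minimal sketch of the reclaim side after the state separation (ring bookkeeping simplified; it assumes the submitter marks its own slots QI_FREE after consuming the QI_DONE completion):

	/* Only advance the free tail over slots already released by their
	 * submitters. A wait descriptor still at QI_DONE belongs to a thread
	 * that has not consumed its completion yet, so leave it alone. */
	static void reclaim_free_desc(struct q_inval *qi)
	{
		while (qi->desc_status[qi->free_tail] == QI_FREE &&
		       qi->free_tail != qi->free_head) {
			qi->free_tail = (qi->free_tail + 1) % QI_LENGTH;
			qi->free_cnt++;
		}
	}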
2024-09-02  iommu/vt-d: Cleanup si_domain  (Lu Baolu)

The static identity domain has been introduced, rendering the si_domain obsolete. Remove si_domain and cleanup the code accordingly.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240809055431.36513-8-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Add support for static identity domain  (Lu Baolu)

Software determines VT-d hardware support for passthrough translation by inspecting the capability register. If passthrough translation is not supported, the device is instructed to use a DMA domain for its default domain.

Add a global static identity domain with guaranteed attach semantics for IOMMUs that support passthrough translation mode. The global static identity domain is a dummy domain without a corresponding dmar_domain structure. Consequently, the device's info->domain will be NULL when the identity domain is attached. Refactor the code accordingly.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240809055431.36513-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Factor out helpers from domain_context_mapping_one()  (Lu Baolu)

Extract common code from domain_context_mapping_one() into new helpers, making it reusable by other functions such as the upcoming identity domain implementation. No intentional functional changes.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20240809055431.36513-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Remove has_iotlb_device flag  (Lu Baolu)

The has_iotlb_device flag was used to indicate if a domain had attached devices with ATS enabled. Domains without this flag didn't require device TLB invalidation during unmap operations, optimizing performance by avoiding unnecessary device iteration.

With the introduction of cache tags, this flag is no longer needed. The code to iterate over attached devices was removed by commit 06792d067989 ("iommu/vt-d: Cleanup use of iommu_flush_iotlb_psi()").

Remove has_iotlb_device to avoid unnecessary code.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240809055431.36513-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Always reserve a domain ID for identity setup  (Lu Baolu)

We will use a global static identity domain. Reserve a static domain ID for it.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20240809055431.36513-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Remove identity mappings from si_domain  (Lu Baolu)

As the driver has enforced DMA domains for devices managed by an IOMMU hardware that doesn't support passthrough translation mode, there is no need for static identity mappings in the si_domain. Remove the identity mapping code to avoid dead code.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20240809055431.36513-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/vt-d: Require DMA domain if hardware not support passthrough  (Lu Baolu)

The iommu core defines the def_domain_type callback to query the iommu driver about hardware capability and quirks. The iommu driver should declare an IOMMU_DOMAIN_DMA requirement for hardware lacking pass-through capability.

Earlier VT-d hardware implementations did not support pass-through translation mode. Before def_domain_type was introduced, the iommu driver simulated pass-through translation with a paging domain in which all physical system memory addresses are identity-mapped, and this has been kept until now. It's time to adjust it so that the Intel iommu driver follows the def_domain_type semantics.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Link: https://lore.kernel.org/r/20240809055431.36513-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-09-02  iommu/tegra241-cmdqv: Fix -Wformat-truncation warnings in lvcmdq_error_header  (Nicolin Chen)

Kernel test robot reported a few truncation warnings at the snprintf:

	drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c: In function ‘tegra241_vintf_free_lvcmdq’:
	drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c:239:56: warning: ‘%u’ directive output may be truncated writing between 1 and 5 bytes into a region of size between 3 and 11 [-Wformat-truncation=]
	  239 |         snprintf(header, hlen, "VINTF%u: VCMDQ%u/LVCMDQ%u: ",
	      |                                                        ^~
	drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c:239:32: note: directive argument in the range [0, 65535]
	  239 |         snprintf(header, hlen, "VINTF%u: VCMDQ%u/LVCMDQ%u: ",
	      |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c:239:9: note: ‘snprintf’ output between 25 and 37 bytes into a destination of size 32
	  239 |         snprintf(header, hlen, "VINTF%u: VCMDQ%u/LVCMDQ%u: ",
	      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	  240 |                  vcmdq->vintf->idx, vcmdq->idx, vcmdq->lidx);

Fix by bumping up the size of the header to hold more characters.

Fixes: 918eb5c856f6 ("iommu/arm-smmu-v3: Add in-kernel support for NVIDIA Tegra241 (Grace) CMDQV")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202409020406.7ed5uojF-lkp@intel.com/
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/20240902055745.629456-1-nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
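The shape of the fix is simply sizing the buffer for the worst case the format can produce. A standalone illustration (the buffer name and size here are placeholders, not the driver's exact code):

	#include <stdio.h>

	/* 21 literal chars + three %u fields of up to 5 digits each (idx
	 * values are u16, max 65535) + NUL = 21 + 15 + 1 = 37 bytes.
	 * A 32-byte buffer truncates; 64 comfortably covers the worst case. */
	#define HEADER_LEN 64

	int main(void)
	{
		char header[HEADER_LEN];
		unsigned int vintf = 65535, vcmdq = 65535, lvcmdq = 65535;

		snprintf(header, sizeof(header), "VINTF%u: VCMDQ%u/LVCMDQ%u: ",
			 vintf, vcmdq, lvcmdq);
		puts(header);
		return 0;
	}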
2024-09-01  fault-inject: improve build for CONFIG_FAULT_INJECTION=n  (Jani Nikula)

The fault-inject.h users across the kernel need to add a lot of #ifdef CONFIG_FAULT_INJECTION to cater for shortcomings in the header. Make fault-inject.h self-contained for CONFIG_FAULT_INJECTION=n, and add stubs for DECLARE_FAULT_ATTR(), setup_fault_attr(), should_fail_ex(), and should_fail() to allow removal of conditional compilation.

[akpm@linux-foundation.org: repair fallout from no longer including debugfs.h into fault-inject.h]
[akpm@linux-foundation.org: fix drivers/misc/xilinx_tmr_inject.c]
[akpm@linux-foundation.org: Add debugfs.h inclusion to more files, per Stephen]

Link: https://lkml.kernel.org/r/20240813121237.2382534-1-jani.nikula@intel.com
Fixes: 6ff1cb355e62 ("[PATCH] fault-injection capabilities infrastructure")
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
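The stub pattern the change adds, sketched below; the stub bodies are illustrative rather than the exact header content:

	/* fault-inject.h usable without #ifdef at every call site */
	#ifdef CONFIG_FAULT_INJECTION
	bool should_fail(struct fault_attr *attr, ssize_t size);
	int setup_fault_attr(struct fault_attr *attr, char *str);
	#else
	#define DECLARE_FAULT_ATTR(name) struct fault_attr name = {}

	static inline bool should_fail(struct fault_attr *attr, ssize_t size)
	{
		return false;	/* never inject a fault */
	}

	static inline int setup_fault_attr(struct fault_attr *attr, char *str)
	{
		return 0;	/* accept and ignore the boot parameter */
	}
	#endif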
2024-08-31  Merge tag 'iommu-fixes-v6.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux  (Linus Torvalds)

Pull iommu fixes from Joerg Roedel:

 - Fix a device-stall problem in bad io-page-fault setups (faults received from devices with no supporting domain attached).

 - Context flush fix for Intel VT-d.

 - Do not allow non-read+non-write mapping through iommufd as most implementations can not handle that.

 - Fix a possible infinite-loop issue in map_pages() path.

 - Add Jean-Philippe as reviewer for SMMUv3 SVA support

* tag 'iommu-fixes-v6.11-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux:
  MAINTAINERS: Add Jean-Philippe as SMMUv3 SVA reviewer
  iommu: Do not return 0 from map_pages if it doesn't do anything
  iommufd: Do not allow creating areas without READ or WRITE
  iommu/vt-d: Fix incorrect domain ID in context flush helper
  iommu: Handle iommu faults for a bad iopf setup
2024-08-30  iommu/arm-smmu-v3-test: Test masters with stall enabled  (Mostafa Saleh)

At the moment, the SMMUv3 unit tests assume ATS is always enabled. Although this is sufficient to test hitless/non-hitless transitions, exercising other features is useful to check the ste/cd population logic (for example the .get_used logic).

Add an enum where bits define features per-master; at the moment there are only ATS and STALLs, which are mutually exclusive, but this makes it easier to extend with other features in the future.

Also, add 2 more tests for s1 <-> s2 transitions with stalls enabled.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Link: https://lore.kernel.org/r/20240830110349.797399-3-smostafa@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Match Stall behaviour for S2  (Mostafa Saleh)

According to the spec (ARM IHI 0070 F.b), in "5.5 Fault configuration (A, R, S bits)":

	A STE with stage 2 translation enabled and STE.S2S == 0 is
	considered ILLEGAL if SMMU_IDR0.STALL_MODEL == 0b10.

Also described in the pseudocode "SteIllegal()":

	if STE.Config == '11x' then
	    [..]
	    if eff_idr0_stall_model == '10' && STE.S2S == '0' then
	        // stall_model forcing stall, but S2S == 0
	        return TRUE;

This means S2S must be set when the stall model is "ARM_SMMU_FEAT_STALL_FORCE", but currently the driver ignores that.

Although the driver could do the minimum and only set S2S for "ARM_SMMU_FEAT_STALL_FORCE", it is more consistent to match the S1 behaviour, which also sets it for "ARM_SMMU_FEAT_STALL" if the master has requested stalls.

Also, since S2 stalls are enabled now, report them to the IOMMU layer; for VFIO devices this will fail anyway, as VFIO doesn't register an iopf handler.

Signed-off-by: Mostafa Saleh <smostafa@google.com>
Link: https://lore.kernel.org/r/20240830110349.797399-2-smostafa@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/tegra241-cmdqv: Limit CMDs for VCMDQs of a guest owned VINTF  (Nicolin Chen)

When VCMDQs are assigned to a VINTF owned by a guest (HYP_OWN bit unset), only TLB and ATC invalidation commands are supported by the VCMDQ HW. So, implement the new cmdq->supports_cmd op to scan the input cmd in order to make sure that it is supported by the selected queue.

Note that the guest VM shouldn't have the HYP_OWN bit set regardless of whether the guest kernel driver writes it or not, i.e. the hypervisor running in the host OS should wire this bit to zero when trapping a write access to this VINTF_CONFIG register from a guest kernel.

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/8160292337059b91271045800e5c62f7295e2c24.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Start a new batch if new command is not supported  (Nicolin Chen)

The VCMDQ in the tegra241-cmdqv driver has a guest mode that supports only a few invalidation commands. A batch is initialized with a cmdq, so it has to confirm whether a new command is supported or not.

Add a supports_cmd function pointer to the cmdq structure, where the vcmdq driver should hook a command scan function. Add an inline helper too so it can be used by both sides. If a new command is not supported, simply issue the existing batch and re-init it as a new batch.

Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/aafb24b881504f18c5d0c7c15f2134e40ad2c486.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
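A sketch of the hook-plus-helper shape described above (the exact signatures are assumptions):

	struct arm_smmu_cmdq {
		/* ... existing queue fields ... */
		/* Returns false if this queue cannot take the command. */
		bool (*supports_cmd)(struct arm_smmu_cmdq_ent *ent);
	};

	static inline bool arm_smmu_cmdq_supports_cmd(struct arm_smmu_cmdq *cmdq,
						      struct arm_smmu_cmdq_ent *ent)
	{
		/* A plain SMMU CMDQ has no hook and supports everything. */
		return cmdq->supports_cmd ? cmdq->supports_cmd(ent) : true;
	}

	/* In arm_smmu_cmdq_batch_add(): if the batch's queue rejects the
	 * new command, submit the accumulated batch and re-init it first. */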
2024-08-30  iommu/arm-smmu-v3: Add in-kernel support for NVIDIA Tegra241 (Grace) CMDQV  (Nate Watterson)

NVIDIA's Tegra241 SoC has CMDQ-Virtualization (CMDQV) hardware, extending the standard ARM SMMU v3 IP to support multiple VCMDQs with virtualization capabilities. In terms of command queue, they are very much like a standard SMMU CMDQ (or ECMDQs), but only support CS_NONE in the CS field of CMD_SYNC.

Add a new tegra241-cmdqv driver, insert its structure pointer into the existing arm_smmu_device, and then add related function calls in the SMMUv3 driver to interact with the CMDQV driver.

In the CMDQV driver, add a minimal part for the in-kernel support: reserve VINTF0 for in-kernel use, assign some of the VCMDQs to VINTF0, and select one VCMDQ based on the current CPU ID to execute supported commands. This multi-queue design for in-kernel use gives some limited improvement: up to 20% reduction of invalidation time was measured by a multi-threaded DMA unmap benchmark, compared to a single queue.

The other part of the CMDQV driver will be user-space support that lets a hypervisor running on the host OS talk to the driver for virtualization use cases, allowing VMs to use VCMDQs without trappings, i.e. no VM Exits. This is designed based on IOMMUFD, and its RFC series is also under review. It will provide a guest OS a bigger improvement: 70% to 90% reductions of TLB invalidation time were measured by DMA unmap tests running in a guest, compared to a nested SMMU CMDQ (with trappings).

As the initial version, the CMDQV driver only supports ACPI configurations.

Signed-off-by: Nate Watterson <nwatterson@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Co-developed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/dce50490b2c10b7254fb36aa73ed7ffd812b283a.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Add struct arm_smmu_impl_ops  (Jason Gunthorpe)

Mimicking the arm-smmu (v2) driver, introduce a struct arm_smmu_impl_ops to accommodate impl routines.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/8fe9f3805568aabf771fc6706c116459016bf62d.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Add acpi_smmu_iort_probe_model for impl  (Nicolin Chen)

For model-specific implementations, repurpose acpi_smmu_get_options() into a wider acpi_smmu_iort_probe_model(). A new model can be added to the list in this new function.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/79716299829aeab2e55b8c7932f2634b209bb4d5.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Add ARM_SMMU_OPT_TEGRA241_CMDQV  (Nicolin Chen)

The CMDQV extension in NVIDIA Tegra241 SoC only supports CS_NONE in the CS field of CMD_SYNC. Add a new SMMU option to accommodate that.

Suggested-by: Will Deacon <will@kernel.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/a3cb9bb2429fbae4a59f7ef517614d226763d717.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Make symbols public for CONFIG_TEGRA241_CMDQV  (Nicolin Chen)

The symbols __arm_smmu_cmdq_skip_err(), arm_smmu_init_one_queue(), and arm_smmu_cmdq_init() need to be used by the tegra241-cmdqv compilation unit in a following patch. Remove the static keyword and put the prototypes in the header.

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/c4f2aa5f5f40a2e7c68b132c6d3171d6403de57a.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Pass in cmdq pointer to arm_smmu_cmdq_init  (Nicolin Chen)

Pass in the cmdq pointer so that this function can be used by cmdqs other than just &smmu->cmdq.

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/e11a3c0bde172c9652c2946f12bc2ceed4c3a355.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Pass in cmdq pointer to arm_smmu_cmdq_build_sync_cmd  (Nicolin Chen)

The CMDQV extension on the NVIDIA Tegra241 SoC only supports CS_NONE in the CS field of CMD_SYNC, unlike the standard SMMU CMDQ. Pass in the cmdq pointer directly, so the function can identify a different cmdq implementation.

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/723288287997b6dfbcd2a904d2c11e9b23f82250.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-30  iommu/arm-smmu-v3: Issue a batch of commands to the same cmdq  (Nicolin Chen)

The driver calls the arm_smmu_get_cmdq() helper in different places, and it's fine to do so since the helper always returns the single SMMU CMDQ. However, with the NVIDIA CMDQV extension or SMMU ECMDQ, there can be multiple cmdqs in the system to select from, and either case requires a batch of commands to be issued to the same cmdq. Thus, a cmdq has to be decided in the higher-level callers.

Add a cmdq pointer in the arm_smmu_cmdq_batch structure, and decide the cmdq when initializing the batch. Pass its pointer down to the bottom function. Update __arm_smmu_cmdq_issue_cmd() accordingly for single command issuers.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/2cbf5ddefb6ea611e48d67c642271bd24421eb21.1724970714.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
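The shape of the batch change, sketched (field placement and the init signature are assumptions):

	struct arm_smmu_cmdq_batch {
		u64 cmds[CMDQ_BATCH_ENTRIES * CMDQ_ENT_DWORDS];
		struct arm_smmu_cmdq *cmdq;	/* decided once, at batch init */
		int num;
	};

	static void arm_smmu_cmdq_batch_init(struct arm_smmu_device *smmu,
					     struct arm_smmu_cmdq_batch *cmds)
	{
		cmds->num = 0;
		/* Same queue for every command in this batch. */
		cmds->cmdq = arm_smmu_get_cmdq(smmu);
	}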
2024-08-30  iommu/io-pgtable-arm: Optimise non-coherent unmap  (Ashish Mhetre)

The current __arm_lpae_unmap() function calls dma_sync() on individual PTEs after clearing them. Overall unmap performance can be improved by around 25% for large buffer sizes by combining the syncs for adjacent leaf entries. Optimize the unmap time by clearing all the leaf entries and issuing a single dma_sync() for them.

Below is a detailed analysis of average unmap latency (in us) with and without this optimization, obtained by running dma_map_benchmark for different buffer sizes.

	         UnMap Latency (us)
	Size     Without         With            % gain with
	         optimization    optimization    optimization
	4KB      3               3                0
	8KB      4               3.8              5
	16KB     6.1             5.4             11.48
	32KB     10.2            8.5             16.67
	64KB     18.5            14.9            19.46
	128KB    35              27.5            21.43
	256KB    67.5            52.2            22.67
	512KB    127.9           97.2            24.00
	1MB      248.6           187.4           24.62
	2MB      65.5            65.5             0
	4MB      119.2           119              0.17

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20240806105135.218089-1-amhetre@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
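The optimization reduces to the following pattern; the helper is a simplified stand-in for the real __arm_lpae_unmap() changes:

	/* Clear a run of adjacent leaf PTEs, then sync the whole run once
	 * for non-coherent walkers, instead of one dma_sync per entry. */
	static void clear_and_sync_leaves(struct io_pgtable_cfg *cfg,
					  arm_lpae_iopte *ptep, int num_entries)
	{
		int i;

		for (i = 0; i < num_entries; i++)
			ptep[i] = 0;

		if (!cfg->coherent_walk)
			dma_sync_single_for_device(cfg->iommu_dev,
						   __arm_lpae_dma_addr(ptep),
						   sizeof(*ptep) * num_entries,
						   DMA_TO_DEVICE);
	}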
2024-08-30  iommu: Allow ATS to work on VFs when the PF uses IDENTITY  (Jason Gunthorpe)

PCI ATS has a global Smallest Translation Unit field that is located in the PF but shared by all of the VFs. The expectation is that the STU will be set to the root port's global STU capability which is driven by the IO page table configuration of the iommu HW. Today it becomes set when the iommu driver first enables ATS.

Thus, to enable ATS on the VF, the PF must have already had the correct STU programmed, even if ATS is off on the PF. Unfortunately the PF only programs the STU when the PF enables ATS.

The iommu drivers tend to leave ATS disabled when IDENTITY translation is being used. Thus we can get into a state where the PF is setup to use IDENTITY with the DMA API while the VF would like to use VFIO with a PAGING domain and have ATS turned on. This fails because the PF never loaded a PAGING domain and so it never setup the STU, and the VF can't do it.

The simplest solution is to have the iommu driver set the ATS STU when it probes the device. This way the ATS STU is loaded immediately at boot time to all PFs and there is no issue when a VF comes to use it.

Add a new call pci_prepare_ats() which should be called by iommu drivers in their probe_device() op for every PCI device if the iommu driver supports ATS. This will setup the STU based on whatever page size capability the iommu HW has.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/0-v1-0fb4d2ab6770+7e706-ats_vf_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
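Usage shape for a driver, sketched (the wrapper and the STU derivation are illustrative; pci_prepare_ats() takes the page shift the iommu HW can handle):

	/* Called from an iommu driver's probe_device() op. */
	static void example_prepare_ats(struct device *dev, unsigned int ps)
	{
		if (!dev_is_pci(dev))
			return;

		/* Programs the shared STU on a PF before ATS is ever
		 * enabled; fails harmlessly if the device lacks ATS. */
		pci_prepare_ats(to_pci_dev(dev), ps);
	}

	/* e.g. example_prepare_ats(dev, PAGE_SHIFT); */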
2024-08-27  Merge branch 'nesting_reserved_regions' into iommufd.git for-next  (Jason Gunthorpe)

Nicolin Chen says:
=========
IOMMU_RESV_SW_MSI is a unique region defined by an IOMMU driver. Though it is eventually used by a device for address translation to an MSI location (including nested cases), practically it is a universal region across all domains allocated for the IOMMU that defines it.

Currently IOMMUFD core fetches and reserves the region during an attach to an hwpt_paging. It works with a hwpt_paging-only case, but might not work with a nested case where a device could directly attach to a hwpt_nested, bypassing the hwpt_paging attachment.

Move the enforcement forward, to the hwpt_paging allocation function. Then clean up all the SW_MSI related things in the attach/replace routine.
=========

Based on v6.11-rc5 for dependencies.

* nesting_reserved_regions: (562 commits)
  iommufd/device: Enforce reserved IOVA also when attached to hwpt_nested
  Linux 6.11-rc5
  ...
2024-08-27  iommufd/device: Enforce reserved IOVA also when attached to hwpt_nested  (Nicolin Chen)

Currently, device reserved regions are only enforced when the device is attached to an hwpt_paging. In other words, if the device gets attached to an hwpt_nested directly, the parent hwpt_paging of the hwpt_nested would not enforce those reserved IOVAs. This works for most reserved region types, but not for IOMMU_RESV_SW_MSI, which is a unique software-defined window, required by a nesting case too to set up an MSI doorbell on the parent stage-2 hwpt/domain.

Kevin pointed out in [1] that:
1) there is no usage using up closely the entire IOVA space yet,
2) the guest may change the viommu mode to switch between nested and paging, so the VMM has to take all devices' reserved regions into consideration anyway when composing the GPA space.

So it would actually be convenient for us to also enforce reserved IOVAs onto the parent hwpt_paging when attaching a device to an hwpt_nested.

Repurpose the existing attach/replace_paging helpers to attach the device's reserved IOVAs exclusively. Add a new find_hwpt_paging helper, which is only used by these reserved IOVA functions, to allow an IOMMUFD_OBJ_HWPT_NESTED hwpt to redirect to its parent hwpt_paging. Return NULL in these two helpers for any new HWPT type in the future.

Link: https://patch.msgid.link/r/20240807003446.3740368-1-nicolinc@nvidia.com
Link: https://lore.kernel.org/all/BN9PR11MB5276497781C96415272E6FED8CB12@BN9PR11MB5276.namprd11.prod.outlook.com/ [1]
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-08-27  iommufd/selftest: Fix buffer read overrrun in the dirty test  (Jason Gunthorpe)

test_bit() is used to read the memory storing the bitmap; however, test_bit() always uses an unsigned long 8-byte access. If the bitmap is not an aligned size of 64 bits this will now trigger a KASAN warning reading past the end of the buffer.

Properly round the buffer allocation to an unsigned long size. Continue to copy_from_user() using a byte granularity.

Fixes: 9560393b830b ("iommufd/selftest: Fix iommufd_test_dirty() to handle <u8 bitmaps")
Link: https://patch.msgid.link/r/0-v1-113e8d9e7861+5ae-iommufd_kasan_jgg@nvidia.com
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
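The allocation-rounding pattern, as a sketch (names are placeholders for the selftest's actual variables):

	static int example_copy_dirty_bitmap(void __user *uptr, size_t nr_bits,
					     unsigned long **out_bitmap)
	{
		size_t bytes = DIV_ROUND_UP(nr_bits, BITS_PER_BYTE);
		unsigned long *bitmap;

		/* Pad the allocation so test_bit()'s unsigned-long-sized
		 * reads never run past the end of the buffer. */
		bitmap = kvzalloc(ALIGN(bytes, sizeof(unsigned long)), GFP_KERNEL);
		if (!bitmap)
			return -ENOMEM;

		/* The user copy itself stays byte-granular. */
		if (copy_from_user(bitmap, uptr, bytes)) {
			kvfree(bitmap);
			return -EFAULT;
		}

		*out_bitmap = bitmap;
		return 0;
	}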
2024-08-27  iommu/arm-smmu-qcom: Work around SDM845 Adreno SMMU w/ 16K pages  (Konrad Dybcio)

SDM845's Adreno SMMU is unique in that it actually advertises support for 16K (and 32M) pages, which doesn't hold for newer SoCs. This, however, seems either broken in the hardware implementation, in the hypervisor middleware that abstracts the SMMU, or there's a bug in the Linux kernel somewhere down the line that nobody managed to track down.

Booting SDM845 with 16K page sizes and drm/msm results in:

	*** gpu fault: ttbr0=0000000000000000 iova=000100000000c000 dir=READ type=TRANSLATION source=CP (0,0,0,0)

right after loading the firmware. The GPU then starts spitting out illegal instruction errors, as it's quite obvious that it got a bogus pointer.

Moreover, it seems like this issue also concerns other implementations of SMMUv2 on Qualcomm SoCs, such as the one on SC7180.

Hide 16K support on such instances to work around this.

Reported-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Link: https://lore.kernel.org/r/20240824-topic-845_gpu_smmu-v2-1-a302b8acc052@quicinc.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-26  iommufd: Reorder include files  (Nicolin Chen)

Reorder include files to alphabetic order to simplify maintenance, and separate local headers and global headers with a blank line. No functional change intended.

Link: https://patch.msgid.link/r/7524b037cc05afe19db3c18f863253e1d1554fa2.1722644866.git.nicolinc@nvidia.com
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-08-26  iommu: Do not return 0 from map_pages if it doesn't do anything  (Jason Gunthorpe)

These three implementations of map_pages() all succeed if a mapping is requested with no read or write. Since they return back to __iommu_map() leaving the mapped output as 0, it triggers an infinite loop. Therefore nothing is using no-access protection bits.

Further, VFIO and iommufd rely on iommu_iova_to_phys() to get back PFNs stored by map; if iommu_map() succeeds but iommu_iova_to_phys() fails, that will create serious bugs.

Thus, remove this never-used "nothing to do" concept and just fail the map immediately.

Fixes: e5fc9753b1a8 ("iommu/io-pgtable: Add ARMv7 short descriptor support")
Fixes: e1d3c0fd701d ("iommu: add ARM LPAE page table allocator")
Fixes: 745ef1092bcf ("iommu/io-pgtable: Move Apple DART support to its own file")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/2-v1-1211e1294c27+4b1-iommu_no_prot_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
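The common shape of the fix in each affected format, sketched:

	/* Fail a no-access request up front instead of succeeding while
	 * leaving *mapped at 0, which would loop __iommu_map() forever. */
	static int example_map_pages(struct io_pgtable_ops *ops,
				     unsigned long iova, phys_addr_t paddr,
				     size_t pgsize, size_t pgcount,
				     int iommu_prot, gfp_t gfp, size_t *mapped)
	{
		if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
			return -EINVAL;

		/* ... the normal table walk and PTE installation ... */
		return 0;
	}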
2024-08-26  iommufd: Do not allow creating areas without READ or WRITE  (Jason Gunthorpe)

This results in passing 0 or just IOMMU_CACHE to iommu_map(). Most of the page table formats don't like this:

	amdv1    - -EINVAL
	armv7s   - returns 0, doesn't update mapped
	arm-lpae - returns 0, doesn't update mapped
	dart     - returns 0, doesn't update mapped
	VT-D     - returns -EINVAL

Unfortunately the three formats that return 0 cause serious problems:

 - Returning ret = 0 but not updating mapped from domain->map_pages() causes an infinite loop in __iommu_map()

 - Not writing ioptes means that VFIO/iommufd have no way to recover them and we will have memory leaks and worse during unmap

Since almost nothing can support this, and it is a useless thing to do, block it early in iommufd.

Cc: stable@kernel.org
Fixes: aad37e71d5c4 ("iommufd: IOCTLs for the io_pagetable")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/1-v1-1211e1294c27+4b1-iommu_no_prot_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-08-26  iommu/vt-d: Fix incorrect domain ID in context flush helper  (Lu Baolu)

The helper intel_context_flush_present() is designed to flush all related caches when a context entry with the present bit set is modified. It currently retrieves the domain ID from the context entry and uses it to flush the IOTLB and context caches. This is incorrect when the context entry transitions from present to non-present, as the domain ID field is cleared before calling the helper.

Fix it by passing the domain ID programmed in the context entry before the change to intel_context_flush_present(). This ensures that the correct domain ID is used for cache invalidation.

Fixes: f90584f4beb8 ("iommu/vt-d: Add helper to flush caches for context change")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Closes: https://lore.kernel.org/linux-iommu/20240814162726.5efe1a6e.alex.williamson@redhat.com/
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jacob Pan <jacob.pan@linux.microsoft.com>
Link: https://lore.kernel.org/r/20240815124857.70038-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-08-23  iommu/arm-smmu-qcom: hide last LPASS SMMU context bank from linux  (Marc Gonzalez)

On qcom msm8998, writing to the last context bank of lpass_q6_smmu (base address 0x05100000) produces a system freeze & reboot.

The hardware/hypervisor reports 13 context banks for the LPASS SMMU on msm8998, but only the first 12 are accessible... Override the number of context banks.

	[    2.546101] arm-smmu 5100000.iommu: probing hardware configuration...
	[    2.552439] arm-smmu 5100000.iommu: SMMUv2 with:
	[    2.558945] arm-smmu 5100000.iommu:         stage 1 translation
	[    2.563627] arm-smmu 5100000.iommu:         address translation ops
	[    2.568923] arm-smmu 5100000.iommu:         non-coherent table walk
	[    2.574566] arm-smmu 5100000.iommu:         (IDR0.CTTW overridden by FW configuration)
	[    2.580220] arm-smmu 5100000.iommu:         stream matching with 12 register groups
	[    2.587263] arm-smmu 5100000.iommu:         13 context banks (0 stage-2 only)
	[    2.614447] arm-smmu 5100000.iommu:         Supported page sizes: 0x63315000
	[    2.621358] arm-smmu 5100000.iommu:         Stage-1: 36-bit VA -> 36-bit IPA
	[    2.627772] arm-smmu 5100000.iommu:         preserved 0 boot mappings

Specifically, the crashes occur here:

	qsmmu->bypass_cbndx = smmu->num_context_banks - 1;
	arm_smmu_cb_write(smmu, qsmmu->bypass_cbndx, ARM_SMMU_CB_SCTLR, 0);

and here:

	arm_smmu_write_context_bank(smmu, i);
	arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_FSR, ARM_SMMU_CB_FSR_FAULT);

It is likely that FW reserves the last context bank for its own use, thus a simple work-around is: DON'T USE IT in Linux. If we decrease the number of context banks, the last one will be "hidden".

Signed-off-by: Marc Gonzalez <mgonzalez@freebox.fr>
Reviewed-by: Caleb Connolly <caleb.connolly@linaro.org>
Reviewed-by: Bjorn Andersson <andersson@kernel.org>
Link: https://lore.kernel.org/r/20240820-smmu-v3-1-2f71483b00ec@freebox.fr
Signed-off-by: Will Deacon <will@kernel.org>
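The work-around boils down to clamping the reported bank count in a model-specific probe hook; a sketch (the hook name and matching are simplified stand-ins):

	static int qcom_msm8998_lpass_cfg_probe(struct arm_smmu_device *smmu)
	{
		/* FW reserves the 13th context bank; hide it from Linux so
		 * the driver never writes to its registers. */
		if (smmu->num_context_banks == 13)
			smmu->num_context_banks = 12;

		return 0;
	}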
2024-08-23  iommu/amd: Update PASID, GATS, GLX, SNPAVICSUP feature related macros  (Suravee Suthikulpanit)

Clean up and reorder them according to the bit index. There is no functional change.

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240816221650.62295-1-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-08-23  iommu: Handle iommu faults for a bad iopf setup  (Pranjal Shrivastava)

The iommu_report_device_fault function was updated to return void while assuming that drivers only need to call iommu_report_device_fault() for reporting an iopf. This implementation causes the following problems:

1. The drivers rely on the core code to call its page_response; however, when a fault is received and no fault-capable domain is attached / iopf_param is NULL, the ops->page_response is NOT called, causing the device to stall in case the fault type was PAGE_REQ.

2. The arm_smmu_v3 driver relies on the returned value to log errors; returning void from iommu_report_device_fault causes these events to be missed while logging.

Modify the iommu_report_device_fault function to return -EINVAL for cases where no fault-capable domain is attached or iopf_param was NULL, and call back to the driver (ops->page_response) in case the fault type was IOMMU_FAULT_PAGE_REQ. The returned value can be used by the drivers to log the fault/event as needed.

Reported-by: Kunkun Jiang <jiangkunkun@huawei.com>
Closes: https://lore.kernel.org/all/6147caf0-b9a0-30ca-795e-a1aa502a5c51@huawei.com/
Fixes: 3dfa64aecbaf ("iommu: Make iommu_report_device_fault() return void")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Pranjal Shrivastava <praan@google.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20240816104906.1010626-1-praan@google.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
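Roughly, the reworked entry point takes the following shape (error-path details simplified from the description above; the response helper name is a stand-in, not the actual function):

	int iommu_report_device_fault(struct device *dev, struct iopf_fault *evt)
	{
		struct iommu_fault_param *fault_param = iopf_get_dev_fault_param(dev);

		if (!fault_param) {
			/* Bad iopf setup: complete page requests directly so
			 * the device doesn't stall, and let the caller log it. */
			if (evt->fault.type == IOMMU_FAULT_PAGE_REQ)
				example_respond_invalid(dev, evt);	/* stand-in */
			return -EINVAL;
		}

		/* ... collect the fault into a group and dispatch it ... */
		return 0;
	}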
2024-08-22  dma-mapping: direct calls for dma-iommu  (Leon Romanovsky)

Directly call into dma-iommu just like we have been doing for dma-direct for a while. This avoids the indirect call overhead for IOMMU ops and removes the need to have DMA ops entirely for many common configurations.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2024-08-20  Merge tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd  (Linus Torvalds)

Pull iommufd fixes from Jason Gunthorpe:

 - Incorrect error unwind in iommufd_device_do_replace()

 - Correct a sparse warning missing static

* tag 'for-linus-iommufd' of git://git.kernel.org/pub/scm/linux/kernel/git/jgg/iommufd:
  iommufd/selftest: Make dirty_ops static
  iommufd/device: Fix hwpt at err_unresv in iommufd_device_do_replace()
2024-08-19  iommufd/selftest: Make dirty_ops static  (Jinjie Ruan)

The sparse tool complains as follows:

	drivers/iommu/iommufd/selftest.c:277:30: warning:
		symbol 'dirty_ops' was not declared. Should it be static?

This symbol is not used outside of selftest.c, so mark it static.

Fixes: 266ce58989ba ("iommufd/selftest: Test IOMMU_HWPT_ALLOC_DIRTY_TRACKING")
Link: https://patch.msgid.link/r/20240819120007.3884868-1-ruanjinjie@huawei.com
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-08-16  iommu/arm-smmu-v3: Fix a NULL vs IS_ERR() check  (Dan Carpenter)

The arm_smmu_domain_alloc() function returns error pointers on error. It doesn't return NULL. Update the error checking to match.

Fixes: 52acd7d8a413 ("iommu/arm-smmu-v3: Add support for domain_alloc_user fn")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/9208cd0d-8105-40df-93e9-bdcdf0d55eec@stanley.mountain
Signed-off-by: Will Deacon <will@kernel.org>
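The general rule the fix applies, sketched: an allocator that returns error pointers must be tested with IS_ERR(), never against NULL.

	smmu_domain = arm_smmu_domain_alloc();
	if (IS_ERR(smmu_domain))
		return ERR_CAST(smmu_domain);	/* was: if (!smmu_domain) ... */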
2024-08-16  iommu/arm-smmu-v3: Remove the unused empty definition  (Zhang Zekun)

arm_smmu_sva_remove_dev_pasid() has been removed since commit d38c28dbefee ("iommu/arm-smmu-v3: Put the SVA mmu notifier in the smmu_domain"), but the empty definition, which is used when CONFIG_ARM_SMMU_V3_SVA is not set, remained untouched in the header file. So, let's remove the unused definition.

Signed-off-by: Zhang Zekun <zhangzekun11@huawei.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240815111504.48810-1-zhangzekun11@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-16  iommu/arm-smmu: Un-demote unhandled-fault msg  (Rob Clark)

Previously this was dev_err_ratelimited() but it got changed to a ratelimited dev_dbg(). Change it back to dev_err().

Fixes: d525b0af0c3b ("iommu/arm-smmu: Pretty-print context fault related regs")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Pranjal Shrivastava <praan@google.com>
Link: https://lore.kernel.org/r/20240809172716.10275-1-robdclark@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2024-08-13  iommu/amd: Add blocked domain support  (Vasant Hegde)

Create a global blocked domain with attach device ops. It will clear the DTE so that all DMA from the device will be aborted.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240722115452.5976-1-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-08-07  iommu/vt-d: Cleanup apic_printk()  (Thomas Gleixner)

Use the new apic_pr_verbose() helper.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Tested-by: Breno Leitao <leitao@debian.org>
Link: https://lore.kernel.org/all/20240802155440.843266805@linutronix.de
2024-08-02  iommu: Restore lost return in iommu_report_device_fault()  (Barak Biber)

When iommu_report_device_fault gets called with a partial fault it is supposed to collect the fault into the group and then return. Instead the return was accidentally deleted, which results in trying to process the fault and an eventual crash.

Deleting the return was a typo; put it back.

Fixes: 3dfa64aecbaf ("iommu: Make iommu_report_device_fault() return void")
Signed-off-by: Barak Biber <bbiber@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/0-v1-e7153d9c8cee+1c6-iommu_fault_fix_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2024-07-29  iommufd/device: Fix hwpt at err_unresv in iommufd_device_do_replace()  (Nicolin Chen)

The rewind routine should remove the reserved iovas added to the new hwpt.

Fixes: 89db31635c87 ("iommufd: Derive iommufd_hwpt_paging from iommufd_hw_pagetable")
Cc: stable@vger.kernel.org
Link: https://patch.msgid.link/r/20240718050130.1956804-1-nicolinc@nvidia.com
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-07-27  Merge tag 'iommu-fixes-v6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux  (Linus Torvalds)

Pull iommu fixes from Will Deacon:
"We're still resolving a regression with the handling of unexpected page faults on SMMUv3, but we're not quite there with a fix yet.

 - Fix NULL dereference when freeing domain in Unisoc SPRD driver

 - Separate assignment statements with semicolons in AMD page-table code

 - Fix Tegra erratum workaround when the CPU is using 16KiB pages"

* tag 'iommu-fixes-v6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux:
  iommu: arm-smmu: Fix Tegra workaround for PAGE_SIZE mappings
  iommu/amd: Convert comma to semicolon
  iommu: sprd: Avoid NULL deref in sprd_iommu_hw_en