path: root/drivers/iommu
Age    Commit message    Author
2020-01-24iommu/vt-d: Call __dmar_remove_one_dev_info with valid pointerJerry Snitselaar
It is possible for archdata.iommu to be set to DEFER_DEVICE_DOMAIN_INFO or DUMMY_DEVICE_DOMAIN_INFO, so check for those values before calling __dmar_remove_one_dev_info. Without such a check, the call can result in a null pointer dereference. This has been seen while booting a kdump kernel on an HP dl380 gen9. Cc: Joerg Roedel <joro@8bytes.org> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: stable@vger.kernel.org # 5.3+ Cc: linux-kernel@vger.kernel.org Fixes: ae23bfb68f28 ("iommu/vt-d: Detach domain before using a private one") Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com> Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
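As a minimal illustrative sketch (the identifiers mirror the intel-iommu driver, but the surrounding context is simplified), the guard described above amounts to:

    struct device_domain_info *info = dev->archdata.iommu;

    /* Skip teardown when only a placeholder value was ever attached. */
    if (info && info != DEFER_DEVICE_DOMAIN_INFO &&
        info != DUMMY_DEVICE_DOMAIN_INFO)
            __dmar_remove_one_dev_info(info);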
2020-01-17iommu/amd: Remove unused struct memberAdrian Huang
Commit c805b428f206 ("iommu/amd: Remove amd_iommu_pd_list") removes the global list for the allocated protection domains. The corresponding member 'list' of the protection_domain struct is not used anymore, so it can be removed. Signed-off-by: Adrian Huang <ahuang12@lenovo.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-17iommu/amd: Replace two consecutive readl calls with one readqAdrian Huang
Optimize the register reading by using readq instead of the two consecutive readl calls. Signed-off-by: Adrian Huang <ahuang12@lenovo.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
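A hedged before/after sketch of the change (the register offset name is a placeholder, not the actual AMD IOMMU MMIO layout):

    /* Before: two 32-bit MMIO reads stitched into a 64-bit value. */
    u32 lo = readl(iommu->mmio_base + REG_OFFSET);
    u32 hi = readl(iommu->mmio_base + REG_OFFSET + 4);
    u64 val = ((u64)hi << 32) | lo;

    /* After: a single 64-bit read. */
    val = readq(iommu->mmio_base + REG_OFFSET);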
2020-01-17iommu/vt-d: Don't reject Host Bridge due to scope mismatchjimyan
On a system with two host bridges (0000:00:00.0, 0000:80:00.0), IOMMU initialization fails with

  DMAR: Device scope type does not match for 0000:80:00.0

This is because the DMAR table reports this device as having scope 2 (ACPI_DMAR_SCOPE_TYPE_BRIDGE), but the device has a type 0 PCI header:

  80:00.0 Class 0600: Device 8086:2020 (rev 06)
  00: 86 80 20 20 47 05 10 00 06 00 00 06 10 00 00 00
  10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 00 00
  30: 00 00 00 00 90 00 00 00 00 00 00 00 00 01 00 00

VT-d works perfectly on this system, so there's no reason to bail out of initialization due to this apparent scope mismatch. Add the base class 0x06 ("PCI_BASE_CLASS_BRIDGE") as a heuristic allowing DMAR initialization for non-bridge PCI devices listed with a bridge scope. Signed-off-by: jimyan <jimyan@baidu.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Roland Dreier <roland@purestorage.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
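Conceptually, the relaxed check is along these lines (a simplified sketch, not the exact dmar.c hunk):

    /* A device listed with bridge scope but carrying a type 0 header is
     * still acceptable if its base class identifies it as a bridge. */
    if (scope->entry_type == ACPI_DMAR_SCOPE_TYPE_BRIDGE &&
        pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
        (pdev->class >> 16) != PCI_BASE_CLASS_BRIDGE)
            pr_warn("Device scope type does not match for %s\n",
                    pci_name(pdev));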
2020-01-15iommu/arm-smmu-v3: Return -EBUSY when trying to re-add a deviceWill Deacon
Although we WARN in arm_smmu_add_device() if the device being added has been added already without a subsequent call to arm_smmu_remove_device(), we still continue half-heartedly, initialising the stream-table for any new StreamIDs that may have magically appeared and re-establishing device links that should still be there from last time. Given that calling ->add_device() twice without removing the device in the meantime is indicative of an error in the caller, just return -EBUSY after warning. Cc: Robin Murphy <robin.murphy@arm.com> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
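Roughly (a sketch based on the fwspec-backed private data the driver used at the time):

    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

    /* ->add_device() called twice without an intervening remove: bail out. */
    if (WARN_ON_ONCE(fwspec->iommu_priv))
            return -EBUSY;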
2020-01-15iommu/arm-smmu-v3: Improve add_device() error handlingJean-Philippe Brucker
Let add_device() clean up after itself. The iommu_bus_init() function does call remove_device() on error, but other sites (e.g. of_iommu) do not. Don't free level-2 stream tables because we'd have to track if we allocated each of them or if they are used by other endpoints. It's not worth the hassle since they are managed resources. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Use WRITE_ONCE() when changing validity of an STEWill Deacon
If, for some bizarre reason, the compiler decided to split up the write of STE DWORD 0, we could end up making a partial structure valid. Although this probably won't happen, follow the example of the context-descriptor code and use WRITE_ONCE() to ensure atomicity of the write. Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
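An illustrative sketch of publishing DWORD 0 with a single store (dst points at the little-endian 64-bit words of the STE; val is the assembled first word):

    /* Ensure the compiler emits one store for the word containing the V bit. */
    WRITE_ONCE(dst[0], cpu_to_le64(val));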
2020-01-15iommu/arm-smmu-v3: Add second level of context descriptor tableJean-Philippe Brucker
The SMMU can support up to 20 bits of SSID. Add a second level of page tables to accommodate this. Devices that support more than 1024 SSIDs now have a table of 1024 L1 entries (8kB), pointing to tables of 1024 context descriptors (64kB), allocated on demand. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Prepare for handling arm_smmu_write_ctx_desc() failureJean-Philippe Brucker
Second-level context descriptor tables will be allocated lazily in arm_smmu_write_ctx_desc(). Help with handling allocation failure by moving the CD write into arm_smmu_domain_finalise_s1(). Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> [will: Add comment per discussion on list] Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Propagate ssid_bitsJean-Philippe Brucker
Now that we support substream IDs, initialize s1cdmax with the number of SSID bits supported by a master and the SMMU. Context descriptor tables are allocated once for the first master attached to a domain. Therefore attaching multiple devices with different SSID sizes is tricky, and we currently don't support it. As a future improvement it would be nice to at least support attaching an SSID-capable device to a domain that isn't using SSID, by reallocating the SSID table. This would allow supporting an SSID-capable device that is in the same IOMMU group as a bridge, for example. Varying SSID size is less of a concern, since the PCIe specification "highly recommends" that devices supporting PASID implement all 20 bits of it. Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Add support for Substream IDsJean-Philippe Brucker
At the moment, the SMMUv3 driver implements only one stage-1 or stage-2 page directory per device. However, SMMUv3 allows more than one address space for some devices, by providing multiple stage-1 page directories. In addition to the Stream ID (SID), which identifies a device, we can now have Substream IDs (SSID) identifying an address space. In PCIe, SID is called Requester ID (RID) and SSID is called Process Address-Space ID (PASID). A complete stage-1 walk goes through the context descriptor table:

       Stream tables        Ctx. Desc. tables      Page tables
        +--------+   ,------->+-------+   ,------->+-------+
        :        :   |        :       :   |        :       :
        +--------+   |        +-------+   |        +-------+
  SID-> |  STE   |---'  SSID->|  CD   |---'  IOVA->|  PTE  |--> IPA
        +--------+            +-------+            +-------+
        :        :            :       :            :       :
        +--------+            +-------+            +-------+

Rewrite arm_smmu_write_ctx_desc() to modify context descriptor table entries. To keep things simple we only implement one level of context descriptor tables here, but as with stream and page tables, an SSID can be split to index multiple levels of tables. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Add context descriptor tables allocatorsJean-Philippe Brucker
Support for SSID will require allocating context descriptor tables. Move the context descriptor allocation to separate functions. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Prepare arm_smmu_s1_cfg for SSID supportJean-Philippe Brucker
When adding SSID support to the SMMUv3 driver, we'll need to manipulate leaf pasid tables and context descriptors. Extract the context descriptor structure and align with the way stream tables are handled. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Parse PASID devicetree property of platform devicesJean-Philippe Brucker
For platform devices that support SubstreamID (SSID), firmware provides the number of supported SSID bits. Restrict it to what the SMMU supports and cache it into master->ssid_bits, which will also be used for PCI PASID. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-15iommu/arm-smmu-v3: Drop __GFP_ZERO flag from DMA allocationJean-Philippe Brucker
Since commit 518a2f1925c3 ("dma-mapping: zero memory returned from dma_alloc_*"), dma_alloc_* always initializes memory to zero, so there is no need to use dma_zalloc_* or pass the __GFP_ZERO flag anymore. The flag was introduced by commit 04fa26c71be5 ("iommu/arm-smmu: Convert DMA buffer allocations to the managed API"), since the managed API didn't provide a dmam_zalloc_coherent() function. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
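The change boils down to the following (a sketch; the buffer being allocated is illustrative):

    /* Before */
    strtab = dmam_alloc_coherent(smmu->dev, size, &dma, GFP_KERNEL | __GFP_ZERO);

    /* After: dma_alloc_* already returns zeroed memory */
    strtab = dmam_alloc_coherent(smmu->dev, size, &dma, GFP_KERNEL);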
2020-01-10iommu/arm-smmu: Improve SMR mask testRobin Murphy
Make the SMR mask test more robust against SMR0 being live at probe time, which might happen once we start supporting firmware reservations for framebuffers and suchlike. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/io-pgtable-arm: Prepare for TTBR1 usageRobin Murphy
Now that we can correctly extract top-level indices without relying on the remaining upper bits being zero, the only remaining impediments to using a given table for TTBR1 are the address validation on map/unmap and the awkward TCR translation granule format. Add a quirk so that we can do the right thing at those points. Tested-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/io-pgtable-arm: Rationalise VTCR handlingWill Deacon
Commit 05a648cd2dd7 ("iommu/io-pgtable-arm: Rationalise TCR handling") reworked the way in which the TCR register value is returned from the io-pgtable code when targeting the Arm long-descriptor format, in preparation for allowing page-tables to target TTBR1. As it turns out, the new interface is a lot nicer to use, so do the same conversion for the VTCR register even though there is only a single base register for stage-2 translation. Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/arm-smmu: Rename public #defines under ARM_SMMU_ namespaceWill Deacon
Now that we have arm-smmu.h defining various SMMU constants, ensure that they are namespaced with the ARM_SMMU_ prefix in order to avoid conflicts with the CPU, such as the one we're currently bodging around with the TCR. Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/io-pgtable-arm: Rationalise TCR handlingRobin Murphy
Although it's conceptually nice for the io_pgtable_cfg to provide a standard VMSA TCR value, the reality is that no VMSA-compliant IOMMU looks exactly like an Arm CPU, and they all have various other TCR controls which io-pgtable can't be expected to understand. Thus since there is an expectation that drivers will have to add to the given TCR value anyway, let's strip it down to just the essentials that are directly relevant to io-pgtable's inner workings - namely the various sizes and the walk attributes. Tested-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: Add missing include of bitfield.h] Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/io-pgtable-arm: Ensure ARM_64_LPAE_S2_TCR_RES1 is unsignedWill Deacon
ARM_64_LPAE_S2_TCR_RES1 is intended to map to bit 31 of the VTCR register, which is required to be set to 1 by the architecture. Unfortunately, we accidentally treat this as a signed quantity which means we also set the upper 32 bits of the VTCR to one, and they are required to be zero. Treat ARM_64_LPAE_S2_TCR_RES1 as unsigned to avoid the unwanted sign-extension up to 64 bits. Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
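In effect, the constant definition changes like this (sketch):

    /* Before: the literal is a signed int, so assigning it to a 64-bit
     * field sign-extends bit 31 into VTCR[63:32]. */
    #define ARM_64_LPAE_S2_TCR_RES1    (1 << 31)

    /* After: unsigned, so the upper 32 bits stay zero. */
    #define ARM_64_LPAE_S2_TCR_RES1    (1U << 31)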
2020-01-10iommu/io-pgtable-arm: Improve attribute handlingRobin Murphy
By VMSA rules, using Normal Non-Cacheable type with a shareability attribute of anything other than Outer Shareable is liable to lead into unpredictable territory:

 | Overlaying the shareability attribute (B3-1377, ARM DDI 0406C.c)
 |
 | A memory region with a resultant memory type attribute of Normal, and
 | a resultant cacheability attribute of Inner Non-cacheable, Outer
 | Non-cacheable, must have a resultant shareability attribute of Outer
 | Shareable, otherwise shareability is UNPREDICTABLE

Although the SMMU architectures seem to give some slightly stronger guarantees of Non-Cacheable output types becoming implicitly Outer Shareable in most cases, we may as well be explicit and not take any chances. It's also weird that LPAE attribute handling is currently split between prot_to_pte() and init_pte() given that it can all be statically determined up-front. Thus, collect *all* the LPAE attributes into prot_to_pte() in order to logically pick the shareability based on the incoming IOMMU API prot value, and tweak the short-descriptor code to stop setting TTBR0.NOS for Non-Cacheable walks. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/io-pgtable-arm: Support non-coherent stage-2 page tablesWill Deacon
Commit 9e6ea59f3ff3 ("iommu/io-pgtable: Support non-coherent page tables") added support for non-coherent page-table walks to the Arm IOMMU page-table backends. Unfortunately, it left the stage-2 allocator unchanged, so let's hook that up in the same way. Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/io-pgtable-arm: Rationalise TTBRn handlingRobin Murphy
TTBR1 values have so far been redundant since no users implement any support for split address spaces. Crucially, though, one of the main reasons for wanting to do so is to be able to manage each half entirely independently, e.g. context-switching one set of mappings without disturbing the other. Thus it seems unlikely that tying two tables together in a single io_pgtable_cfg would ever be particularly desirable or useful. Streamline the configs to just a single conceptual TTBR value representing the allocated table. This paves the way for future users to support split address spaces by simply allocating a table and dealing with the detailed TTBRn logistics themselves. Tested-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: Drop change to ttbr value] Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/arm-smmu: Fix -Wunused-const-variable warningMasahiro Yamada
For ARCH=arm builds, OF is not necessarily enabled, that is, you can build this driver without CONFIG_OF. When CONFIG_OF is unset, of_match_ptr() is NULL, and arm_smmu_of_match is left orphan. Building it with W=1 emits a warning:

  drivers/iommu/arm-smmu.c:1904:34: warning: 'arm_smmu_of_match' defined but not used [-Wunused-const-variable=]
   static const struct of_device_id arm_smmu_of_match[] = {
                                     ^~~~~~~~~~~~~~~~~

There are two ways to fix this:
 - annotate arm_smmu_of_match with __maybe_unused (or surround the code with #ifdef CONFIG_OF ... #endif)
 - stop using of_match_ptr()
This commit took the latter solution. It slightly increases the object size, but it is probably not a big deal because arm_smmu_device_dt_probe() is also compiled irrespective of CONFIG_OF. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will@kernel.org>
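The chosen fix amounts to referencing the match table unconditionally (sketch of the driver structure field):

    /* Before: evaluates to NULL when !CONFIG_OF, leaving the table "unused". */
    .of_match_table = of_match_ptr(arm_smmu_of_match),

    /* After */
    .of_match_table = arm_smmu_of_match,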
2020-01-10iommu/arm-smmu-v3: Remove useless of_match_ptr()Masahiro Yamada
The CONFIG option controlling this driver, ARM_SMMU_V3, depends on ARM64, which selects OF. So, CONFIG_OF is always defined when building this driver. of_match_ptr(arm_smmu_of_match) is the same as arm_smmu_of_match. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/arm-smmu-v3: Fix resource_size checkMasahiro Yamada
This is an off-by-one mistake. resource_size() returns res->end - res->start + 1. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10iommu/arm-smmu-v3: Populate VMID field for CMDQ_OP_TLBI_NH_VAShameer Kolothum
CMDQ_OP_TLBI_NH_VA requires VMID and this was missing since commit 1c27df1c0a82 ("iommu/arm-smmu: Use correct address mask for CMD_TLBI_S2_IPA"). Add it back. Fixes: 1c27df1c0a82 ("iommu/arm-smmu: Use correct address mask for CMD_TLBI_S2_IPA") Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-01-10drivers/iommu: Initialise module 'owner' field in iommu_device_set_ops()Will Deacon
Requiring each IOMMU driver to initialise the 'owner' field of their 'struct iommu_ops' is error-prone and easily forgotten. Follow the example set by PCI and USB by assigning THIS_MODULE automatically when registering the ops structure with IOMMU core. Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Will Deacon <will@kernel.org>
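One way to express the automatic assignment, following the wrapper-macro pattern used by PCI and USB (a sketch; the internal helper name is illustrative):

    #define iommu_device_set_ops(iommu, ops)                          \
    do {                                                              \
            struct iommu_ops *__ops = (struct iommu_ops *)(ops);     \
            __ops->owner = THIS_MODULE;                               \
            __iommu_device_set_ops(iommu, __ops);                     \
    } while (0)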
2020-01-07iommu/dma: fix variable 'cookie' set but not usedQian Cai
The commit c18647900ec8 ("iommu/dma: Relax locking in iommu_dma_prepare_msi()") introduced a compilation warning:

  drivers/iommu/dma-iommu.c: In function 'iommu_dma_prepare_msi':
  drivers/iommu/dma-iommu.c:1206:27: warning: variable 'cookie' set but not used [-Wunused-but-set-variable]
    struct iommu_dma_cookie *cookie;
                             ^~~~~~

Fixes: c18647900ec8 ("iommu/dma: Relax locking in iommu_dma_prepare_msi()") Signed-off-by: Qian Cai <cai@lca.pw> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Unlink device if failed to add to groupJon Derrick
If the device fails to be added to the group, make sure to unlink the reference before returning. Signed-off-by: Jon Derrick <jonathan.derrick@intel.com> Fixes: 39ab9555c2411 ("iommu: Add sysfs bindings for struct iommu_device") Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu: Remove device link to group on failureJon Derrick
This adds the missing teardown step that removes the device link from the group when the device addition fails. Signed-off-by: Jon Derrick <jonathan.derrick@intel.com> Fixes: 797a8b4d768c5 ("iommu: Handle default domain attach failure") Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Fix adding non-PCI devices to Intel IOMMUPatrick Steinhardt
Starting with commit fa212a97f3a3 ("iommu/vt-d: Probe DMA-capable ACPI name space devices"), we now probe DMA-capable ACPI name space devices. On Dell XPS 13 9343, which has an Intel LPSS platform device INTL9C60 enumerated via ACPI, this change leads to the following warning:

  ------------[ cut here ]------------
  WARNING: CPU: 1 PID: 1 at pci_device_group+0x11a/0x130
  CPU: 1 PID: 1 Comm: swapper/0 Tainted: G T 5.5.0-rc3+ #22
  Hardware name: Dell Inc. XPS 13 9343/0310JH, BIOS A20 06/06/2019
  RIP: 0010:pci_device_group+0x11a/0x130
  Code: f0 ff ff 48 85 c0 49 89 c4 75 c4 48 8d 74 24 10 48 89 ef e8 48 ef ff ff 48 85 c0 49 89 c4 75 af e8 db f7 ff ff 49 89 c4 eb a5 <0f> 0b 49 c7 c4 ea ff ff ff eb 9a e8 96 1e c7 ff 66 0f 1f 44 00 00
  RSP: 0000:ffffc0d6c0043cb0 EFLAGS: 00010202
  RAX: 0000000000000000 RBX: ffffa3d1d43dd810 RCX: 0000000000000000
  RDX: ffffa3d1d4fecf80 RSI: ffffa3d12943dcc0 RDI: ffffa3d1d43dd810
  RBP: ffffa3d1d43dd810 R08: 0000000000000000 R09: ffffa3d1d4c04a80
  R10: ffffa3d1d4c00880 R11: ffffa3d1d44ba000 R12: 0000000000000000
  R13: ffffa3d1d4383b80 R14: ffffa3d1d4c090d0 R15: ffffa3d1d4324530
  FS:  0000000000000000(0000) GS:ffffa3d1d6700000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000000000000 CR3: 000000000460a001 CR4: 00000000003606e0
  Call Trace:
   ? iommu_group_get_for_dev+0x81/0x1f0
   ? intel_iommu_add_device+0x61/0x170
   ? iommu_probe_device+0x43/0xd0
   ? intel_iommu_init+0x1fa2/0x2235
   ? pci_iommu_init+0x52/0xe7
   ? e820__memblock_setup+0x15c/0x15c
   ? do_one_initcall+0xcc/0x27e
   ? kernel_init_freeable+0x169/0x259
   ? rest_init+0x95/0x95
   ? kernel_init+0x5/0xeb
   ? ret_from_fork+0x35/0x40
  ---[ end trace 28473e7abc25b92c ]---
  DMAR: ACPI name space devices didn't probe correctly

The bug results from the fact that while we now enumerate ACPI devices, we aren't able to handle any non-PCI device when generating the device group. Fix the issue by implementing an Intel-specific callback that returns `pci_device_group` only if the device is a PCI device. Otherwise, it will return a generic device group. Fixes: fa212a97f3a3 ("iommu/vt-d: Probe DMA-capable ACPI name space devices") Signed-off-by: Patrick Steinhardt <ps@pks.im> Cc: stable@vger.kernel.org # v5.3+ Acked-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/amd: Fix typos for PPR macrosAdrian Huang
Bits 13 and 14 of the IOMMU control register are PPRLogEn and PPRIntEn. They are related to PPR (Peripheral Page Request), not 'PPF'. Fix the macro names accordingly. Signed-off-by: Adrian Huang <ahuang12@lenovo.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
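A hypothetical before/after of the rename (the real macros live in amd_iommu_types.h and may be spelled slightly differently):

    /* Before: misnamed after 'PPF' */
    #define CONTROL_PPFLOG_EN    13
    #define CONTROL_PPFINT_EN    14

    /* After: bits 13 and 14 enable PPR logging and PPR interrupts */
    #define CONTROL_PPRLOG_EN    13
    #define CONTROL_PPRINT_EN    14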
2020-01-07iommu/amd: Remove local variablesAdrian Huang
The usage of the local variables 'range' and 'misc' was removed by commit 226e889b20a9 ("iommu/amd: Remove first/last_device handling") and commit 23c742db2171 ("iommu/amd: Split out PCI related parts of IOMMU initialization"). So, remove them accordingly. Signed-off-by: Adrian Huang <ahuang12@lenovo.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: debugfs: Add support to show page table internalsLu Baolu
Export page table internals of the domain attached to each device. Example of such dump on a Skylake machine:

  $ sudo cat /sys/kernel/debug/iommu/intel/domain_translation_struct
  [ ... ]
  Device 0000:00:14.0 with pasid 0 @0x15f3d9000
  IOVA_PFN                 PML5E                   PML4E
  0x000000008ced0 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced1 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced2 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced3 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced4 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced5 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced6 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced7 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced8 |        0x0000000000000000      0x000000015f3da003
  0x000000008ced9 |        0x0000000000000000      0x000000015f3da003
  PDPE                     PDE                     PTE
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced0003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced1003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced2003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced3003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced4003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced5003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced6003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced7003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced8003
  0x000000015f3db003       0x000000015f3dc003      0x000000008ced9003
  [ ... ]

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Use iova over first levelLu Baolu
Now that all map/unmap paths support the first-level page table, turn it on if the hardware supports scalable mode. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Update first level super page capabilityLu Baolu
First-level translation may map input addresses to 4-KByte pages, 2-MByte pages, or 1-GByte pages. Support for 4-KByte and 2-MByte pages is mandatory for first-level translation. Hardware support for 1-GByte pages is reported through the FL1GP field in the Capability Register. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Make first level IOVA canonicalLu Baolu
First-level translation restricts the input-address to a canonical address (i.e., address bits 63:N have the same value as address bit [N-1], where N is 48-bits with 4-level paging and 57-bits with 5-level paging). (section 3.6 in the spec) This makes first level IOVA canonical by using IOVA with bit [N-1] always cleared. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
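A sketch of the idea (names are illustrative, not the exact intel-iommu hunk): when first-level translation is in use, cap IOVA allocation below 1 << (N-1) so that bit [N-1] is zero and bits 63:N-1 are therefore all equal, i.e. the address stays canonical.

    if (domain_use_first_level(domain))
            max_addr = (1ULL << (domain->gaw - 1)) - 1;  /* bit N-1 never set */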
2020-01-07iommu/vt-d: Flush PASID-based iotlb for iova over first levelLu Baolu
When software has changed first-level tables, it should invalidate the affected IOTLB and the paging-structure-caches using the PASID- based-IOTLB Invalidate Descriptor defined in spec 6.5.2.4. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Setup pasid entries for iova over first levelLu Baolu
Intel VT-d in scalable mode supports two types of page tables for IOVA translation: first level and second level. The IOMMU driver can choose one from both for IOVA translation according to the use case. This sets up the pasid entry if a domain is selected to use the first-level page table for iova translation. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Add PASID_FLAG_FL5LP for first-level pasid setupLu Baolu
Currently, intel_pasid_setup_first_level() uses 5-level paging for first-level translation if the CPUs use 5-level paging mode too. This makes sense for SVA usages since the page table is shared between CPUs and IOMMUs. But it makes no sense if we only want to use first level for IOVA translation. Add a PASID_FLAG_FL5LP bit to the flags which indicates whether the 5-level paging mode should be used. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Add set domain DOMAIN_ATTR_NESTING attrLu Baolu
This adds the Intel VT-d specific callback for setting the DOMAIN_ATTR_NESTING domain attribute. It is necessary to let the VT-d driver know that the domain represents a virtual machine which requires the IOMMU hardware to support nested translation mode. Return success if the IOMMU hardware supports nested mode, otherwise failure. Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Identify domains using first level page tableLu Baolu
This checks whether a domain should use the first level page table for map/unmap and marks it in the domain structure. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Loosen requirement for flush queue initializationLu Baolu
Currently, if flush queue initialization fails, we return an error or enforce the system-wide strict mode. These are unnecessary because we always check the existence of a flush queue before queuing any IOVAs for lazy flushing. Printing an informational message is enough. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Avoid iova flush queue in strict modeLu Baolu
If Intel IOMMU strict mode is enabled by users, it's unnecessary to create the iova flush queue. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: trace: Extend map_sg trace eventLu Baolu
The current map_sg code stores trace messages in a coarse manner. Extend it so that more detailed messages can be traced. The map_sg trace messages look like:

  map_sg: dev=0000:00:17.0 [1/9] dev_addr=0xf8f90000 phys_addr=0x158051000 size=4096
  map_sg: dev=0000:00:17.0 [2/9] dev_addr=0xf8f91000 phys_addr=0x15a858000 size=4096
  map_sg: dev=0000:00:17.0 [3/9] dev_addr=0xf8f92000 phys_addr=0x15aa13000 size=4096
  map_sg: dev=0000:00:17.0 [4/9] dev_addr=0xf8f93000 phys_addr=0x1570f1000 size=8192
  map_sg: dev=0000:00:17.0 [5/9] dev_addr=0xf8f95000 phys_addr=0x15c6d0000 size=4096
  map_sg: dev=0000:00:17.0 [6/9] dev_addr=0xf8f96000 phys_addr=0x157194000 size=4096
  map_sg: dev=0000:00:17.0 [7/9] dev_addr=0xf8f97000 phys_addr=0x169552000 size=4096
  map_sg: dev=0000:00:17.0 [8/9] dev_addr=0xf8f98000 phys_addr=0x169dde000 size=4096
  map_sg: dev=0000:00:17.0 [9/9] dev_addr=0xf8f99000 phys_addr=0x148351000 size=4096

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Misc macro clean up for SVMJacob Pan
Use the combined macro for_each_svm_dev() to simplify SVM device iteration and error checking. Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
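The helper is conceptually along these lines (a sketch of a list-walk-plus-match macro):

    #define for_each_svm_dev(sdev, svm, d)                            \
            list_for_each_entry((sdev), &(svm)->devs, list)           \
                    if ((d) != (sdev)->dev) {} else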
2020-01-07iommu/vt-d: Avoid sending invalid page responseJacob Pan
Page responses should only be sent when the last page in group (LPIG) bit or private data is present in the page request. This patch avoids sending invalid descriptors. Fixes: 5d308fc1ecf53 ("iommu/vt-d: Add 256-bit invalidation descriptor support") Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-01-07iommu/vt-d: Replace Intel specific PASID allocator with IOASIDJacob Pan
Make use of generic IOASID code to manage PASID allocation, free, and lookup. Replace Intel specific code. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>