path: root/drivers/iommu
2020-10-06  dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>  (Christoph Hellwig)
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c. Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-10-06  dma-mapping: split <linux/dma-mapping.h>  (Christoph Hellwig)
Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header so that they don't get pulled into all the drivers. That also means the architecture-specific <asm/dma-mapping.h> is not pulled in by <linux/dma-mapping.h> any more, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not overly portable drivers. Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-10-01  iommu/vt-d: Fix lockdep splat in iommu_flush_dev_iotlb()  (Lu Baolu)
Taking &iommu->lock without disabling interrupts causes lockdep warnings:

[   12.703950] ========================================================
[   12.703962] WARNING: possible irq lock inversion dependency detected
[   12.703975] 5.9.0-rc6+ #659 Not tainted
[   12.703983] --------------------------------------------------------
[   12.703995] systemd-udevd/284 just changed the state of lock:
[   12.704007] ffffffffbd6ff4d8 (device_domain_lock){..-.}-{2:2}, at: iommu_flush_dev_iotlb.part.57+0x2e/0x90
[   12.704031] but this lock took another, SOFTIRQ-unsafe lock in the past:
[   12.704043]  (&iommu->lock){+.+.}-{2:2}
[   12.704045] and interrupts could create inverse lock ordering between them.
[   12.704073] other info that might help us debug this:
[   12.704085]  Possible interrupt unsafe locking scenario:
[   12.704097]        CPU0                    CPU1
[   12.704106]        ----                    ----
[   12.704115]   lock(&iommu->lock);
[   12.704123]                                local_irq_disable();
[   12.704134]                                lock(device_domain_lock);
[   12.704146]                                lock(&iommu->lock);
[   12.704158]   <Interrupt>
[   12.704164]     lock(device_domain_lock);
[   12.704174]  *** DEADLOCK ***

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200927062428.13713-1-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
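Editor's note: a minimal sketch, in C, of the irq-safe locking pattern this kind of fix relies on. The struct and function names below are placeholders for illustration, not the actual VT-d code.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_device_domain_lock);

    struct demo_iommu {
        spinlock_t lock;
    };

    /*
     * A lock that is also taken from (soft)irq context must be acquired
     * with interrupts disabled, otherwise lockdep reports the inversion
     * shown in the splat above.
     */
    static void demo_flush_dev_iotlb(struct demo_iommu *iommu)
    {
        unsigned long flags;

        spin_lock_irqsave(&demo_device_domain_lock, flags);
        /* Interrupts are already off here, so plain spin_lock() is fine
         * for the inner lock. */
        spin_lock(&iommu->lock);
        /* ... issue the device IOTLB invalidation ... */
        spin_unlock(&iommu->lock);
        spin_unlock_irqrestore(&demo_device_domain_lock, flags);
    }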
2020-10-01  iommu/vt-d: Check UAPI data processed by IOMMU core  (Jacob Pan)
The IOMMU generic layer already does sanity checks on UAPI data for version match and argsz range based on generic information. This patch adjusts the following data-checking responsibilities:
- removes the redundant version check from the VT-d driver
- removes the check for vendor-specific data size
- adds a check for the use of reserved/undefined flags
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-7-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-10-01  iommu/uapi: Handle data and argsz filled by users  (Jacob Pan)
IOMMU user APIs are responsible for processing user data. This patch changes the interface such that user pointers can be passed into IOMMU code directly. Separate kernel APIs without user pointers are introduced for in-kernel users of the UAPI functionality. IOMMU UAPI data has a user filled argsz field which indicates the data length of the structure. User data is not trusted, argsz must be validated based on the current kernel data size, mandatory data size, and feature flags. User data may also be extended, resulting in possible argsz increase. Backward compatibility is ensured based on size and flags (or the functional equivalent fields) checking. This patch adds sanity checks in the IOMMU layer. In addition to argsz, reserved/unused fields in padding, flags, and version are also checked. Details are documented in Documentation/userspace-api/iommu.rst Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-6-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
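Editor's note: a hedged sketch of the argsz scheme described above. The struct, its sizes, and the helper are hypothetical; they only illustrate copying a user-filled, variable-sized UAPI struct and rejecting reserved flags.

    #include <linux/types.h>
    #include <linux/stddef.h>
    #include <linux/string.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/uaccess.h>

    /* Hypothetical UAPI struct: argsz is filled in by user space. */
    struct demo_uapi_data {
        __u32 argsz;
        __u32 flags;
        __u64 payload;
    };

    #define DEMO_UAPI_MINSZ offsetof(struct demo_uapi_data, payload)

    static int demo_copy_uapi(struct demo_uapi_data *kdata, void __user *uptr)
    {
        u32 argsz;

        if (get_user(argsz, (u32 __user *)uptr))
            return -EFAULT;

        /* argsz must cover at least the mandatory fields. */
        if (argsz < DEMO_UAPI_MINSZ)
            return -EINVAL;

        /* Copy no more than this kernel understands; newer user space
         * may pass a larger struct. */
        memset(kdata, 0, sizeof(*kdata));
        if (copy_from_user(kdata, uptr, min_t(u32, argsz, sizeof(*kdata))))
            return -EFAULT;

        /* Reserved/undefined flags must be zero. */
        if (kdata->flags)
            return -EINVAL;

        return 0;
    }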
2020-10-01  iommu/uapi: Rename uapi functions  (Jacob Pan)
User APIs such as iommu_sva_unbind_gpasid() may also be used by the kernel. Since we introduced user pointers to the UAPI functions, in-kernel callers cannot share the same APIs. In-kernel callers are also trusted, so there is no need to validate the data. We plan to have two flavors of the same API functions: one called through ioctls, carrying a user pointer, and one called directly with valid IOMMU UAPI structs. To differentiate the two, rename the existing functions with an iommu_uapi_ prefix. Suggested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/1601051567-54787-5-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-10-01  iommu/uapi: Use named union for user data  (Jacob Pan)
The IOMMU UAPI data size is filled in by user space and must be validated by the kernel. To ensure backward compatibility, user data can only be extended by either re-purposing padding bytes or extending the variable-sized union at the end; no size change is allowed before the union. Therefore, the minimum size is the offset of the union. To use offsetof() on the union, we must make it named. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/linux-iommu/20200611145518.0c2817d6@x1.home/ Link: https://lore.kernel.org/r/1601051567-54787-4-git-send-email-jacob.jun.pan@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
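Editor's note: a small illustration of why the union must be named for offsetof() to work; the struct here is made up, not the real UAPI definition.

    #include <linux/stddef.h>
    #include <linux/types.h>

    struct demo_cache_invalidate {
        __u32 argsz;
        __u32 flags;
        /* Naming the union lets the minimum size be expressed with
         * offsetof(); an anonymous union has no name to refer to. */
        union {
            __u64 addr_info;
            __u64 pasid_info;
        } granu;
    };

    /* Minimum acceptable argsz: everything up to the variable union. */
    #define DEMO_MIN_ARGSZ offsetof(struct demo_cache_invalidate, granu)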
2020-10-01  iommu/amd: Fix the overwritten field in IVMD header  (Adrian Huang)
Commit 387caf0b759a ("iommu/amd: Treat per-device exclusion ranges as r/w unity-mapped regions") accidentally overwrites the 'flags' field in IVMD (struct ivmd_header) when the I/O virtualization memory definition is associated with the exclusion range entry. This leads to the corrupted IVMD table (incorrect checksum). The kdump kernel reports the invalid checksum: ACPI BIOS Warning (bug): Incorrect checksum in table [IVRS] - 0x5C, should be 0x60 (20200717/tbprint-177) AMD-Vi: [Firmware Bug]: IVRS invalid checksum Fix the above-mentioned issue by modifying the 'struct unity_map_entry' member instead of the IVMD header. Cleanup: The *exclusion_range* functions are not used anymore, so get rid of them. Fixes: 387caf0b759a ("iommu/amd: Treat per-device exclusion ranges as r/w unity-mapped regions") Reported-and-tested-by: Baoquan He <bhe@redhat.com> Signed-off-by: Adrian Huang <ahuang12@lenovo.com> Cc: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20200926102602.19177-1-adrianhuang0701@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-29  iommu/qcom: add missing put_device() call in qcom_iommu_of_xlate()  (Yu Kuai)
If of_find_device_by_node() succeeds, qcom_iommu_of_xlate() doesn't call a corresponding put_device(). Add put_device() to fix the exception handling in this function. Fixes: 0ae349a0f33f ("iommu/qcom: Add qcom_iommu") Acked-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Yu Kuai <yukuai3@huawei.com> Link: https://lore.kernel.org/r/20200929014037.2436663-1-yukuai3@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
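Editor's note: a sketch of the reference-counting pattern behind this fix; of_find_device_by_node() takes a device reference that must be dropped on error paths. The function names and the error condition are illustrative only.

    #include <linux/errno.h>
    #include <linux/of_platform.h>
    #include <linux/platform_device.h>

    static int demo_iommu_of_xlate(struct device *dev,
                                   struct of_phandle_args *args)
    {
        struct platform_device *iommu_pdev = of_find_device_by_node(args->np);

        if (!iommu_pdev)
            return -ENODEV;

        if (!platform_get_drvdata(iommu_pdev)) {
            /* Error path: drop the reference taken by
             * of_find_device_by_node() before bailing out. */
            put_device(&iommu_pdev->dev);
            return -EPROBE_DEFER;
        }

        /* ... normal xlate handling ... */
        return 0;
    }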
2020-09-28  iommu/arm-smmu-v3: Add SVA device feature  (Jean-Philippe Brucker)
Implement the IOMMU device feature callbacks to support the SVA feature. At the moment dev_has_feat() returns false since I/O Page Faults and BTM aren't yet implemented. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-12-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Check for SVA features  (Jean-Philippe Brucker)
Aggregate all sanity-checks for sharing CPU page tables with the SMMU under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to check FEAT_ATS and FEAT_PRI. For platform SVA, they will have to check FEAT_STALLS. Introduce ARM_SMMU_FEAT_BTM (Broadcast TLB Maintenance), but don't enable it at the moment. Since the entire VMID space is shared with the CPU, enabling DVM (by clearing SMMU_CR2.PTM) could result in over-invalidation and affect performance of stage-2 mappings. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200918101852.582559-11-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Seize private ASID  (Jean-Philippe Brucker)
The SMMU has a single ASID space, the union of the shared and private ASID sets. This means that the SMMU driver competes with the arch allocator for ASIDs. Shared ASIDs are those of Linux processes, allocated by the arch, and participate in broadcast TLB maintenance. Private ASIDs are allocated by the SMMU driver and used for "classic" map/unmap DMA; they require command-queue TLB invalidations. When we pin down an mm_context and get an ASID that is already in use by the SMMU, it belongs to a private context. We used to simply abort the bind, but this is unfair to users that would be unable to bind a few seemingly random processes. Try to allocate a new private ASID for the context, and make the old ASID shared. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-10-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Share process page tables  (Jean-Philippe Brucker)
With Shared Virtual Addressing (SVA), we need to mirror the CPU TTBR, TCR, MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split into two sets, shared and private. Shared ASIDs correspond to those obtained from the arch ASID allocator, and private ASIDs are used for "classic" map/unmap DMA. A possible conflict happens when trying to use a shared ASID that has already been allocated for private use by the SMMU driver. This will be addressed in a later patch by replacing the private ASID; at the moment we return -EBUSY. Each mm_struct shared with the SMMU will have a single context descriptor. Add a refcount to keep track of this; it will be protected by the global SVA lock. Introduce a new arm-smmu-v3-sva.c file and the CONFIG_ARM_SMMU_V3_SVA option to let users opt in to SVA support. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-9-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Move definitions to a header  (Jean-Philippe Brucker)
Allow sharing structure definitions with the upcoming SVA support for Arm SMMUv3, by moving them to a separate header. We could surgically extract only what is needed but keeping all definitions in one place looks nicer. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-8-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/io-pgtable-arm: Move some definitions to a header  (Jean-Philippe Brucker)
Extract some of the most generic TCR defines, so they can be reused by the page table sharing code. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200918101852.582559-6-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer  (Zhou Wang)
Reading the 'prod' MMIO register in order to determine whether or not there is valid data beyond 'cons' for a given queue does not provide sufficient dependency ordering, as the resulting access is address dependent only on 'cons' and can therefore be speculated ahead of time, potentially allowing stale data to be read by the CPU. Use readl() instead of readl_relaxed() when updating the shadow copy of the 'prod' pointer, so that all speculated memory reads from the corresponding queue can occur only from valid slots. Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com> Link: https://lore.kernel.org/r/1601281922-117296-1-git-send-email-wangzhou1@hisilicon.com [will: Use readl() instead of explicit barrier. Update 'cons' side to match.] Signed-off-by: Will Deacon <will@kernel.org>
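Editor's note: a hedged sketch of the ordering concern described above; the queue structure is a placeholder, not the driver's actual type.

    #include <linux/io.h>
    #include <linux/types.h>

    struct demo_queue {
        u32 prod;               /* shadow copy of the producer index */
        u32 cons;
        void __iomem *prod_reg;
    };

    static void demo_queue_sync_prod(struct demo_queue *q)
    {
        /*
         * readl() (not readl_relaxed()) orders this MMIO read before
         * subsequent memory reads, so entries between cons and the new
         * prod can only be read from slots the hardware has published.
         */
        q->prod = readl(q->prod_reg);
    }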
2020-09-25  dma-iommu: implement ->alloc_noncoherent  (Christoph Hellwig)
Implement the alloc_noncoherent method to provide memory that is neither coherent nor contiguous. Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-25  dma-mapping: add a new dma_alloc_pages API  (Christoph Hellwig)
This API is the equivalent of alloc_pages, except that the returned memory is guaranteed to be DMA addressable by the passed in device. The implementation will also be used to provide a more sensible replacement for DMA_ATTR_NON_CONSISTENT flag. Additionally dma_alloc_noncoherent is switched over to use dma_alloc_pages as its backend. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)
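Editor's note: an illustrative use of the new API as described above; error handling is kept minimal and the wrapper functions are invented for the example.

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    /* Allocate pages that are DMA-addressable by @dev but not guaranteed
     * to be cache-coherent; callers must use the streaming sync API. */
    static struct page *demo_get_dma_pages(struct device *dev, size_t size,
                                           dma_addr_t *dma)
    {
        return dma_alloc_pages(dev, size, dma, DMA_BIDIRECTIONAL, GFP_KERNEL);
    }

    static void demo_put_dma_pages(struct device *dev, size_t size,
                                   struct page *page, dma_addr_t dma)
    {
        dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);
    }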
2020-09-25  Merge branch 'master' of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into dma-mapping-for-next  (Christoph Hellwig)
Pull in the latest 5.9 tree for the commit to revert the V4L2_FLAG_MEMORY_NON_CONSISTENT uapi addition.
2020-09-24  ACPI: Do not create new NUMA domains from ACPI static tables that are not SRAT  (Jonathan Cameron)
Several ACPI static tables contain references to proximity domains. ACPI 6.3 has clarified that only entries in SRAT may define a new domain (sec 5.2.16). Those tables described in the ACPI spec have additional clarifying text.
NFIT: Table 5-132, "Integer that represents the proximity domain to which the memory belongs. This number must match with corresponding entry in the SRAT table."
HMAT: Table 5-145, "... This number must match with the corresponding entry in the SRAT table's processor affinity structure ... if the initiator is a processor, or the Generic Initiator Affinity Structure if the initiator is a generic initiator".
IORT and DMAR are defined by external specifications. Intel Virtualization Technology for Directed I/O Rev 3.1 does not make any explicit statements, but the general SRAT statement above still applies. https://software.intel.com/sites/default/files/managed/c5/15/vt-directed-io-spec.pdf The IO Remapping Table, Platform Design Document rev D, also makes no explicit statement, but refers to the ACPI SRAT table for more information, and again the generic SRAT statement above applies. https://developer.arm.com/documentation/den0049/d/
In conclusion, any proximity domain specified in these tables should be a reference to a proximity domain also found in SRAT, and they should not be able to instantiate a new domain. Hence we switch to pxm_to_node(), which will only return existing nodes. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Barry Song <song.bao.hua@hisilicon.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
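Editor's note: a brief sketch of the lookup switch described above; pxm_to_node() only resolves proximity domains already established by SRAT, and the wrapper name is invented.

    #include <linux/acpi.h>
    #include <linux/numa.h>

    /* Resolve a proximity domain from a non-SRAT static table without
     * creating a new NUMA node; unknown domains yield NUMA_NO_NODE. */
    static int demo_table_pxm_to_node(int pxm)
    {
        return pxm_to_node(pxm);
    }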
2020-09-24  iommu/amd: Re-purpose Exclusion range registers to support SNP CWWB  (Suravee Suthikulpanit)
When the IOMMU SNP support bit is set in the IOMMU Extended Features register, hardware re-purposes the following registers:
1. IOMMU Exclusion Base register (MMIO offset 0020h) to Completion Wait Write-Back (CWWB) Base register
2. IOMMU Exclusion Range Limit (MMIO offset 0028h) to Completion Wait Write-Back (CWWB) Range Limit register
and requires the IOMMU CWWB semaphore base and range to be programmed in the register offset 0020h and 0028h accordingly. Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Link: https://lore.kernel.org/r/20200923121347.25365-4-suravee.suthikulpanit@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24  iommu/amd: Add support for RMP_PAGE_FAULT and RMP_HW_ERR  (Suravee Suthikulpanit)
IOMMU SNP support introduces two new IOMMU events:
* RMP Page Fault event
* RMP Hardware Error event
Hence, add reporting functions for these events. Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Link: https://lore.kernel.org/r/20200923121347.25365-3-suravee.suthikulpanit@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24  iommu/amd: Use 4K page for completion wait write-back semaphore  (Suravee Suthikulpanit)
IOMMU SNP support requires the completion wait write-back semaphore to be implemented using a 4K-aligned page, where the page address is to be programmed into the newly introduced MMIO base/range registers. This new scheme uses a per-iommu atomic variable to store the current semaphore value, which is incremented for every completion wait command. Since this new scheme is also compatible with non-SNP mode, generalize the driver to use 4K page for completion-wait semaphore in both modes. Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Link: https://lore.kernel.org/r/20200923121347.25365-2-suravee.suthikulpanit@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
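Editor's note: a hedged sketch of the per-IOMMU semaphore scheme described above; the field and function names are placeholders and do not reflect the actual AMD driver structures.

    #include <linux/atomic.h>
    #include <linux/types.h>

    struct demo_iommu {
        atomic64_t cmd_sem_val; /* monotonically increasing wait value */
        u64 *cmd_sem;           /* 4K-aligned write-back page */
    };

    /* Each completion-wait command waits for the next semaphore value,
     * which the hardware writes back into the 4K-aligned page. */
    static u64 demo_next_completion_value(struct demo_iommu *iommu)
    {
        return atomic64_inc_return(&iommu->cmd_sem_val);
    }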
2020-09-24  iommu/tegra-smmu: Allow to group clients in same swgroup  (Nicolin Chen)
There can be clients using the same swgroup in DT, for example i2c0 and i2c1. The current driver will add them to separate IOMMU groups, even though it has implemented the device_group() callback, which is meant to group devices using different swgroups such as DC and DCB. All clients having the same swgroup should also be added to the same IOMMU group so as to share an asid. Otherwise, the asid register may get overwritten every time a new device is attached. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20200911071643.17212-4-nicoleotsuka@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24  iommu/tegra-smmu: Fix iova->phys translation  (Nicolin Chen)
The IOVA might not always be 4KB aligned, so the tegra_smmu_iova_to_phys() function needs to add the lower 12-bit offset from the input iova. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20200911071643.17212-3-nicoleotsuka@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
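Editor's note: the gist of this fix as a hedged sketch; the PTE lookup is omitted and the macro and function names are illustrative.

    #include <linux/types.h>

    #define DEMO_SMMU_PTE_SHIFT     12
    #define DEMO_SMMU_OFFSET_MASK   ((1UL << DEMO_SMMU_PTE_SHIFT) - 1)

    /* page_phys comes from the PTE and is 4KB aligned; the caller's IOVA
     * may not be, so add back the in-page offset. */
    static phys_addr_t demo_iova_to_phys(phys_addr_t page_phys, unsigned long iova)
    {
        return page_phys + (iova & DEMO_SMMU_OFFSET_MASK);
    }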
2020-09-24  iommu/tegra-smmu: Do not use PAGE_SHIFT and PAGE_MASK  (Nicolin Chen)
PAGE_SHIFT and PAGE_MASK are defined corresponding to the page size for CPU virtual addresses, which means PAGE_SHIFT could be a number other than 12, but tegra-smmu maintains fixed 4KB IOVA pages and has a fixed [21:12] bit range for PTE entries. So this patch replaces all PAGE_SHIFT/PAGE_MASK references with macros derived from SMMU_PTE_SHIFT. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20200911071643.17212-2-nicoleotsuka@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24  iommu/vt-d: Use device numa domain if RHSA is missing  (Lu Baolu)
If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR table, fall back to the device's NUMA domain. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Cc: Ashok Raj <ashok.raj@intel.com> Link: https://lore.kernel.org/r/20200904010303.2961-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20200922060843.31546-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-24  iommu/exynos: add missing put_device() call in exynos_iommu_of_xlate()  (Yu Kuai)
If of_find_device_by_node() succeeds, exynos_iommu_of_xlate() doesn't call a corresponding put_device(). Add put_device() to fix the exception handling in this function. Fixes: aa759fd376fb ("iommu/exynos: Add callback for initializing devices from device tree") Signed-off-by: Yu Kuai <yukuai3@huawei.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200918011335.909141-1-yukuai3@huawei.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-21  iommu/arm-smmu: Constify some helpers  (Rob Clark)
Sprinkle a few `const`s where helpers don't need write access. Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  iommu/arm-smmu: Prepare for the adreno-smmu implementation  (Jordan Crouse)
Do a bit of prep work to add the upcoming adreno-smmu implementation. Add a hook to allow the implementation to choose which context banks to allocate. Move some of the common structs to arm-smmu.h in anticipation of them being used by the implementations, and update some of the existing hooks to pass more information that the implementation will need. These modifications will be used by the upcoming Adreno SMMU implementation to identify the GPU device and properly configure it for pagetable switching. Co-developed-by: Rob Clark <robdclark@chromium.org> Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  iommu/arm-smmu: Add support for split pagetables  (Jordan Crouse)
Enable TTBR1 for a context bank if IO_PGTABLE_QUIRK_ARM_TTBR1 is selected by the io-pgtable configuration. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  iommu/arm-smmu: Pass io-pgtable config to implementation specific function  (Jordan Crouse)
Construct the io-pgtable config before calling the implementation specific init_context function and pass it so the implementation specific function can get a chance to change it before the io-pgtable is created. Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  iommu/arm-smmu-v3: Fix endianness annotations  (Jean-Philippe Brucker)
When building with C=1, sparse reports some issues regarding endianness annotations:

arm-smmu-v3.c:221:26: warning: cast to restricted __le64
arm-smmu-v3.c:221:24: warning: incorrect type in assignment (different base types)
arm-smmu-v3.c:221:24:    expected restricted __le64 [usertype]
arm-smmu-v3.c:221:24:    got unsigned long long [usertype]
arm-smmu-v3.c:229:20: warning: incorrect type in argument 1 (different base types)
arm-smmu-v3.c:229:20:    expected restricted __le64 [usertype] *[assigned] dst
arm-smmu-v3.c:229:20:    got unsigned long long [usertype] *ent
arm-smmu-v3.c:229:25: warning: incorrect type in argument 2 (different base types)
arm-smmu-v3.c:229:25:    expected unsigned long long [usertype] *[assigned] src
arm-smmu-v3.c:229:25:    got restricted __le64 [usertype] *
arm-smmu-v3.c:396:20: warning: incorrect type in argument 1 (different base types)
arm-smmu-v3.c:396:20:    expected restricted __le64 [usertype] *[assigned] dst
arm-smmu-v3.c:396:20:    got unsigned long long *
arm-smmu-v3.c:396:25: warning: incorrect type in argument 2 (different base types)
arm-smmu-v3.c:396:25:    expected unsigned long long [usertype] *[assigned] src
arm-smmu-v3.c:396:25:    got restricted __le64 [usertype] *
arm-smmu-v3.c:1349:32: warning: invalid assignment: |=
arm-smmu-v3.c:1349:32:    left side has type restricted __le64
arm-smmu-v3.c:1349:32:    right side has type unsigned long
arm-smmu-v3.c:1396:53: warning: incorrect type in argument 3 (different base types)
arm-smmu-v3.c:1396:53:    expected restricted __le64 [usertype] *dst
arm-smmu-v3.c:1396:53:    got unsigned long long [usertype] *strtab
arm-smmu-v3.c:1424:39: warning: incorrect type in argument 1 (different base types)
arm-smmu-v3.c:1424:39:    expected unsigned long long [usertype] *[assigned] strtab
arm-smmu-v3.c:1424:39:    got restricted __le64 [usertype] *l2ptr

While harmless, they are incorrect and could hide actual errors during development. Fix them. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200918141856.629722-1-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-21  iommu/io-pgtable-arm: Clean up faulty sanity check  (Robin Murphy)
Checking for a nonzero dma_pfn_offset was a quick shortcut to validate whether the DMA == phys assumption could hold at all. Checking for a non-NULL dma_range_map is not quite equivalent, since a map may be present to describe a limited DMA window even without an offset, and thus this check can now yield false positives. However, it only ever served to short-circuit going all the way through to __arm_lpae_alloc_pages(), failing the canonical test there, and having a bit more to clean up. As such, we can simply remove it without loss of correctness. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2020-09-18  iommu/dma: Handle init_iova_flush_queue() failure in dma-iommu path  (Tom Murphy)
The init_iova_flush_queue() function can fail if we run out of memory. Fall back to noflush queue if it fails. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200910122539.3662-1-murphyt7@tcd.ie Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-18  iommu/amd: Restore IRTE.RemapEn bit for amd_iommu_activate_guest_mode  (Suravee Suthikulpanit)
Commit e52d58d54a32 ("iommu/amd: Use cmpxchg_double() when updating 128-bit IRTE") removed an assumption that modify_irte_ga always set the valid bit, which requires the callers to set the appropriate value for the struct irte_ga.valid bit before calling the function. Commit 26e495f34107 ("iommu/amd: Restore IRTE.RemapEn bit after programming IRTE") did this for amd_iommu_deactivate_guest_mode(); the same change is also needed for amd_iommu_activate_guest_mode(). Otherwise, this could trigger IO_PAGE_FAULT for VFIO-based VMs with AVIC enabled. Fixes: e52d58d54a321 ("iommu/amd: Use cmpxchg_double() when updating 128-bit IRTE") Reported-by: Maxim Levitsky <mlevitsk@redhat.com> Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Tested-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Cc: Joao Martins <joao.m.martins@oracle.com> Link: https://lore.kernel.org/r/20200916111720.43913-1-suravee.suthikulpanit@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-18  iommu/tegra-smmu: Fix tlb_mask  (Nicolin Chen)
The "num_tlb_lines" might not be a power-of-2 value, being 48 on Tegra210 for example. So the current way of calculating tlb_mask using the num_tlb_lines is not correct: tlb_mask=0x5f in case of num_tlb_lines=48, which will trim a setting of 0x30 (48) to 0x10. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20200917113155.13438-2-nicoleotsuka@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-18  iommu/pamu: Replace use of kzfree with kfree_sensitive  (Alex Dewar)
kzfree() is effectively deprecated as of commit 453431a54934 ("mm, treewide: rename kzfree() to kfree_sensitive()") and is now simply an alias for kfree_sensitive(). So just replace it with kfree_sensitive(). Signed-off-by: Alex Dewar <alex.dewar90@gmail.com> Link: https://lore.kernel.org/r/20200911135325.66156-1-alex.dewar90@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
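Editor's note: the rename in a nutshell; the surrounding function is hypothetical.

    #include <linux/slab.h>

    /* kfree_sensitive() zeroes the buffer before freeing it, which is why
     * it is used for data such as key material. */
    static void demo_drop_secret(void *secret)
    {
        kfree_sensitive(secret);        /* was: kzfree(secret); */
    }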
2020-09-18  iommu/renesas: Update help description for IPMMU_VMSA config  (Lad Prabhakar)
The ipmmu-vmsa driver is also used on Renesas RZ/G{1,2} SoCs; update the help description for the IPMMU_VMSA config option to reflect this. Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Reviewed-by: Chris Paterson <Chris.Paterson2@renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20200911101912.20701-1-prabhakar.mahadev-lad.rj@bp.renesas.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-18  iommu/amd: Fix potential @entry null deref  (Joao Martins)
After commit 26e495f34107 ("iommu/amd: Restore IRTE.RemapEn bit after programming IRTE"), smatch warns:
drivers/iommu/amd/iommu.c:3870 amd_iommu_deactivate_guest_mode() warn: variable dereferenced before check 'entry' (see line 3867)
Fix this by moving the @valid assignment to after @entry has been checked for NULL. Fixes: 26e495f34107 ("iommu/amd: Restore IRTE.RemapEn bit after programming IRTE") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> Link: https://lore.kernel.org/r/20200910171621.12879-1-joao.m.martins@oracle.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
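Editor's note: a sketch of the reordering described in the fix; the structure is simplified and the names are illustrative, not the actual AMD IRTE definitions.

    #include <linux/types.h>
    #include <linux/errno.h>

    struct demo_irte {
        u64 lo;
    };

    static int demo_deactivate_guest_mode(struct demo_irte *entry)
    {
        u64 valid;

        if (!entry)                     /* check @entry first ... */
            return -EINVAL;

        valid = entry->lo & 1;          /* ... dereference it only afterwards */

        /* Rebuild the entry, preserving the previously saved valid bit. */
        entry->lo = valid;

        return 0;
    }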
2020-09-18  iommu/mediatek: Add support for MT8167  (Fabien Parent)
Add support for the IOMMU on MT8167. Signed-off-by: Fabien Parent <fparent@baylibre.com> Reviewed-by: Yong Wu <yong.wu@mediatek.com> Link: https://lore.kernel.org/r/20200907101649.1573134-3-fparent@baylibre.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-18  iommu/mediatek: Add flag for legacy ivrp paddr  (Fabien Parent)
Add a new flag in order to select which IVRP_PADDR format is used by an SoC. Signed-off-by: Fabien Parent <fparent@baylibre.com> Reviewed-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Link: https://lore.kernel.org/r/20200907101649.1573134-2-fparent@baylibre.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2020-09-17  x86/mmu: Allocate/free a PASID  (Fenghua Yu)
A PASID is allocated for an "mm" the first time any thread binds to an SVA-capable device and is freed from the "mm" when the SVA is unbound by the last thread. It's possible for the "mm" to have different PASID values in different binding/unbinding SVA cycles. The mm's PASID (non-zero for a valid PASID or 0 for an invalid PASID) is propagated to a per-thread PASID MSR for all threads within the mm through IPI, context switch, or inheritance. This is done to ensure that a running thread has the right PASID in the MSR matching the mm's PASID. [ bp: s/SVM/SVA/g; massage. ] Suggested-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/1600187413-163670-10-git-send-email-fenghua.yu@intel.com
2020-09-17  iommu/vt-d: Change flags type to unsigned int in binding mm  (Fenghua Yu)
"flags" passed to intel_svm_bind_mm() is a bit mask and should be defined as "unsigned int" instead of "int". Change its type to "unsigned int". Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lkml.kernel.org/r/1600187413-163670-3-git-send-email-fenghua.yu@intel.com
2020-09-17  drm, iommu: Change type of pasid to u32  (Fenghua Yu)
PASID is defined as a few different types in iommu including "int", "u32", and "unsigned int". To be consistent and to match with uapi definitions, define PASID and its variations (e.g. max PASID) as "u32". "u32" is also shorter and a little more explicit than "unsigned int". No PASID type change in uapi although it defines PASID as __u64 in some places. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lkml.kernel.org/r/1600187413-163670-2-git-send-email-fenghua.yu@intel.com
2020-09-17  dma-mapping: introduce DMA range map, supplanting dma_pfn_offset  (Jim Quinlan)
The new field 'dma_range_map' in struct device is used to facilitate the use of single or multiple offsets between mapping regions of cpu addrs and dma addrs. It subsumes the role of "dev->dma_pfn_offset" which was only capable of holding a single uniform offset and had no region bounds checking. The function of_dma_get_range() has been modified so that it takes a single argument -- the device node -- and returns a map, NULL, or an error code. The map is an array that holds the information regarding the DMA regions. Each range entry contains the address offset, the cpu_start address, the dma_start address, and the size of the region. of_dma_configure() is the typical manner to set range offsets but there are a number of ad hoc assignments to "dev->dma_pfn_offset" in the kernel driver code. These cases now invoke the function dma_direct_set_offset(dev, cpu_addr, dma_addr, size). Signed-off-by: Jim Quinlan <james.quinlan@broadcom.com> [hch: various interface cleanups] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Nathan Chancellor <natechancellor@gmail.com>
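Editor's note: a hedged sketch of how an ad hoc dma_pfn_offset assignment now looks; the addresses and size are invented for illustration, and the header providing the dma_direct_set_offset() declaration has moved between kernel versions.

    #include <linux/dma-map-ops.h>  /* declaration location varies by kernel version */
    #include <linux/sizes.h>

    /* One constant-offset range: 1 GiB of RAM at CPU address 0x80000000
     * that the device sees starting at bus address 0x0. */
    static int demo_setup_dma_offset(struct device *dev)
    {
        return dma_direct_set_offset(dev, 0x80000000, 0x00000000, SZ_1G);
    }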
2020-09-16  iommu/amd: Remove domain search for PCI/MSI  (Thomas Gleixner)
Now that the domain can be retrieved through device::msi_domain, the domain search for PCI_MSI[X] is no longer required. Remove it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20200826112334.400700807@linutronix.de
2020-09-16  iommu/vt-d: Remove domain search for PCI/MSI[X]  (Thomas Gleixner)
Now that the domain can be retrieved through device::msi_domain, the domain search for PCI_MSI[X] is no longer required. Remove it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200826112334.305699301@linutronix.de
2020-09-16  iommm/amd: Store irq domain in struct device  (Thomas Gleixner)
As the next step to make X86 utilize the direct MSI irq domain operations, store the irq domain pointer in the device struct when a device is probed. It only overrides the irqdomain of devices which are handled by a regular PCI/MSI irq domain, which protects PCI devices behind special busses like VMD that have their own irq domain. No functional change. It just avoids the redirection through arch_*_msi_irqs() and allows the PCI/MSI core to directly invoke the irq domain alloc/free functions instead of having to look up the irq domain for every single MSI interrupt. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200826112333.806328762@linutronix.de
2020-09-16  iommm/vt-d: Store irq domain in struct device  (Thomas Gleixner)
As a first step to make X86 utilize the direct MSI irq domain operations, store the irq domain pointer in the device struct when a device is probed. This is done from dmar_pci_bus_add_dev() because it has to work even when DMA remapping is disabled. It only overrides the irqdomain of devices which are handled by a regular PCI/MSI irq domain, which protects PCI devices behind special busses like VMD that have their own irq domain. No functional change. It just avoids the redirection through arch_*_msi_irqs() and allows the PCI/MSI core to directly invoke the irq domain alloc/free functions instead of having to look up the irq domain for every single MSI interrupt. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200826112333.714566121@linutronix.de
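Editor's note: a hedged sketch of the probe-time hand-off described above; the key detail is skipping devices already on a non-default domain (for example, behind VMD). Names other than dev_get_msi_domain()/dev_set_msi_domain() are placeholders.

    #include <linux/device.h>
    #include <linux/msi.h>
    #include <linux/irqdomain.h>

    static void demo_probe_set_msi_domain(struct device *dev,
                                          struct irq_domain *ir_domain,
                                          struct irq_domain *default_domain)
    {
        /* Only take over devices still on the default PCI/MSI domain;
         * busses such as VMD keep their own irq domain. */
        if (dev_get_msi_domain(dev) == default_domain)
            dev_set_msi_domain(dev, ir_domain);
    }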