path: root/drivers/iommu
Age  Commit message  Author
2021-02-01  iommu/mediatek: Move geometry.aperture updating into domain_finalise  (Yong Wu)
Move the updating of the domain's geometry.aperture into domain_finalise. This is a preparatory patch for updating the domain region: the detailed iova region is only known at attach_device time.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-26-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Move domain_finalise into attach_device  (Yong Wu)
Currently domain_alloc only takes one parameter (the type), so there is no way to pass in any special data. This patch moves domain_finalise into attach_device, which has the device information and can therefore update the domain's geometry.aperture ranges per device. Strictly speaking, the data from mtk_iommu_get_m4u_data should be used as the parameter of mtk_iommu_domain_finalise in this patch, but dom->data is only used in the tlb ops, where the data is taken from m4u_list, so this is fine here.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-25-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Adjust the structure  (Yong Wu)
Add a "struct mtk_iommu_data *" member to "struct mtk_iommu_domain" to reduce the calls to mtk_iommu_get_m4u_data(). No functional change.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-24-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Support report iova 34bit translation fault in ISR  (Yong Wu)
If the iova exceeds 32 bits, the fault status register bit layout is slightly different.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-23-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Support up to 34bit iova in tlb flush  (Yong Wu)
With a 34bit iova, iova bits [33:32] map to bits [1:0] of the tlb flush register. Add a new macro for this. Since (iova + size - 1) may end in 0xfff, bits [1:0] would otherwise always read as 1, so the macro applies a mask.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-22-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
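A minimal sketch of such a macro, assuming the packing described above (the macro name is illustrative; lower_32_bits/upper_32_bits and GENMASK are standard kernel helpers):

    /* Pack a 34bit iova into the 32bit flush register: bits [31:12]
     * carry iova[31:12], bits [1:0] carry iova[33:32]. The GENMASK
     * drops the low offset bits, so an end address ending in 0xfff
     * cannot leak 1s into bits [1:0].
     */
    #define MTK_IOMMU_TLB_ADDR(iova) ({                                     \
            dma_addr_t _addr = iova;                                        \
            ((lower_32_bits(_addr) & GENMASK(31, 12)) | upper_32_bits(_addr)); \
    })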
2021-02-01  iommu/mediatek: Add power-domain operation  (Yong Wu)
In previous SoCs, the M4U HW is in the EMI power domain, which is always on. The latest M4U is in the display power domain, which may be turned on/off, so we have to add a pm_runtime interface for it. When an engine works, it always enables the power and clocks for smi-larb/smi-common, and the M4U's power is then powered on automatically via the device link with smi-common.

Note: we don't enable the M4U power in iommu_map/unmap for tlb flushing. If its power is already on, that is of course fine; if the power is off, the main tlb is reset when the M4U powers on, so a tlb flush while the M4U power is off is unnecessary and can simply be skipped. Therefore, we take a pm ref_count only when the pm status is ACTIVE and skip the flush otherwise. Since tlb_flush_range is called very often, the pm ref_count is only updated when the SoC has a power domain, to avoid touching dev->power.lock; tlb_flush_all is only called at boot, so it needs no power-domain check, keeping the code clean.

There is one case where the pm runtime status is not as expected at tlb flush time: after boot, the display may call dma_alloc_attrs before it calls pm_runtime_get(disp-dev), so the m4u's pm status is not active inside dma_alloc_attrs. Since this only happens right after boot, when the tlb is still clean, this is also considered ok.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-21-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
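A sketch of the flush-side pattern described above (has_pm and data->dev follow the driver's naming, details assumed; pm_runtime_get_if_in_use() only takes a reference if the device is already ACTIVE):

    /* Only flush when the m4u is powered; a powered-off m4u resets
     * its main tlb on power-on anyway, so the flush can be skipped.
     */
    bool has_pm = !!data->dev->pm_domain;

    if (has_pm && pm_runtime_get_if_in_use(data->dev) <= 0)
            return;         /* powered off (or error): skip the flush */

    /* ... write the tlb flush range registers here ... */

    if (has_pm)
            pm_runtime_put(data->dev);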
2021-02-01  iommu/mediatek: Add pm runtime callback  (Yong Wu)
In the pm runtime case, all the register backup/restore and bclk handling are controlled in the pm_runtime callbacks. Rename the original suspend/resume to runtime_suspend/resume, and use pm_runtime_force_suspend/resume as the normal suspend/resume. The iommu should suspend after the iommu consumer devices, hence the use of _LATE_.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-20-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
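A minimal sketch of the resulting dev_pm_ops wiring, assuming the callback names (SET_RUNTIME_PM_OPS and SET_LATE_SYSTEM_SLEEP_PM_OPS are the standard kernel macros):

    static const struct dev_pm_ops mtk_iommu_pm_ops = {
            SET_RUNTIME_PM_OPS(mtk_iommu_runtime_suspend,
                               mtk_iommu_runtime_resume, NULL)
            /* _LATE_ makes the iommu suspend after its consumers. */
            SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                         pm_runtime_force_resume)
    };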
2021-02-01  iommu/mediatek: Add device link for smi-common and m4u  (Yong Wu)
In the latest SoC, the M4U has its own power domain; thus, when an engine begins to work, it should first enable the power for the M4U. Currently, when an engine works, it always enables the power/clocks for the smi-larbs/smi-common. This patch adds a device_link between smi-common and the M4U, so that when the smi-common power is enabled, the M4U power is powered on automatically. Normally the M4U connects to several smi-larbs whose smi-common is always the same, so this patch gets the smi-common dev from the last smi-larb device and then adds the device_link.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-19-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
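A sketch of the link setup (smicomm_dev and data->dev are assumed names); device_link_add(consumer, supplier, flags) with DL_FLAG_PM_RUNTIME means a runtime-PM get on the consumer powers the supplier first:

    struct device_link *link;

    /* smi-common is the consumer, the m4u its supplier: powering
     * smi-common then powers the m4u automatically.
     */
    link = device_link_add(smicomm_dev, data->dev,
                           DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
    if (!link)
            dev_err(data->dev, "Unable to link %s\n", dev_name(smicomm_dev));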
2021-02-01  iommu/mediatek: Add error handle for mtk_iommu_probe  (Yong Wu)
The original code lacks error handling. This patch adds it.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-18-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Move hw_init into attach_device  (Yong Wu)
attach_device updates the pagetable base address register, so move the hw_init function there as well. Then pm_runtime_get/put only needs to be called once there, if the m4u has a power domain.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-17-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Update oas for v7s  (Yong Wu)
This patch only updates the oas for the different SoCs. If the SoC supports 4GB mode and the current dram size is 4GB, the oas is 33; otherwise it is still 32. In the latest SoC, the oas is 35 bits.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-16-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Add a flag for iova 34bits case  (Yong Wu)
Add a HW flag indicating whether the HW supports a 34bit IOVA; previous SoCs still use 32 bits. Normally the lvl1 pgtable size is 16KB when ias == 32; if ias == 34, the lvl1 pgtable size is 16KB * 4. The purpose of this patch is to save 16KB * 3 of contiguous memory on the previous SoCs.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-15-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/io-pgtable-arm-v7s: Quad lvl1 pgtable for MediaTek  (Yong Wu)
The standard input iova width is 32 bits. MediaTek quadruples the lvl1 pagetable (4 * lvl1), with no change to the lvl2 pagetable; the iova width can then reach 34 bits.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-14-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
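A rough illustration of the sizing (helper name hypothetical; v7s lvl1 descriptors are 4 bytes and the lvl1 index covers the iova bits above the 20 bits mapped below it):

    /* ias == 32: 2^(32-20) = 4096 entries * 4B = 16KB.
     * ias == 34: 2^(34-20) = 16384 entries * 4B = 64KB (16KB * 4).
     */
    static size_t v7s_lvl1_size(unsigned int ias)
    {
            return (1UL << (ias - 20)) * sizeof(u32);
    }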
2021-02-01  iommu/io-pgtable-arm-v7s: Add cfg as a param in some macros  (Yong Wu)
Add "cfg" as a parameter to some macros. This is a preparatory patch for the MediaTek lvl1 pgtable extension. No functional change.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-13-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/io-pgtable-arm-v7s: Clarify LVL_SHIFT/BITS macro  (Yong Wu)
The current _ARM_V7S_LVL_BITS/ARM_V7S_LVL_SHIFT use a formula to calculate the corresponding value for level 1 and level 2 to make the code look uniform, but their level 1 and level 2 values actually differ from each other. This patch only clarifies the two macros. No functional change.
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-12-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/io-pgtable-arm-v7s: Extend PA34 for MediaTek  (Yong Wu)
MediaTek extends bit 5 of the lvl1 and lvl2 descriptors to carry PA bit 34.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-11-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
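A sketch of the folding, with the PA32/PA33 bit positions assumed from the existing MediaTek quirk and only the bit-5/PA34 case taken from this patch (constant and function names are illustrative):

    #define MTK_PTE_PA_BIT32  BIT(9)  /* assumed: existing quirk bit */
    #define MTK_PTE_PA_BIT33  BIT(4)  /* assumed: existing quirk bit */
    #define MTK_PTE_PA_BIT34  BIT(5)  /* added here: carries PA bit 34 */

    /* Fold the high physical-address bits into a 32bit descriptor. */
    static u32 mtk_fold_pa(u64 paddr, u32 pte)
    {
            if (paddr & BIT_ULL(32)) pte |= MTK_PTE_PA_BIT32;
            if (paddr & BIT_ULL(33)) pte |= MTK_PTE_PA_BIT33;
            if (paddr & BIT_ULL(34)) pte |= MTK_PTE_PA_BIT34;
            return pte;
    }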
2021-02-01  iommu/io-pgtable-arm-v7s: Use ias to check the valid iova in unmap  (Yong Wu)
Use the ias for valid iova checking in arm_v7s_unmap. This is a preparatory patch for supporting a 34bit iova on MediaTek.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-10-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-02-01  iommu/mediatek: Use the common mtk-memory-port.h  (Yong Wu)
Use the common memory header (larb-port) in the source code.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Tomasz Figa <tfiga@chromium.org>
Link: https://lore.kernel.org/r/20210111111914.22211-9-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-29  iommu/ipmmu-vmsa: Allow SDHI devices  (Yoshihiro Shimoda)
Add SDHI devices into devices_allowlist.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Link: https://lore.kernel.org/r/1611838980-4940-3-git-send-email-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-29  iommu/ipmmu-vmsa: Refactor ipmmu_of_xlate()  (Yoshihiro Shimoda)
Refactor ipmmu_of_xlate() to improve readability/scalability.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Link: https://lore.kernel.org/r/1611838980-4940-2-git-send-email-yoshihiro.shimoda.uh@renesas.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-29  iommu/vt-d: Use INVALID response code instead of FAILURE  (Lu Baolu)
The VT-d IOMMU responds with RESPONSE_FAILURE to a page request in the following cases:

- when it gets a Page_Request with no PASID;
- when it gets a Page_Request with a PASID that is not in use for this device.

This is allowed by the spec, but the IOMMU driver doesn't support such cases today. When the device receives RESPONSE_FAILURE, it sends its state machine to the HALT state. If we then try to unload the driver, it hangs, since the device doesn't send any outbound transactions to the host while the driver is trying to clean things up; the only possible responses would be for invalidation requests. Use RESPONSE_INVALID instead for now, so that the device state machine doesn't enter the HALT state.
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20210126080730.2232859-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-29  iommu/vt-d: Clear PRQ overflow only when PRQ is empty  (Lu Baolu)
It is incorrect to always clear PRO when it is set without first checking whether the overflow condition has been cleared. The current code assumes that if an overflow condition occurs, it must have been cleared by the earlier loop. However, since the code runs in a threaded context, the overflow condition can occur even after setting the head to the tail under some extreme conditions. To be safe, we should re-read both head and tail when seeing a pending PRO and only clear PRO after all pending PRs have been handled.
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/linux-iommu/MWHPR11MB18862D2EA5BD432BF22D99A48CA09@MWHPR11MB1886.namprd11.prod.outlook.com/
Link: https://lore.kernel.org/r/20210126080730.2232859-2-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
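A sketch of the safer sequence, assuming the VT-d register names used elsewhere in the driver (DMAR_PRS_REG/DMA_PRS_PRO for overflow status, DMAR_PQH_REG/DMAR_PQT_REG for the queue head/tail, PRQ_RING_MASK as the ring mask):

    /* Only clear PRO once head has caught up with tail, i.e. all
     * pending page requests have been handled.
     */
    if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
            head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
            tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
            if (head == tail)
                    writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
    }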
2021-01-28  iommu/io-pgtable: Remove TLBI_ON_MAP quirk  (Robin Murphy)
IO_PGTABLE_QUIRK_TLBI_ON_MAP is now fully superseded by the core API's iotlb_sync_map callback.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/5abb80bba3a7c371d5ffb7e59c05586deddb9a91.1611764372.git.robin.murphy@arm.com
[will: Remove unused 'iop' local variable from arm_v7s_map()]
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-28  iommu/msm: Hook up iotlb_sync_map  (Robin Murphy)
The core API can now accommodate invalidate-on-map style behaviour in a single efficient call, so hook that up instead of having io-pgtable do it piecemeal.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/e95223a0abf129230a0bec6743f837075f0a2fcb.1611764372.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-28  iommu/amd: Adopt IO page table framework for AMD IOMMU v1 page table  (Suravee Suthikulpanit)
Switch to using the IO page table framework for the AMD IOMMU v1 page table.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-14-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Introduce iommu_v1_map_page and iommu_v1_unmap_page  (Suravee Suthikulpanit)
These implement map and unmap for the AMD IOMMU v1 pagetable, which will be used by the IO pagetable framework. Also clean up unused extern function declarations.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-13-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Introduce iommu_v1_iova_to_phys  (Suravee Suthikulpanit)
This implements iova_to_phys for the AMD IOMMU v1 pagetable, which will be used by the IO page table framework.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-12-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Refactor fetch_pte to use struct amd_io_pgtable  (Suravee Suthikulpanit)
This simplifies the fetch_pte function. There is no functional change.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-11-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Rename variables to be consistent with struct io_pgtable_ops  (Suravee Suthikulpanit)
There is no functional change.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-10-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Remove amd_iommu_domain_get_pgtable  (Suravee Suthikulpanit)
Since the IO page table root and mode parameters have been moved into struct amd_io_pgtable, this function is no longer needed. Therefore, remove it along with struct domain_pgtable.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-9-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Restructure code for freeing page table  (Suravee Suthikulpanit)
Consolidate the logic into a v1_free_pgtable helper function, which is called from the IO page table framework.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-8-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Move IO page table related functions  (Suravee Suthikulpanit)
Prepare to migrate to the IO page table framework. There is no functional change.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-7-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Declare functions as extern  (Suravee Suthikulpanit)
Move the declarations to a header file so that they can be included across multiple files. There is no functional change.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-6-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Convert to using amd_io_pgtable  (Suravee Suthikulpanit)
Make use of the new struct amd_io_pgtable in preparation to remove the struct domain_pgtable.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-5-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Move pt_root to struct amd_io_pgtable  (Suravee Suthikulpanit)
This better organizes the data structure, since pt_root contains IO page table related information.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-4-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/amd: Prepare for generic IO page table framework  (Suravee Suthikulpanit)
Add the initial hook-up code to implement the generic IO page table framework.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20201215073705.123786-3-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/vt-d: Do not use flush-queue when caching-mode is on  (Nadav Amit)
When an Intel IOMMU is virtualized, and a physical device is passed-through to the VM, changes of the virtual IOMMU need to be propagated to the physical IOMMU. The hypervisor therefore needs to monitor PTE mappings in the IOMMU page-tables. Intel specifications provide "caching-mode" capability that a virtual IOMMU uses to report that the IOMMU is virtualized and a TLB flush is needed after mapping to allow the hypervisor to propagate virtual IOMMU mappings to the physical IOMMU. To the best of my knowledge no real physical IOMMU reports "caching-mode" as turned on.

Synchronizing the virtual and the physical IOMMU tables is expensive if the hypervisor is unaware which PTEs have changed, as the hypervisor is required to walk all the virtualized tables and look for changes. Consequently, domain flushes are much more expensive than page-specific flushes on virtualized IOMMUs with passthrough devices. The kernel therefore exploited the "caching-mode" indication to avoid domain flushing and use page-specific flushing in virtualized environments. See commit 78d5f0f500e6 ("intel-iommu: Avoid global flushes with caching mode.")

This behavior changed after commit 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing"). Now, when batched TLB flushing is used (the default), full TLB domain flushes are performed frequently, requiring the hypervisor to perform expensive synchronization between the virtual TLB and the physical one.

Getting batched TLB flushes to use page-specific invalidations again in such circumstances is not easy, since the TLB invalidation scheme assumes that "full" domain TLB flushes are performed for scalability. Disable batched TLB flushes when caching-mode is on, as the performance benefit from using batched TLB invalidations is likely to be much smaller than the overhead of the virtual-to-physical IOMMU page-tables synchronization.

Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
Signed-off-by: Nadav Amit <namit@vmware.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: stable@vger.kernel.org
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20210127175317.1600473-1-namit@vmware.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
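The gist of the fix, sketched from the description above (cap_caching_mode() is the VT-d capability accessor; intel_iommu_strict is the driver's existing knob for strict, non-batched flushing; exact placement assumed):

    for_each_active_iommu(iommu, drhd) {
            /* A virtualized IOMMU advertises caching mode; batched
             * domain flushes are very expensive there, so fall back
             * to strict, page-selective invalidation.
             */
            if (cap_caching_mode(iommu->cap)) {
                    pr_warn_once("IOMMU batching disabled due to virtualization\n");
                    intel_iommu_strict = 1;
            }
    }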
2021-01-28  iommu: use the __iommu_attach_device() directly for deferred attach  (Lianbo Jiang)
Currently, domain attach is allowed to be deferred from the iommu driver to the device driver, and when the iommu initializes, the devices on the bus are scanned and default groups allocated. Because of this, several devices can be added to the same group, as below:

[ 3.859417] pci 0000:01:00.0: Adding to iommu group 16
[ 3.864572] pci 0000:01:00.1: Adding to iommu group 16
[ 3.869738] pci 0000:02:00.0: Adding to iommu group 17
[ 3.874892] pci 0000:02:00.1: Adding to iommu group 17

But when attaching these devices, iommu_attach_device() does not allow a group to have more than one device and returns an error, which conflicts with deferred attaching. Unfortunately, on my system there are two devices in the same group, for example:

[ 9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
[ 9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
...
[ 10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
[ 10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1

This in turn makes the tg3 driver fail when it calls dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():

[ 9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
[ 9.754085] tg3: probe of 0000:01:00.0 failed with error -12
[ 9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
[ 10.043053] tg3: probe of 0000:01:00.1 failed with error -12
[ 10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
[ 10.334070] tg3: probe of 0000:02:00.0 failed with error -12
[ 10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
[ 10.622629] tg3: probe of 0000:02:00.1 failed with error -12

Similar situations also occur with other drivers, such as bnxt_en, and can be reproduced easily in a kdump kernel when SME is active.

Move the handling currently in iommu_dma_deferred_attach() into the iommu core code so that it can call __iommu_attach_device() directly instead of iommu_attach_device(); the external interface iommu_attach_device() is not suitable for handling this situation.
Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210126115337.20068-3-lijiang@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
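A sketch of the core-side helper this describes (the body is an assumption from the description; __iommu_attach_device is the internal attach that bypasses the external interface's one-device-per-group restriction):

    int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
    {
            const struct iommu_ops *ops = domain->ops;

            /* Attach directly, skipping the group-size check that
             * iommu_attach_device() would apply.
             */
            if (unlikely(ops->is_attach_deferred &&
                         ops->is_attach_deferred(domain, dev)))
                    return __iommu_attach_device(domain, dev);

            return 0;
    }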
2021-01-28  dma-iommu: use static-key to minimize the impact in the fast-path  (Lianbo Jiang)
Move the is_kdump_kernel() check out of iommu_dma_deferred_attach() and into iommu_dma_init(), and use a static key in the fast path to minimize the impact in the normal case.
Co-developed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20210126115337.20068-2-lijiang@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
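A sketch of the static-key pattern described (the key name is assumed; DEFINE_STATIC_KEY_FALSE and static_branch_unlikely are the standard jump-label helpers):

    static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);

    static int __init iommu_dma_init(void)
    {
            /* Decide once at init; the fast path then costs only a
             * patched no-op branch on non-kdump kernels.
             */
            if (is_kdump_kernel())
                    static_branch_enable(&iommu_deferred_attach_enabled);
            return iova_cache_get();
    }

    /* In the DMA fast path: */
    if (static_branch_unlikely(&iommu_deferred_attach_enabled)) {
            ret = iommu_deferred_attach(dev, domain);
            if (ret)
                    return ret;
    }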
2021-01-28  iommu/vt-d: Correctly check addr alignment in qi_flush_dev_iotlb_pasid()  (Lu Baolu)
An incorrect address mask is being used in qi_flush_dev_iotlb_pasid() to check the address alignment. This leads to a lot of spurious kernel warnings:

[ 485.837093] DMAR: Invalidate non-aligned address 7f76f47f9000, order 0
[ 485.837098] DMAR: Invalidate non-aligned address 7f76f47f9000, order 0
[ 492.494145] qi_flush_dev_iotlb_pasid: 5734 callbacks suppressed
[ 492.494147] DMAR: Invalidate non-aligned address 7f7728800000, order 11
[ 492.508965] DMAR: Invalidate non-aligned address 7f7728800000, order 11

Fix it by checking the alignment the right way.
Fixes: 288d08e780088 ("iommu/vt-d: Handle non-page aligned address")
Reported-and-tested-by: Guo Kaijie <Kaijie.Guo@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Liu Yi L <yi.l.liu@intel.com>
Link: https://lore.kernel.org/r/20210119043500.1539596-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
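A sketch of a corrected check, assuming an IS_ALIGNED-style form: an order-N request must be aligned to VTD_PAGE_SIZE << N, so only the bits below (VTD_PAGE_SHIFT + N) must be zero; a mask one bit too wide would flag aligned addresses such as ...9000 at order 0, matching the log above:

    /* Warn only for genuinely misaligned requests. */
    if (!IS_ALIGNED(addr, VTD_PAGE_SIZE << size_order))
            pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n",
                                addr, size_order);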
2021-01-28  iommu/amd: Use IVHD EFR for early initialization of IOMMU features  (Suravee Suthikulpanit)
The IOMMU Extended Feature Register (EFR) is used to communicate the supported features of each IOMMU to the IOMMU driver. It is normally read from the PCI MMIO register at offset 0x30 and consumed via the iommu_feature() helper function. However, there are certain scenarios where the information is needed prior to PCI initialization, and iommu_feature() has been used prematurely without warning, causing incorrect initialization of the IOMMU. This is the case for commit 6d39bdee238f ("iommu/amd: Enforce 4k mapping for certain IOMMU data structures"). Since the EFR is also present in the IVHD header, which is available to the driver prior to PCI initialization, default to using the IVHD EFR instead.
Fixes: 6d39bdee238f ("iommu/amd: Enforce 4k mapping for certain IOMMU data structures")
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Tested-by: Brijesh Singh <brijesh.singh@amd.com>
Reviewed-by: Robert Richter <rrichter@amd.com>
Link: https://lore.kernel.org/r/20210120135002.2682-1-suravee.suthikulpanit@amd.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/vt-d: Preset Access/Dirty bits for IOVA over FL  (Lu Baolu)
The Access/Dirty bits in a first-level page table entry are set whenever the entry is used for address translation or a write permission is successfully translated. This is always true when using the first-level page table for kernel IOVA. Instead of wasting hardware cycles updating these bits, set them up front.
Suggested-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20210115004202.953965-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
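A sketch of the idea, with flag and helper names in the style of the VT-d driver (exact placement assumed):

    /* First-level kernel-IOVA mappings are always accessed, and
     * writable ones always dirtied, so preset A/D and spare the HW
     * the atomic updates during translation.
     */
    if (domain_use_first_level(domain)) {
            attr |= DMA_FL_PTE_ACCESS;
            if (prot & DMA_PTE_WRITE)
                    attr |= DMA_FL_PTE_DIRTY;
    }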
2021-01-28  iommu/vt-d: Add qi_submit trace event  (Lu Baolu)
This adds a new trace event to track the submission of requests to the invalidation queue. The event provides information such as:

- IOMMU name
- Invalidation type
- Descriptor raw data

A sample output:

| qi_submit: iotlb_inv dmar1: 0x100e2 0x0 0x0 0x0
| qi_submit: dev_tlb_inv dmar1: 0x1000000003 0x7ffffffffffff001 0x0 0x0
| qi_submit: iotlb_inv dmar2: 0x800f2 0xf9a00005 0x0 0x0

This will be helpful for debugging queued invalidation.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20210114090400.736104-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-28  iommu/vt-d: Consolidate duplicate cache invalidation code  (Lu Baolu)
The pasid-based IOTLB and devTLB invalidation code is duplicated in several places. Consolidate it by using common helpers.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20210114085021.717041-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-01-27  iommu/mediatek: Remove the tlb-ops for v7s  (Yong Wu)
We now use the tlb operations from the iommu framework, so the tlb operations for v7s can be removed. Correspondingly, switch the "cookie" parameter to the internal structure.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-8-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  iommu/mediatek: Gather iova in iommu_unmap to achieve tlb sync once  (Yong Wu)
In the current iommu_unmap, the code is:

    iommu_iotlb_gather_init(&iotlb_gather);
    ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
    iommu_iotlb_sync(domain, &iotlb_gather);

We could gather the whole iova range in __iommu_unmap and then do the tlb synchronization in iommu_iotlb_sync. This patch implements that: gather the range in mtk_iommu_unmap, then let iommu_iotlb_sync perform the tlb synchronization for the gathered iova range. We don't call iommu_iotlb_gather_add_page, since our tlb synchronization can be done regardless of granule size. This way gather->start can never be ULONG_MAX, so remove that check. This patch aims to do the tlb synchronization *once* per iommu_unmap.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-7-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
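A sketch of the driver-side gathering (per the description; to_mtk_domain and dom->iop follow the driver's existing naming, details assumed):

    static size_t mtk_iommu_unmap(struct iommu_domain *domain,
                                  unsigned long iova, size_t size,
                                  struct iommu_iotlb_gather *gather)
    {
            struct mtk_iommu_domain *dom = to_mtk_domain(domain);
            unsigned long end = iova + size - 1;

            /* Widen the gathered range; granule size is irrelevant to
             * our tlb sync, so no iommu_iotlb_gather_add_page() here.
             */
            if (gather->start > iova)
                    gather->start = iova;
            if (gather->end < end)
                    gather->end = end;
            return dom->iop->unmap(dom->iop, iova, size, gather);
    }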
2021-01-27  iommu: Switch gather->end to the inclusive end  (Yong Wu)
Currently gather->end is an "unsigned long", which may overflow on arch32 in the corner case 0xfff00000 + 0x100000 (iova + size). Although this doesn't affect the size (end - start), it affects the check "gather->end < end". This patch changes "end" to the real, inclusive end address (end = start + size - 1) and correspondingly updates the length to "end - start + 1".
Fixes: a7d20dc19d9e ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes")
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
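A sketch mirroring what the gather bookkeeping must look like after this change (inclusive end, merge checks adjusted by one; exact code assumed):

    unsigned long start = iova, end = start + size - 1;

    /* end is now inclusive, so 0xfff00000 + 0x100000 yields
     * end == 0xffffffff instead of overflowing to 0 on arch32.
     */
    if (gather->pgsize != size ||
        end + 1 < gather->start || start > gather->end + 1) {
            if (gather->pgsize)
                    iommu_iotlb_sync(domain, gather);
            gather->pgsize = size;
    }

    if (gather->end < end)
            gather->end = end;
    if (gather->start > start)
            gather->start = start;

    /* ...and the flush length becomes end - start + 1. */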
2021-01-27  iommu/mediatek: Add iotlb_sync_map to sync whole the iova range  (Yong Wu)
Remove IO_PGTABLE_QUIRK_TLBI_ON_MAP to avoid a tlb sync for each small chunk of memory; use the new iotlb_sync_map to tlb-sync once for the whole iova range of an iommu_map.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-4-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-01-27  iommu: Add iova and size as parameters in iotlb_sync_map  (Yong Wu)
iotlb_sync_map allows IOMMU drivers to perform a tlb sync after completing the whole mapping. This patch adds iova and size as parameters, so the IOMMU driver can flush the tlb for the whole range once mapping is done, improving performance.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-3-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
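The updated callback, per the description (shown as the relevant member of struct iommu_ops; surrounding members elided):

    /* Flush the tlb once for a freshly mapped [iova, iova + size) range. */
    void (*iotlb_sync_map)(struct iommu_domain *domain,
                           unsigned long iova, size_t size);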
2021-01-27  iommu: Move iotlb_sync_map out from __iommu_map  (Yong Wu)
At the end of __iommu_map, iotlb_sync_map is always called. This patch moves iotlb_sync_map out of __iommu_map, since it is unnecessary to call it for each sg segment, especially as iotlb_sync_map currently flushes the whole tlb. Add a little helper, _iommu_map, for this.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107122909.16317-2-yong.wu@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
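A sketch of the helper described above (at this point in the series iotlb_sync_map still takes only the domain; the range parameters arrive in a later patch of the set):

    static int _iommu_map(struct iommu_domain *domain, unsigned long iova,
                          phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
    {
            const struct iommu_ops *ops = domain->ops;
            int ret;

            ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
            if (ret == 0 && ops->iotlb_sync_map)
                    ops->iotlb_sync_map(domain);

            return ret;
    }

    /* iommu_map() and iommu_map_sg() then call _iommu_map() once,
     * rather than syncing after every sg segment inside __iommu_map().
     */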