path: root/drivers/iommu
Age  Commit message  Author
2021-12-20  iommu/iova: Fix race between FQ timeout and teardown  (Xiongfeng Wang)
It turns out to be possible for hotplugging out a device to reach the stage of tearing down the device's group and default domain before the domain's flush queue has drained naturally. At this point, it is then possible for the timeout to expire just before the del_timer() call in free_iova_flush_queue(), such that we then proceed to free the FQ resources while fq_flush_timeout() is still accessing them on another CPU. Crashes due to this have been observed in the wild while removing NVMe devices. Close the race window by using del_timer_sync() to safely wait for any active timeout handler to finish before we start to free things. We already avoid any locking in free_iova_flush_queue() since the FQ is supposed to be inactive anyway, so the potential deadlock scenario does not apply. Fixes: 9a005a800ae8 ("iommu/iova: Add flush timer") Reviewed-by: John Garry <john.garry@huawei.com> Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> [ rm: rewrite commit message ] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/0a365e5b07f14b7344677ad6a9a734966a8422ce.1639753638.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
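A minimal sketch of the teardown ordering described above, assuming a hypothetical fq_domain container rather than the real IOVA flush-queue types:

#include <linux/timer.h>
#include <linux/slab.h>

/* Hypothetical flush-queue container, for illustration only. */
struct fq_domain {
	struct timer_list fq_timer;
	void *fq_entries;
};

static void fq_domain_free(struct fq_domain *fqd)
{
	/*
	 * del_timer() only removes a pending timer; a handler already
	 * running on another CPU could still be touching fq_entries.
	 * del_timer_sync() waits for such a handler to finish first,
	 * which closes the race described above.
	 */
	del_timer_sync(&fqd->fq_timer);

	kfree(fqd->fq_entries);
	kfree(fqd);
}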
2021-12-20  iommu/amd: Fix typo in *glues … together* in comment  (Paul Menzel)
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de> Link: https://lore.kernel.org/r/20211217134916.43698-1-pmenzel@molgen.mpg.de Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/vt-d: Remove unused dma_to_mm_pfn function  (Maíra Canal)
Remove the dma_to_mm_pfn() function, which is not used in the codebase. This was pointed out by clang with the following warning: drivers/iommu/intel/iommu.c:136:29: warning: unused function 'dma_to_mm_pfn' [-Wunused-function] static inline unsigned long dma_to_mm_pfn(unsigned long dma_pfn) ^ https://lore.kernel.org/r/YYhY7GqlrcTZlzuA@fedora Signed-off-by: Maíra Canal <maira.canal@usp.br> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211217083817.1745419-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/vt-d: Drop duplicate check in dma_pte_free_pagetable()  (Kefeng Wang)
The same BUG_ON check already exists in dma_pte_clear_range(), so kill the duplicate check. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Link: https://lore.kernel.org/r/20211025032307.182974-1-wangkefeng.wang@huawei.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211217083817.1745419-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/vt-d: Use bitmap_zalloc() when applicable  (Christophe JAILLET)
'iommu->domain_ids' is a bitmap. So use 'bitmap_zalloc()' to simplify code and improve the semantic. Also change the corresponding 'kfree()' into 'bitmap_free()' to keep consistency. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/cb7a3e0a8d522447a06298a4f244c3df072f948b.1635018498.git.christophe.jaillet@wanadoo.fr Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211217083817.1745419-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
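A rough sketch of the conversion described above; the helper names are illustrative, not the driver's real functions:

#include <linux/bitmap.h>
#include <linux/gfp.h>

static unsigned long *alloc_domain_ids(unsigned int ndomains)
{
	/* One bit per domain ID, zero-initialised; no BITS_TO_LONGS() math. */
	return bitmap_zalloc(ndomains, GFP_KERNEL);
}

static void free_domain_ids(unsigned long *ids)
{
	/* Pairs with bitmap_zalloc(); replaces the former kfree(). */
	bitmap_free(ids);
}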
2021-12-17  iommu/amd: Remove useless irq affinity notifier  (Maxim Levitsky)
The iommu->intcapxt_notify field is no longer used since the switch to a separate interrupt domain was done. Fixes: d1adcfbb520c ("iommu/amd: Fix IOMMU interrupt generation in X2APIC mode") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20211123161038.48009-6-mlevitsk@redhat.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/amd: X2apic mode: mask/unmask interrupts on suspend/resume  (Maxim Levitsky)
Use IRQCHIP_MASK_ON_SUSPEND to make the core irq code mask the iommu interrupt on suspend and unmask it on resume. Since the unmask function now updates the INTX settings, this restores them on resume from s3/s4. Because IRQCHIP_MASK_ON_SUSPEND is only effective for interrupts which are not wakeup sources, remove the IRQCHIP_SKIP_SET_WAKE flag and instead implement a dummy .irq_set_wake which doesn't allow the interrupt to become a wakeup source. Fixes: 66929812955bb ("iommu/amd: Add support for X2APIC IOMMU interrupts") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20211123161038.48009-5-mlevitsk@redhat.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/amd: X2apic mode: setup the INTX registers on mask/unmask  (Maxim Levitsky)
This is more logically correct and will also allow us to use the mask/unmask logic to restore the INTX settings after resume from s3/s4. Fixes: 66929812955bb ("iommu/amd: Add support for X2APIC IOMMU interrupts") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20211123161038.48009-4-mlevitsk@redhat.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/amd: X2apic mode: re-enable after resume  (Maxim Levitsky)
Otherwise X2APIC mode is guaranteed not to work after resume. Fixes: 66929812955bb ("iommu/amd: Add support for X2APIC IOMMU interrupts") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20211123161038.48009-3-mlevitsk@redhat.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/amd: Restore GA log/tail pointer on host resume  (Maxim Levitsky)
This will give IOMMU GA log a chance to work after resume from s3/s4. Fixes: 8bda0cfbdc1a6 ("iommu/amd: Detect and initialize guest vAPIC log") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20211123161038.48009-2-mlevitsk@redhat.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/iova: Move fast alloc size roundup into alloc_iova_fast()  (John Garry via iommu)
It really is a property of the IOVA rcache code that we need to allocate a power-of-2 size, so relocate the size roundup into alloc_iova_fast(), rather than the callsites. Signed-off-by: John Garry <john.garry@huawei.com> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Xie Yongji <xieyongji@bytedance.com> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/1638875846-23993-1-git-send-email-john.garry@huawei.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
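A sketch of the relocated roundup, assuming a power-of-two rcache size limit; IOVA_RCACHE_MAX_ORDER is a made-up stand-in for the real constant in iova.c:

#include <linux/log2.h>

#define IOVA_RCACHE_MAX_ORDER	6	/* illustrative limit */

static unsigned long iova_fast_round_size(unsigned long size)
{
	/*
	 * The rcaches only hold power-of-two sizes, so round up once here
	 * in the fast-alloc path instead of in every caller.
	 */
	if (size < (1UL << (IOVA_RCACHE_MAX_ORDER - 1)))
		size = roundup_pow_of_two(size);

	return size;
}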
2021-12-17  iommu/virtio: Fix typo in a comment  (Xiang wangx)
The word `as' is doubled in a comment, so remove the repeated instance. Signed-off-by: Xiang wangx <wangxiang@cdjrlc.com> Link: https://lore.kernel.org/r/20211216083302.18049-1-wangxiang@cdjrlc.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-17  iommu/vt-d: Use correctly sized arguments for bit field  (Kees Cook)
The find.h APIs are designed to be used only on unsigned long arguments. This can technically result in an over-read, but it is harmless in this case. Regardless, fix it to avoid the warning seen under -Warray-bounds, which we'd like to enable globally: In file included from ./include/linux/bitmap.h:9, from drivers/iommu/intel/iommu.c:17: drivers/iommu/intel/iommu.c: In function 'domain_context_mapping_one': ./include/linux/find.h:119:37: warning: array subscript 'long unsigned int[0]' is partly outside array bounds of 'int[1]' [-Warray-bounds] 119 | unsigned long val = *addr & GENMASK(size - 1, 0); | ^~~~~ drivers/iommu/intel/iommu.c:2115:18: note: while referencing 'max_pde' 2115 | int pds, max_pde; | ^~~~~~~ Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Yury Norov <yury.norov@gmail.com> Link: https://lore.kernel.org/r/20211215232432.2069605-1-keescook@chromium.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
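A small illustration of the pattern behind the fix, with hypothetical names; the point is only that the storage handed to the find.h helpers is an unsigned long, not an int:

#include <linux/bitops.h>

static unsigned long first_set_bit_example(void)
{
	unsigned long max_pde_bits = 0x30;	/* was: int max_pde */

	/* find_first_bit() reads whole unsigned long words from its argument. */
	return find_first_bit(&max_pde_bits, BITS_PER_LONG);
}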
2021-12-16  iommu/arm-smmu-v3: Use msi_get_virq()  (Thomas Gleixner)
Let the core code fiddle with the MSI descriptor retrieval. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20211210221815.089008198@linutronix.de
2021-12-16  platform-msi: Use msi_desc::msi_index  (Thomas Gleixner)
Use the common msi_index member and get rid of the pointless wrapper struct. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nishanth Menon <nm@ti.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20211210221814.413638645@linutronix.de
2021-12-16  device: Move MSI related data into a struct  (Thomas Gleixner)
The only unconditional part of MSI data in struct device is the irqdomain pointer. Everything else can be allocated on demand. Create a data structure and move the irqdomain pointer into it. The other MSI specific parts are going to be removed from struct device in later steps. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Nishanth Menon <nm@ti.com> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20211210221813.617178827@linutronix.de
2021-12-15  Revert "iommu/arm-smmu-v3: Decrease the queue size of evtq and priq"  (Zhou Wang)
Commit f115f3c0d5d8 ("iommu/arm-smmu-v3: Decrease the queue size of evtq and priq") decreases the size of the evtq and priq, which may lead to the evtq/priq filling up with fault events. For example, HiSilicon ZIP/SEC/HPRE have a maximum of 1024 queues in one device, and every queue could be bound to one process and trigger a fault event. So let's revert f115f3c0d5d8. In fact, if an implementation of SMMU really does not need such long evtq and priq, the values of IDR1_EVTQS and IDR1_PRIQS can be set to proper ones. Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com> Acked-by: Zhen Lei <thunder.leizhen@huawei.com> Link: https://lore.kernel.org/r/1638858768-9971-1-git-send-email-wangzhou1@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2021-12-14  iommu/io-pgtable-arm-v7s: Add error handle for page table allocation failure  (Yunfei Wang)
In the __arm_v7s_alloc_table() function, the iommu calls kmem_cache_alloc() to allocate a page table, and this allocation may fail. When kmem_cache_alloc() fails to allocate the table, the subsequent virt_to_phys() call returns an unexpected phys address, we goto out_free, and calling kmem_cache_free() on that table triggers a kernel exception. __get_free_pages() and free_pages() have a similar problem, so add error handling for page table allocation failure. Fixes: 29859aeb8a6e ("iommu/io-pgtable-arm-v7s: Abort allocation when table address overflows the PTE") Signed-off-by: Yunfei Wang <yf.wang@mediatek.com> Cc: <stable@vger.kernel.org> # 5.10.* Acked-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20211207113315.29109-1-yf.wang@mediatek.com Signed-off-by: Will Deacon <will@kernel.org>
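A hedged sketch of the shape of the fix, using made-up helper names; presumably the real patch checks the allocation before virt_to_phys() is ever called:

#include <linux/slab.h>
#include <linux/io.h>

static void *alloc_pgtable(struct kmem_cache *cache, gfp_t gfp,
			   phys_addr_t *phys_out)
{
	void *table = kmem_cache_alloc(cache, gfp);

	/*
	 * Bail out before virt_to_phys() can produce a bogus address and
	 * before a later kmem_cache_free() can be handed a NULL table.
	 */
	if (!table)
		return NULL;

	*phys_out = virt_to_phys(table);
	return table;
}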
2021-12-14  iommu/arm-smmu-v3: Constify arm_smmu_mmu_notifier_ops  (Rikard Falkeborn)
The only usage of arm_smmu_mmu_notifier_ops is to assign its address to the ops field in the mmu_notifier struct, which is a pointer to const struct mmu_notifier_ops. Make it const to allow the compiler to put it in read-only memory. Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20211204223301.100649-1-rikard.falkeborn@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
2021-12-14  iommu: arm-smmu-impl: Add SM8450 qcom iommu implementation  (Vinod Koul)
Add the SM8450 qcom iommu implementation to the qcom_smmu_impl_of_match table, which brings in iommu support for the SM8450 SoC. Signed-off-by: Vinod Koul <vkoul@kernel.org> Tested-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Acked-by: Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20211201073943.3969549-3-vkoul@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-12-14  iommu/arm-smmu-qcom: Fix TTBR0 read  (Rob Clark)
It is a 64-bit register, so let's not lose the upper bits. Fixes: ab5df7b953d8 ("iommu/arm-smmu-qcom: Add an adreno-smmu-priv callback to get pagefault info") Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Link: https://lore.kernel.org/r/20211108171724.470973-1-robdclark@gmail.com Signed-off-by: Will Deacon <will@kernel.org>
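For illustration, a sketch of reading a 64-bit register without truncation; the accessor choice here (readq, or a lo/hi pair on buses that cannot do a single 64-bit access) is an assumption, not necessarily what the arm-smmu-qcom patch uses:

#include <linux/types.h>
#include <linux/io.h>

static u64 read_ttbr0_sketch(void __iomem *base, unsigned long offset)
{
	/* A 32-bit readl() here would silently drop the upper address bits. */
	return readq(base + offset);
}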
2021-12-06  iommu/virtio: Support identity-mapped domains  (Jean-Philippe Brucker)
Support identity domains for devices that do not offer the VIRTIO_IOMMU_F_BYPASS_CONFIG feature, by creating 1:1 mappings between the virtual and physical address space. Identity domains created this way still perform noticeably better than DMA domains, because they don't have the overhead of setting up and tearing down mappings at runtime. The performance difference between this and bypass is minimal in comparison. It does not matter that the physical addresses in the identity mappings do not all correspond to memory. By enabling passthrough we are trusting the device driver and the device itself to only perform DMA to suitable locations. In some cases it may even be desirable to perform DMA to MMIO regions. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20211201173323.1045819-6-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-06  iommu/virtio: Pass end address to viommu_add_mapping()  (Jean-Philippe Brucker)
To support identity mappings, the virtio-iommu driver must be able to represent full 64-bit ranges internally. Pass (start, end) instead of (start, size) to viommu_add/del_mapping(). Clean comments. The one about the returned size was never true: when sweeping the whole address space the returned size will most certainly be smaller than 2^64. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20211201173323.1045819-5-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
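A short sketch of why an inclusive (start, end) pair can represent the whole 64-bit space while (start, size) cannot, since the size 2^64 does not fit in a u64; the struct and helper are illustrative only:

#include <linux/types.h>

struct viommu_mapping_sketch {
	u64 start;
	u64 end;	/* inclusive last address */
};

static bool covers_whole_space(const struct viommu_mapping_sketch *m)
{
	/* A (start, size) pair would need size == 2^64 here, which overflows. */
	return m->start == 0 && m->end == ~0ULL;
}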
2021-12-06  iommu/virtio: Sort reserved regions  (Jean-Philippe Brucker)
To ease identity mapping support, keep the list of reserved regions sorted. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20211201173323.1045819-4-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-06  iommu/virtio: Support bypass domains  (Jean-Philippe Brucker)
The VIRTIO_IOMMU_F_BYPASS_CONFIG feature adds a new flag to the ATTACH request, that creates a bypass domain. Use it to enable identity domains. When VIRTIO_IOMMU_F_BYPASS_CONFIG is not supported by the device, we currently fail attaching to an identity domain. Future patches will instead create identity mappings in this case. Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20211201173323.1045819-3-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-06  s390/pci: use physical addresses in DMA tables  (Niklas Schnelle)
The entries in the DMA translation tables for our IOMMU must specify physical addresses of either the next level table or the final page to be mapped for DMA. Currently however the code simply passes the virtual addresses of both. On the other hand we still need to walk the tables via their virtual addresses so we need to do a virt_to_phys() when setting the entries and a phys_to_virt() when getting them. Similarly when passing the I/O translation anchor to the hardware we must also specify its physical address. As the DMA and IOMMU APIs we are implementing already use the correct phys_addr_t type for the address to be mapped let's also thread this through instead of treating it as just an unsigned long. Note: this currently doesn't fix a real bug, since virtual addresses are identical to physical ones. Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
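An illustrative sketch of the convention described above (flag bits omitted); the helper names are hypothetical, not the real s390 code:

#include <linux/io.h>

static void set_table_entry(unsigned long *entry, void *next_table)
{
	/* The hardware walker needs a physical address in the entry. */
	*entry = virt_to_phys(next_table);
}

static unsigned long *get_next_table(unsigned long entry)
{
	/* The CPU keeps walking the tables through virtual pointers. */
	return phys_to_virt(entry);
}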
2021-12-06  iommu/io-pgtable-arm: Fix table descriptor paddr formatting  (Hector Martin)
Table descriptors were being installed without properly formatting the address using paddr_to_iopte, which does not match up with the iopte_deref in __arm_lpae_map. This is incorrect for the LPAE pte format, as it does not handle the high bits properly. This was found on Apple T6000 DARTs, which require a new pte format (different shift); adding support for that to paddr_to_iopte/iopte_to_paddr caused it to break badly, as even <48-bit addresses would end up incorrect in that case. Fixes: 6c89928ff7a0 ("iommu/io-pgtable-arm: Support 52-bit physical address") Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Hector Martin <marcan@marcan.st> Link: https://lore.kernel.org/r/20211120031343.88034-1-marcan@marcan.st Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-12-06  iommu: Extend mutex lock scope in iommu_probe_device()  (Lu Baolu)
Extend the scope of holding group->mutex so that it can cover the default domain check/attachment and direct mappings of reserved regions. Cc: Ashish Mhetre <amhetre@nvidia.com> Fixes: 211ff31b3d33b ("iommu: Fix race condition during default domain allocation") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211108061349.1985579-1-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-11-26  iommu/vt-d: Fix unmap_pages support  (Alex Williamson)
When supporting only the .map and .unmap callbacks of iommu_ops, the IOMMU driver can make assumptions about the size and alignment used for mappings based on the driver provided pgsize_bitmap. VT-d previously used essentially PAGE_MASK for this bitmap as any power of two mapping was acceptably filled by native page sizes. However, with the .map_pages and .unmap_pages interface we're now getting page-size and count arguments. If we simply combine these as (page-size * count) and make use of the previous map/unmap functions internally, any size and alignment assumptions are very different. As an example, a given vfio device assignment VM will often create a 4MB mapping at IOVA pfn [0x3fe00 - 0x401ff]. On a system that does not support IOMMU super pages, the unmap_pages interface will ask to unmap 1024 4KB pages at the base IOVA. dma_pte_clear_level() will recurse down to level 2 of the page table where the first half of the pfn range exactly matches the entire pte level. We clear the pte, increment the pfn by the level size, but (oops) the next pte is on a new page, so we exit the loop and pop back up a level. When we then update the pfn based on that higher level, we seem to assume that the previous pfn value was at the start of the level. In this case the level size is 256K pfns, which we add to the base pfn and get a result of 0x7fe00, which is clearly greater than 0x401ff, so we're done. Meanwhile we never cleared the ptes for the remainder of the range. When the VM remaps this range, we're overwriting valid ptes and the VT-d driver complains loudly, as reported by the user report linked below. The fix for this seems relatively simple: if each iteration of the loop in dma_pte_clear_level() is assumed to clear to the end of the level pte page, then our next pfn should be calculated from level_pfn rather than our working pfn. Fixes: 3f34f1259776 ("iommu/vt-d: Implement map/unmap_pages() iommu_ops callback") Reported-by: Ajay Garg <ajaygargnsit@gmail.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Link: https://lore.kernel.org/all/20211002124012.18186-1-ajaygargnsit@gmail.com/ Link: https://lore.kernel.org/r/163659074748.1617923.12716161410774184024.stgit@omen Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211126135556.397932-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
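A tiny sketch of the arithmetic behind the fix, with illustrative names; the real change is the pfn advance in dma_pte_clear_level():

static unsigned long next_clear_pfn(unsigned long level_pfn,
				    unsigned long level_size)
{
	/*
	 * Was effectively "pfn += level_size", which skips part of the
	 * range whenever the working pfn is not at the start of the level.
	 */
	return level_pfn + level_size;
}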
2021-11-26  iommu/vt-d: Fix an unbalanced rcu_read_lock/rcu_read_unlock()  (Christophe JAILLET)
If we return -EOPNOTSUPP, the rcu lock remains locked. This is wrong. Go through the end of the function instead. This way, the missing 'rcu_read_unlock()' is called. Fixes: 7afd7f6aa21a ("iommu/vt-d: Check FL and SL capability sanity in scalable mode") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/40cc077ca5f543614eab2a10e84d29dd190273f6.1636217517.git.christophe.jaillet@wanadoo.fr Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211126135556.397932-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
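A sketch of the control-flow fix, with a hypothetical function standing in for the real capability check:

#include <linux/rcupdate.h>
#include <linux/errno.h>
#include <linux/types.h>

static int check_caps_sketch(bool supported)
{
	int ret = 0;

	rcu_read_lock();

	if (!supported) {
		ret = -EOPNOTSUPP;
		goto out;	/* instead of returning with the lock held */
	}

	/* ... iterate over the RCU-protected DMAR units ... */

out:
	rcu_read_unlock();
	return ret;
}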
2021-11-26  iommu/rockchip: Fix PAGE_DESC_HI_MASKs for RK3568  (Alex Bee)
With the submission of the iommu driver for RK3568 a subtle bug was introduced: PAGE_DESC_HI_MASK1 and PAGE_DESC_HI_MASK2 have to be the other way around - that leads to random errors, especially when addresses beyond 32 bits are used. Fix it. Fixes: c55356c534aa ("iommu: rockchip: Add support for iommu v2") Signed-off-by: Alex Bee <knaerzche@gmail.com> Tested-by: Peter Geis <pgwipeout@gmail.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Dan Johansen <strit@manjaro.org> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@collabora.com> Link: https://lore.kernel.org/r/20211124021325.858139-1-knaerzche@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-11-26  iommu/amd: Clarify AMD IOMMUv2 initialization messages  (Joerg Roedel)
The messages printed on the initialization of the AMD IOMMUv2 driver have caused some confusion in the past. Clarify the messages to lower the confusion in the future. Cc: stable@vger.kernel.org Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20211123105507.7654-3-joro@8bytes.org
2021-11-06  Merge tag 'pci-v5.16-changes' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci Pull pci updates from Bjorn Helgaas: "Enumeration: - Conserve IRQs by setting up portdrv IRQs only when there are users (Jan Kiszka) - Rework and simplify _OSC negotiation for control of PCIe features (Joerg Roedel) - Remove struct pci_dev.driver pointer since it's redundant with the struct device.driver pointer (Uwe Kleine-König) Resource management: - Coalesce contiguous host bridge apertures from _CRS to accommodate BARs that cover more than one aperture (Kai-Heng Feng) Sysfs: - Check CAP_SYS_ADMIN before parsing user input (Krzysztof Wilczyński) - Return -EINVAL consistently from "store" functions (Krzysztof Wilczyński) - Use sysfs_emit() in endpoint "show" functions to avoid buffer overruns (Kunihiko Hayashi) PCIe native device hotplug: - Ignore Link Down/Up caused by resets during error recovery so endpoint drivers can remain bound to the device (Lukas Wunner) Virtualization: - Avoid bus resets on Atheros QCA6174, where they hang the device (Ingmar Klein) - Work around Pericom PI7C9X2G switch packet drop erratum by using store and forward mode instead of cut-through (Nathan Rossi) - Avoid trying to enable AtomicOps on VFs; the PF setting applies to all VFs (Selvin Xavier) MSI: - Document that /sys/bus/pci/devices/.../irq contains the legacy INTx interrupt or the IRQ of the first MSI (not MSI-X) vector (Barry Song) VPD: - Add pci_read_vpd_any() and pci_write_vpd_any() to access anywhere in the possible VPD space; use these to simplify the cxgb3 driver (Heiner Kallweit) Peer-to-peer DMA: - Add (not subtract) the bus offset when calculating DMA address (Wang Lu) ASPM: - Re-enable LTR at Downstream Ports so they don't report Unsupported Requests when reset or hot-added devices send LTR messages (Mingchuang Qiao) Apple PCIe controller driver: - Add driver for Apple M1 PCIe controller (Alyssa Rosenzweig, Marc Zyngier) Cadence PCIe controller driver: - Return success when probe succeeds instead of falling into error path (Li Chen) HiSilicon Kirin PCIe controller driver: - Reorganize PHY logic and add support for external PHY drivers (Mauro Carvalho Chehab) - Support PERST# GPIOs for HiKey970 external PEX 8606 bridge (Mauro Carvalho Chehab) - Add Kirin 970 support (Mauro Carvalho Chehab) - Make driver removable (Mauro Carvalho Chehab) Intel VMD host bridge driver: - If IOMMU supports interrupt remapping, leave VMD MSI-X remapping enabled (Adrian Huang) - Number each controller so we can tell them apart in /proc/interrupts (Chunguang Xu) - Avoid building on UML because VMD depends on x86 bare metal APIs (Johannes Berg) Marvell Aardvark PCIe controller driver: - Define macros for PCI_EXP_DEVCTL_PAYLOAD_* (Pali Rohár) - Set Max Payload Size to 512 bytes per Marvell spec (Pali Rohár) - Downgrade PIO Response Status messages to debug level (Marek Behún) - Preserve CRS SV (Config Request Retry Software Visibility) bit in emulated Root Control register (Pali Rohár) - Fix issue in configuring reference clock (Pali Rohár) - Don't clear status bits for masked interrupts (Pali Rohár) - Don't mask unused interrupts (Pali Rohár) - Avoid code repetition in advk_pcie_rd_conf() (Marek Behún) - Retry config accesses on CRS response (Pali Rohár) - Simplify emulated Root Capabilities initialization (Pali Rohár) - Fix several link training issues (Pali Rohár) - Fix link-up checking via LTSSM (Pali Rohár) - Fix reporting of Data Link Layer Link Active (Pali Rohár) - Fix emulation of W1C bits (Marek Behún) - Fix MSI domain .alloc() method to return zero on 
success (Marek Behún) - Read entire 16-bit MSI vector in MSI handler, not just low 8 bits (Marek Behún) - Clear Root Port I/O Space, Memory Space, and Bus Master Enable bits at startup; PCI core will set those as necessary (Pali Rohár) - When operating as a Root Port, set class code to "PCI Bridge" instead of the default "Mass Storage Controller" (Pali Rohár) - Add emulation for PCI_BRIDGE_CTL_BUS_RESET since aardvark doesn't implement this per spec (Pali Rohár) - Add emulation of option ROM BAR since aardvark doesn't implement this per spec (Pali Rohár) MediaTek MT7621 PCIe controller driver: - Add MediaTek MT7621 PCIe host controller driver and DT binding (Sergio Paracuellos) Qualcomm PCIe controller driver: - Add SC8180x compatible string (Bjorn Andersson) - Add endpoint controller driver and DT binding (Manivannan Sadhasivam) - Restructure to use of_device_get_match_data() (Prasad Malisetty) - Add SC7280-specific pcie_1_pipe_clk_src handling (Prasad Malisetty) Renesas R-Car PCIe controller driver: - Remove unnecessary includes (Geert Uytterhoeven) Rockchip DesignWare PCIe controller driver: - Add DT binding (Simon Xue) Socionext UniPhier Pro5 controller driver: - Serialize INTx masking/unmasking (Kunihiko Hayashi) Synopsys DesignWare PCIe controller driver: - Run dwc .host_init() method before registering MSI interrupt handler so we can deal with pending interrupts left by bootloader (Bjorn Andersson) - Clean up Kconfig dependencies (Andy Shevchenko) - Export symbols to allow more modular drivers (Luca Ceresoli) TI DRA7xx PCIe controller driver: - Allow host and endpoint drivers to be modules (Luca Ceresoli) - Enable external clock if present (Luca Ceresoli) TI J721E PCIe driver: - Disable PHY when probe fails after initializing it (Christophe JAILLET) MicroSemi Switchtec management driver: - Return error to application when command execution fails because an out-of-band reset has cleared the device BARs, Memory Space Enable, etc (Kelvin Cao) - Fix MRPC error status handling issue (Kelvin Cao) - Mask out other bits when reading of management VEP instance ID (Kelvin Cao) - Return EOPNOTSUPP instead of ENOTSUPP from sysfs show functions (Kelvin Cao) - Add check of event support (Logan Gunthorpe) Miscellaneous: - Remove unused pci_pool wrappers, which have been replaced by dma_pool (Cai Huoqing) - Use 'unsigned int' instead of bare 'unsigned' (Krzysztof Wilczyński) - Use kstrtobool() directly, sans strtobool() wrapper (Krzysztof Wilczyński) - Fix some sscanf(), sprintf() format mismatches (Krzysztof Wilczyński) - Update PCI subsystem information in MAINTAINERS (Krzysztof Wilczyński) - Correct some misspellings (Krzysztof Wilczyński)" * tag 'pci-v5.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (137 commits) PCI: Add ACS quirk for Pericom PI7C9X2G switches PCI: apple: Configure RID to SID mapper on device addition iommu/dart: Exclude MSI doorbell from PCIe device IOVA range PCI: apple: Implement MSI support PCI: apple: Add INTx and per-port interrupt support PCI: kirin: Allow removing the driver PCI: kirin: De-init the dwc driver PCI: kirin: Disable clkreq during poweroff sequence PCI: kirin: Move the power-off code to a common routine PCI: kirin: Add power_off support for Kirin 960 PHY PCI: kirin: Allow building it as a module PCI: kirin: Add MODULE_* macros PCI: kirin: Add Kirin 970 compatible PCI: kirin: Support PERST# GPIOs for HiKey970 external PEX 8606 bridge PCI: apple: Set up reference clocks when probing PCI: apple: Add initial hardware bring-up PCI: of: 
Allow matching of an interrupt-map local to a PCI device of/irq: Allow matching of an interrupt-map local to an interrupt controller irqdomain: Make of_phandle_args_to_fwspec() generally available PCI: Do not enable AtomicOps on VFs ...
2021-11-04  iommu/dart: Exclude MSI doorbell from PCIe device IOVA range  (Marc Zyngier)
The MSI doorbell on Apple HW can be any address in the low 4GB range. However, the MSI write is matched by the PCIe block before hitting the iommu. It must thus be excluded from the IOVA range that is assigned to any PCIe device. Link: https://lore.kernel.org/r/20210929163847.2807812-9-maz@kernel.org Tested-by: Alyssa Rosenzweig <alyssa@rosenzweig.io> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Sven Peter <sven@svenpeter.dev>
2021-11-04  Merge tag 'iommu-updates-v5.16' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu Pull iommu updates from Joerg Roedel: - Intel IOMMU Updates from Lu Baolu: - Dump DMAR translation structure when DMA fault occurs - An optimization in the page table manipulation code - Use second level for GPA->HPA translation - Various cleanups - Arm SMMU Updates from Will - Minor optimisations to SMMUv3 command creation and submission - Numerous new compatible strings for Qualcomm SMMUv2 implementations - Fixes for the SWIOTLB based implementation of dma-iommu code for untrusted devices - Add support for r8a779a0 to the Renesas IOMMU driver and DT matching code for r8a77980 - A couple of cleanups and fixes for the Apple DART IOMMU driver - Make use of generic report_iommu_fault() interface in the AMD IOMMU driver - Various smaller fixes and cleanups * tag 'iommu-updates-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (35 commits) iommu/dma: Fix incorrect error return on iommu deferred attach iommu/dart: Initialize DART_STREAMS_ENABLE iommu/dma: Use kvcalloc() instead of kvzalloc() iommu/tegra-smmu: Use devm_bitmap_zalloc when applicable iommu/dart: Use kmemdup instead of kzalloc and memcpy iommu/vt-d: Avoid duplicate removing in __domain_mapping() iommu/vt-d: Convert the return type of first_pte_in_page to bool iommu/vt-d: Clean up unused PASID updating functions iommu/vt-d: Delete dev_has_feat callback iommu/vt-d: Use second level for GPA->HPA translation iommu/vt-d: Check FL and SL capability sanity in scalable mode iommu/vt-d: Remove duplicate identity domain flag iommu/vt-d: Dump DMAR translation structure when DMA fault occurs iommu/vt-d: Do not falsely log intel_iommu is unsupported kernel option iommu/arm-smmu-qcom: Request direct mapping for modem device iommu: arm-smmu-qcom: Add compatible for QCM2290 dt-bindings: arm-smmu: Add compatible for QCM2290 SoC iommu/arm-smmu-qcom: Add SM6350 SMMU compatible dt-bindings: arm-smmu: Add compatible for SM6350 SoC iommu/arm-smmu-v3: Properly handle the return value of arm_smmu_cmdq_build_cmd() ...
2021-11-01  Merge tag 'overflow-v5.16-rc1' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux Pull overflow updates from Kees Cook: "The end goal of the current buffer overflow detection work[0] is to gain full compile-time and run-time coverage of all detectable buffer overflows seen via array indexing or memcpy(), memmove(), and memset(). The str*() family of functions already have full coverage. While much of the work for these changes have been on-going for many releases (i.e. 0-element and 1-element array replacements, as well as avoiding false positives and fixing discovered overflows[1]), this series contains the foundational elements of several related buffer overflow detection improvements by providing new common helpers and FORTIFY_SOURCE changes needed to gain the introspection required for compiler visibility into array sizes. Also included are a handful of already Acked instances using the helpers (or related clean-ups), with many more waiting at the ready to be taken via subsystem-specific trees[2]. The new helpers are: - struct_group() for gaining struct member range introspection - memset_after() and memset_startat() for clearing to the end of structures - DECLARE_FLEX_ARRAY() for using flex arrays in unions or alone in structs Also included is the beginning of the refactoring of FORTIFY_SOURCE to support memcpy() introspection, fix missing and regressed coverage under GCC, and to prepare to fix the currently broken Clang support. Finishing this work is part of the larger series[0], but depends on all the false positives and buffer overflow bug fixes to have landed already and those that depend on this series to land. As part of the FORTIFY_SOURCE refactoring, a set of both a compile-time and run-time tests are added for FORTIFY_SOURCE and the mem*()-family functions respectively. The compile time tests have found a legitimate (though corner-case) bug[6] already. Please note that the appearance of "panic" and "BUG" in the FORTIFY_SOURCE refactoring are the result of relocating existing code, and no new use of those code-paths are expected nor desired. Finally, there are two tree-wide conversions for 0-element arrays and flexible array unions to gain sane compiler introspection coverage that result in no known object code differences. After this series (and the changes that have now landed via netdev and usb), we are very close to finally being able to build with -Warray-bounds and -Wzero-length-bounds. However, due corner cases in GCC[3] and Clang[4], I have not included the last two patches that turn on these options, as I don't want to introduce any known warnings to the build. 
Hopefully these can be solved soon" Link: https://lore.kernel.org/lkml/20210818060533.3569517-1-keescook@chromium.org/ [0] Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/log/?qt=grep&q=FORTIFY_SOURCE [1] Link: https://lore.kernel.org/lkml/202108220107.3E26FE6C9C@keescook/ [2] Link: https://lore.kernel.org/lkml/3ab153ec-2798-da4c-f7b1-81b0ac8b0c5b@roeck-us.net/ [3] Link: https://bugs.llvm.org/show_bug.cgi?id=51682 [4] Link: https://lore.kernel.org/lkml/202109051257.29B29745C0@keescook/ [5] Link: https://lore.kernel.org/lkml/20211020200039.170424-1-keescook@chromium.org/ [6] * tag 'overflow-v5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (30 commits) fortify: strlen: Avoid shadowing previous locals compiler-gcc.h: Define __SANITIZE_ADDRESS__ under hwaddress sanitizer treewide: Replace 0-element memcpy() destinations with flexible arrays treewide: Replace open-coded flex arrays in unions stddef: Introduce DECLARE_FLEX_ARRAY() helper btrfs: Use memset_startat() to clear end of struct string.h: Introduce memset_startat() for wiping trailing members and padding xfrm: Use memset_after() to clear padding string.h: Introduce memset_after() for wiping trailing members/padding lib: Introduce CONFIG_MEMCPY_KUNIT_TEST fortify: Add compile-time FORTIFY_SOURCE tests fortify: Allow strlen() and strnlen() to pass compile-time known lengths fortify: Prepare to improve strnlen() and strlen() warnings fortify: Fix dropped strcpy() compile-time write overflow check fortify: Explicitly disable Clang support fortify: Move remaining fortify helpers into fortify-string.h lib/string: Move helper functions out of string.c compiler_types.h: Remove __compiletime_object_size() cm4000_cs: Use struct_group() to zero struct cm4000_dev region can: flexcan: Use struct_group() to zero struct flexcan_regs regions ...
2021-11-01  Merge tag 'x86_cc_for_v5.16_rc1' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull generic confidential computing updates from Borislav Petkov: "Add an interface called cc_platform_has() which is supposed to be used by confidential computing solutions to query different aspects of the system. The intent behind it is to unify testing of such aspects instead of having each confidential computing solution add its own set of tests to code paths in the kernel, leading to an unwieldy mess" * tag 'x86_cc_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: treewide: Replace the use of mem_encrypt_active() with cc_platform_has() x86/sev: Replace occurrences of sev_es_active() with cc_platform_has() x86/sev: Replace occurrences of sev_active() with cc_platform_has() x86/sme: Replace occurrences of sme_active() with cc_platform_has() powerpc/pseries/svm: Add a powerpc version of cc_platform_has() x86/sev: Add an x86 version of cc_platform_has() arch/cc: Introduce a function to check for confidential computing features x86/ioremap: Selectively build arch override encryption functions
2021-10-31  Merge branches 'apple/dart', 'arm/mediatek', 'arm/renesas', 'arm/smmu', ↵  (Joerg Roedel)
'arm/tegra', 'iommu/fixes', 'x86/amd', 'x86/vt-d' and 'core' into next
2021-10-28  iommu/dma: Fix incorrect error return on iommu deferred attach  (Logan Gunthorpe)
scsi_dma_map() was reporting a failure during boot on an AMD machine with the IOMMU enabled. scsi_dma_map failed: request for 36 bytes! The issue was tracked down to a mistake in logic: the code should not return an error if iommu_deferred_attach() returns zero. Reported-by: Marshall Midden <marshallmidden@gmail.com> Fixes: dabb16f67215 ("iommu/dma: return error code from iommu_dma_map_sg()") Link: https://lore.kernel.org/all/CAD2CkAWjS8=kKwEEN4cgVNjyFORUibzEiCUA-X+SMtbo0JoMmA@mail.gmail.com Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20211027174757.119755-1-logang@deltatee.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
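A sketch of the corrected logic, assuming the iommu_deferred_attach() helper visible to dma-iommu; the surrounding function is illustrative:

#include <linux/iommu.h>

static int map_after_deferred_attach(struct device *dev,
				     struct iommu_domain *domain)
{
	int ret = iommu_deferred_attach(dev, domain);

	/* Only bail out on a real failure; zero means the attach succeeded. */
	if (ret)
		return ret;

	/* ... continue with the actual mapping ... */
	return 0;
}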
2021-10-27  iommu/dart: Initialize DART_STREAMS_ENABLE  (Sven Peter)
DART has an additional global register to control which streams are isolated. This register is a bit redundant since DART_TCR can already be used to control isolation and is usually initialized to DART_STREAM_ALL by the time we get control. Some DARTs (namely the one used for the audio controller) however have some streams disabled initially. Make sure those work by initializing DART_STREAMS_ENABLE during reset. Reported-by: Martin Povišer <povik@protonmail.com> Signed-off-by: Sven Peter <sven@svenpeter.dev> Reviewed-by: Hector Martin <marcan@marcan.st> Link: https://lore.kernel.org/r/20211019162253.45919-1-sven@svenpeter.dev Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-27  iommu/dma: Use kvcalloc() instead of kvzalloc()  (Gustavo A. R. Silva)
Use 2-factor argument form kvcalloc() instead of kvzalloc(). Link: https://github.com/KSPP/linux/issues/162 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Acked-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210928222229.GA280355@embeddedor Signed-off-by: Joerg Roedel <jroedel@suse.de>
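A sketch of the conversion; the page array allocated here is just an example of a count * size allocation:

#include <linux/mm.h>

static struct page **alloc_page_array(unsigned int count)
{
	/*
	 * Was: kvzalloc(count * sizeof(*pages), GFP_KERNEL).
	 * The 2-factor form lets the allocator catch multiplication overflow.
	 */
	return kvcalloc(count, sizeof(struct page *), GFP_KERNEL);
}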
2021-10-18  iommu/tegra-smmu: Use devm_bitmap_zalloc when applicable  (Christophe JAILLET)
'smmu->asids' is a bitmap. So use 'devm_bitmap_zalloc()' to simplify code, improve the semantics of the code and avoid some open-coded arithmetic in allocator arguments. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/2c0f4da80c3b5ef83299c651f69a563034c1c6cb.1632661557.git.christophe.jaillet@wanadoo.fr Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-18  iommu/dart: Use kmemdup instead of kzalloc and memcpy  (Wan Jiabing)
Fix following coccicheck warning: drivers/iommu/apple-dart.c:704:20-27: WARNING opportunity for kmemdup Signed-off-by: Wan Jiabing <wanjiabing@vivo.com> Acked-by: Sven Peter <sven@svenpeter.dev> Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io> Link: https://lore.kernel.org/r/20211013063441.29888-1-wanjiabing@vivo.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
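A sketch of the simplification flagged by coccicheck; the helper is illustrative:

#include <linux/string.h>
#include <linux/slab.h>

static void *dup_cfg(const void *src, size_t len)
{
	/* One call replaces the kzalloc() + memcpy() pair. */
	return kmemdup(src, len, GFP_KERNEL);
}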
2021-10-18  iommu/vt-d: Avoid duplicate removing in __domain_mapping()  (Longpeng(Mike))
The __domain_mapping() always removes the pages in the range from 'iov_pfn' to 'end_pfn', but the 'end_pfn' is always the last pfn of the range that the caller wants to map. This introduces many duplicated removals and makes the map operation take too long, for example: Map iova=0x100000,nr_pages=0x7d61800 iov_pfn: 0x100000, end_pfn: 0x7e617ff iov_pfn: 0x140000, end_pfn: 0x7e617ff iov_pfn: 0x180000, end_pfn: 0x7e617ff iov_pfn: 0x1c0000, end_pfn: 0x7e617ff iov_pfn: 0x200000, end_pfn: 0x7e617ff ... it takes about 50ms in total. We can reduce the cost by recalculating the 'end_pfn' and limiting it to the boundary of the end of this pte page. Map iova=0x100000,nr_pages=0x7d61800 iov_pfn: 0x100000, end_pfn: 0x13ffff iov_pfn: 0x140000, end_pfn: 0x17ffff iov_pfn: 0x180000, end_pfn: 0x1bffff iov_pfn: 0x1c0000, end_pfn: 0x1fffff iov_pfn: 0x200000, end_pfn: 0x23ffff ... it only needs 9ms now. This also removes a meaningless BUG_ON() in __domain_mapping(). Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com> Tested-by: Liujunjie <liujunjie23@huawei.com> Link: https://lore.kernel.org/r/20211008000433.1115-1-longpeng2@huawei.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211014053839.727419-10-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
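A sketch of the end_pfn recalculation described above; the helper name and the pfns_per_pte_page parameter are illustrative, and the stride must be a power of two for ALIGN() to apply:

#include <linux/kernel.h>

static unsigned long pte_page_end_pfn(unsigned long iov_pfn,
				      unsigned long pfns_per_pte_page,
				      unsigned long last_pfn)
{
	/* Last pfn covered by the pte page that iov_pfn falls into. */
	unsigned long boundary = ALIGN(iov_pfn + 1, pfns_per_pte_page) - 1;

	/* e.g. iov_pfn 0x100000, stride 0x40000 -> 0x13ffff, as in the log. */
	return min(last_pfn, boundary);
}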
2021-10-18  iommu/vt-d: Clean up unused PASID updating functions  (Fenghua Yu)
update_pasid() and its call chain are currently unused in the tree because Thomas disabled the ENQCMD feature. The feature will be re-enabled shortly using a different approach and update_pasid() and its call chain will not be used in the new approach. Remove the useless functions. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lore.kernel.org/r/20210920192349.2602141-1-fenghua.yu@intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211014053839.727419-8-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-18  iommu/vt-d: Delete dev_has_feat callback  (Lu Baolu)
The commit 262948f8ba573 ("iommu: Delete iommu_dev_has_feature()") has deleted the iommu_dev_has_feature() interface. Remove the dev_has_feat callback to avoid dead code. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20210929072030.1330225-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20211014053839.727419-7-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-18  iommu/vt-d: Use second level for GPA->HPA translation  (Lu Baolu)
The IOMMU VT-d implementation uses the first level for GPA->HPA translation by default. Although both the first level and the second level could handle the DMA translation, they're different in some way. For example, the second level translation has separate controls for the Access/Dirty page tracking. With the first level translation, there's no such control. On the other hand, the second level translation has the page-level control for forcing snoop, but the first level only has global control with pasid granularity. This uses the second level for GPA->HPA translation so that we can provide a consistent hardware interface for use cases like dirty page tracking for live migration. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20210926114535.923263-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20211014053839.727419-6-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-18  iommu/vt-d: Check FL and SL capability sanity in scalable mode  (Lu Baolu)
An iommu domain could be allocated and mapped before it's attached to any device. This requires that in scalable mode, when the domain is allocated, the format (FL or SL) of the page table must be determined. In order to achieve this, the platform should support consistent SL or FL capabilities on all IOMMU's. This adds a check for this and aborts IOMMU probing if it doesn't meet this requirement. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20210926114535.923263-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20211014053839.727419-5-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-18  iommu/vt-d: Remove duplicate identity domain flag  (Lu Baolu)
The iommu_domain data structure already has the "type" field to keep the type of a domain. It's unnecessary to have the DOMAIN_FLAG_STATIC_IDENTITY flag in the vt-d implementation. This cleans it up with no functionality change. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20210926114535.923263-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20211014053839.727419-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-10-18  iommu/vt-d: Dump DMAR translation structure when DMA fault occurs  (Kyung Min Park)
When the dmar translation fault happens, the kernel prints a single-line fault reason with the corresponding hexadecimal code defined in the Intel VT-d specification. Currently, when a user wants to debug the translation fault in detail, debugfs is used for dumping the dmar_translation_struct, which is not available when the kernel fails to boot. Dump the DMAR translation structure, pagewalk the IO page table and print the page table entry when the fault happens. This takes effect only when CONFIG_DMAR_DEBUG is enabled. Signed-off-by: Kyung Min Park <kyung.min.park@intel.com> Link: https://lore.kernel.org/r/20210815203845.31287-1-kyung.min.park@intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20211014053839.727419-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>