Age | Commit message | Author |
|
'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next
|
|
Move AMD Kconfig and Makefile bits down into the amd directory
with the rest of the AMD specific files.
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Link: https://lore.kernel.org/r/20200630200636.48600-3-jsnitsel@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Move Intel Kconfig and Makefile bits down into intel directory
with the rest of the Intel specific files.
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200630200636.48600-2-jsnitsel@redhat.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The VT-d spec requires (10.4.4 Global Command Register, TE field) that:
Hardware implementations supporting DMA draining must drain any in-flight
DMA read/write requests queued within the Root-Complex before completing
the translation enable command and reflecting the status of the command
through the TES field in the Global Status register.
Unfortunately, some integrated graphics devices fail to do so after certain
power state transitions. As a result, the system might get stuck in
iommu_disable_translation(), waiting for the completion of the TE transition.
This provides a quirk list for those devices and skips TE disabling if
the quirk hits.
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=208363
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=206571
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Koba Ko <koba.ko@canonical.com>
Tested-by: Jun Miao <jun.miao@windriver.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200723013437.2268-1-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
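A tiny self-contained C sketch of the kind of quirk check this entry describes; the names and register layout are illustrative only, not the actual intel-iommu code:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define GCMD_TE (1u << 31)      /* "translation enable" bit, illustrative */

  static bool skip_te_disable;    /* set when a device on the quirk list is found */

  /* Normally we must poll until the status register shows TE cleared;
   * on quirky devices that never complete the transition, skip it. */
  static void disable_translation(uint32_t *gcmd, const uint32_t *gsts)
  {
      if (skip_te_disable) {
          printf("quirk hit: leaving TE alone\n");
          return;
      }
      *gcmd &= ~GCMD_TE;
      while (*gsts & GCMD_TE)
          ;                       /* wait for the hardware to drain and ack */
  }

  int main(void)
  {
      uint32_t gcmd = GCMD_TE, gsts = 0;

      skip_te_disable = true;     /* as if the quirk list matched */
      disable_translation(&gcmd, &gsts);
      return 0;
  }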
|
|
The ARM page tables are currently always allocated with GFP_ATOMIC, but
commit 781ca2de89ba ("iommu: Add gfp parameter to iommu_ops::map") added
a gfp_t parameter to iommu_ops->map(). io_pgtable_ops->map() should
therefore use the gfp parameter passed down from iommu_ops->map() to
allocate page table pages, which avoids draining the memory allocator's
atomic pools in non-atomic contexts.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/3093df4cb95497aaf713fca623ce4ecebb197c2e.1591930156.git.baolin.wang@linux.alibaba.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
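A plain-C sketch of the change in spirit: the page-table layer honours the allocation flag passed down from map() instead of hard-coding an atomic one (all names here are hypothetical):

  #include <stdio.h>
  #include <stdlib.h>

  enum gfp { GFP_KERNEL, GFP_ATOMIC };    /* stand-ins for the kernel flags */

  static void *pgtable_alloc(size_t size, enum gfp gfp)
  {
      printf("allocating %zu bytes with %s\n", size,
             gfp == GFP_ATOMIC ? "GFP_ATOMIC" : "GFP_KERNEL");
      return malloc(size);
  }

  static int pgtable_map(unsigned long iova, unsigned long pa, enum gfp gfp)
  {
      void *pte_page = pgtable_alloc(4096, gfp);  /* was: hard-coded GFP_ATOMIC */

      if (!pte_page)
          return -1;
      free(pte_page);     /* a real driver would install it in the table */
      return 0;
  }

  int main(void)
  {
      return pgtable_map(0x1000, 0x2000, GFP_KERNEL);  /* sleepable context */
  }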
|
|
__iommu_map_sg() is now used only in iommu.c, so mark it static.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/ab722e9970739929738066777b8ee7930e32abd5.1591930156.git.baolin.wang@linux.alibaba.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
As the Intel VT-d files have been moved to their own subdirectory, the
prefix makes no sense anymore. No functional changes.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-13-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
After page requests are handled, software must respond to the device
which raised the page request with the result. This is done through
the iommu_ops->page_response() callback if the request was reported
outside of the vendor IOMMU driver through iommu_report_device_fault().
This adds the VT-d implementation of the page_response callback.
Co-developed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Co-developed-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-12-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
A PASID might be bound to a page table from a VM guest via the iommu
ops.sva_bind_gpasid. In this case, when a DMA page fault is detected
on the physical IOMMU, we need to inject the page fault request into
the guest. After the guest completes handling the page fault, a page
response needs to be sent back via iommu_ops->page_response().
This adds support for reporting a page request fault. Any external module
interested in handling this fault should register a handler with
iommu_register_device_fault_handler().
Co-developed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Co-developed-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-11-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
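A toy, self-contained model of the report/handler/response flow described above; the types and function names are stand-ins, not the iommu fault API:

  #include <stdio.h>

  struct page_request { int pasid; unsigned long addr; };
  typedef int (*fault_handler_t)(const struct page_request *req, void *data);

  static fault_handler_t registered_handler;
  static void *handler_data;

  static int register_fault_handler(fault_handler_t h, void *data)
  {
      if (registered_handler)
          return -1;              /* only one handler per device */
      registered_handler = h;
      handler_data = data;
      return 0;
  }

  static void page_response(int pasid, int success)
  {
      printf("respond: pasid=%d %s\n", pasid, success ? "SUCCESS" : "INVALID");
  }

  static int guest_handler(const struct page_request *req, void *data)
  {
      /* e.g. inject into the guest; here we just answer immediately */
      page_response(req->pasid, 1);
      return 0;
  }

  int main(void)
  {
      struct page_request req = { .pasid = 5, .addr = 0x1000 };

      register_fault_handler(guest_handler, NULL);
      if (registered_handler)
          registered_handler(&req, handler_data);   /* "report" the fault */
      return 0;
  }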
|
|
There are several places in the code that need to look up the svm and
sdev pointers for a given PASID and device. Add a helper to do this, for
code consolidation and readability.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-10-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
It is refactored in two ways:
- Make it global so that it can be used in other files.
- Make bus/devfn optional so that callers can ignore these two returned
values when they only want to get the corresponding iommu pointer.
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-9-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
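A minimal illustration of the optional bus/devfn out-parameter pattern the refactoring adds (hypothetical names):

  #include <stdio.h>

  struct iommu_unit { int id; };

  static struct iommu_unit unit = { .id = 7 };

  /* Callers that only want the iommu pointer can pass NULL for bus/devfn. */
  static struct iommu_unit *device_to_iommu(int dev, unsigned char *bus,
                                            unsigned char *devfn)
  {
      if (bus)
          *bus = 0x3a;            /* fill only when requested */
      if (devfn)
          *devfn = 0x10;
      return &unit;
  }

  int main(void)
  {
      struct iommu_unit *iommu = device_to_iommu(0, NULL, NULL);

      printf("iommu id %d\n", iommu->id);
      return 0;
  }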
|
|
For the unlikely use case where multiple aux domains from the same pdev
are attached to a single guest and then bound to a single process
(thus the same PASID) within that guest, we cannot easily support it by
refcounting the number of users, as there is only one SL page table per
PASID while we have multiple aux domains and thus multiple SL page tables
for the same PASID.
Extra guest PASID unbinds can happen due to races between the normal and
exception paths. Termination of one aux domain may affect others unless
we actively track and switch aux domains to ensure the validity of the SL
page tables and TLB state in the shared PASID entry.
Support for sharing second-level PGDs across domains could reduce the
complexity, but this is not available due to the limitations of the VFIO
container architecture. We can revisit this decision once PGD sharing
becomes available.
Overall, the complexity and potential for glitches do not warrant this
unlikely use case, so it is removed by this patch.
Fixes: 56722a4398a30 ("iommu/vt-d: Add bind guest PASID support")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20200724014925.15523-8-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
For guest-requested IOTLB invalidation, the address and mask are provided
as part of the invalidation data. VT-d HW silently ignores any address
bits below the mask. SW shall also allow such a case, but warn if the
address does not align with the mask. This patch relaxes the fault
handling from an error to a warning and proceeds with the invalidation
request using the given mask.
Fixes: 6ee1b77ba3ac0 ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-7-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
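A small sketch of the relaxed handling: warn on a misaligned address, then proceed with the address masked down to the granule (illustrative values, PAGE_SHIFT assumed to be 12):

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12

  /* The hardware ignores address bits below the mask anyway, so warn
   * instead of failing, and carry on with the aligned address. */
  static uint64_t align_inv_addr(uint64_t addr, unsigned int mask)
  {
      uint64_t granule = 1ULL << (PAGE_SHIFT + mask);

      if (addr & (granule - 1))
          fprintf(stderr, "warning: addr 0x%llx not aligned to mask %u\n",
                  (unsigned long long)addr, mask);
      return addr & ~(granule - 1);
  }

  int main(void)
  {
      printf("0x%llx\n", (unsigned long long)align_inv_addr(0x123456, 4));
      return 0;
  }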
|
|
For guest SVA usage, in order to optimize for fewer VMEXITs, a guest
request for an IOTLB flush also includes the device TLB.
On the host side, the IOMMU driver performs the IOTLB and an implicit
devTLB invalidation. When PASID-selective granularity is requested by the
guest, we need to derive the equivalent address range for the devTLB
instead of using the address information in the UAPI data, because,
unlike the IOTLB flush, the devTLB flush does not support PASID-selective
granularity. That is to say, we need to set the following in the
PASID-based devTLB invalidation descriptor:
- entire 64-bit range in address ~(0x1 << 63)
- S bit = 1 (VT-d CH 6.5.2.6).
Without this fix, the device TLB flush range is not set properly for
PASID-selective granularity. This patch also merges the devTLB flush code
for the implicit and explicit cases.
Fixes: 6ee1b77ba3ac ("iommu/vt-d: Add svm/sva invalidate function")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-6-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
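A rough sketch of the size-encoded address convention this relies on: with S = 1, low address bits are set so that the position of the lowest zero bit marks the range; treat the exact layout here as an assumption rather than a quote of the spec:

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12

  /* base aligned to 2^(PAGE_SHIFT+order); bits below that, except the top
   * one of the low field, are set to 1 to encode a 2^order-page range. */
  static uint64_t size_encoded_addr(uint64_t base, unsigned int order)
  {
      uint64_t mask;

      if (PAGE_SHIFT + order >= 64)
          mask = ~0ULL;                           /* whole 64-bit space */
      else
          mask = (1ULL << (PAGE_SHIFT + order)) - 1;
      return (base & ~mask) | (mask >> 1);
  }

  int main(void)
  {
      /* Whole-address-space flush: yields ~(0x1 << 63), as in the entry. */
      printf("0x%llx\n",
             (unsigned long long)size_encoded_addr(0, 64 - PAGE_SHIFT));
      /* 16 pages at 0x40000: bits 0-14 set, bit 15 clear marks the size. */
      printf("0x%llx\n", (unsigned long long)size_encoded_addr(0x40000, 4));
      return 0;
  }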
|
|
Address information for device TLB invalidation comes from userspace
when the device is directly assigned to a guest with vIOMMU support.
VT-d requires a page-aligned address. This patch checks and enforces that
the address is page aligned; otherwise reserved bits can be set in the
invalidation descriptor, and an unrecoverable fault will be reported due
to the non-zero value in the reserved bits.
Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
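A minimal version of the alignment check, in plain C:

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SIZE 4096

  /* Reject user-supplied invalidation addresses that are not page aligned,
   * so reserved low bits never end up in the descriptor. */
  static int check_inv_addr(uint64_t addr)
  {
      if (addr & (PAGE_SIZE - 1))
          return -EINVAL;
      return 0;
  }

  int main(void)
  {
      printf("%d %d\n", check_inv_addr(0x2000), check_inv_addr(0x2004));
      return 0;
  }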
|
|
A devTLB flush can be used for DMA requests both with and without PASIDs:
requests without a PASID use PASID#0 (RID2PASID), while SVA usage employs
a non-zero PASID.
This patch adds a check of the PASID value such that a devTLB flush with
PASID is used for the SVA case. This is more efficient in that multiple
PASIDs can be used by a single device; when tearing down a PASID entry we
shall flush only the devTLB entries specific to that PASID.
Fixes: 6f7db75e1c46 ("iommu/vt-d: Add second level page table")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200724014925.15523-4-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
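A toy sketch of selecting the flush type from the PASID value (names illustrative):

  #include <stdio.h>

  #define RID2PASID 0     /* PASID#0 stands in for "DMA without PASID" */

  /* PASID#0 means the plain (non-PASID) devTLB flush; anything else uses
   * the PASID-based flush so only that PASID's entries are dropped. */
  static void flush_dev_tlb(int pasid)
  {
      if (pasid == RID2PASID)
          printf("devTLB flush (no PASID)\n");
      else
          printf("PASID-based devTLB flush, pasid=%d\n", pasid);
  }

  int main(void)
  {
      flush_dev_tlb(RID2PASID);
      flush_dev_tlb(10);
      return 0;
  }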
|
|
Global page support was removed from the VT-d spec 3.0 for devTLB
invalidation. This patch removes the bits for vSVA. A similar change was
already made for native SVA; see the link below.
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/linux-iommu/20190830142919.GE11578@8bytes.org/T/
Link: https://lore.kernel.org/r/20200724014925.15523-3-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The device may be torn down, but the domain should still be valid. Let's
use that as the TLB flush ops cookie.
This fixes a problem reported in [1].
[1] https://lkml.org/lkml/2020/7/20/104
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: 09b5dfff9ad6 ("iommu/qcom: Use accessor functions for iommu private data")
Link: https://lore.kernel.org/r/20200720155217.274994-1-robdclark@gmail.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The sparse tool complains as follows:
drivers/iommu/iommu.c:386:5: warning:
symbol 'iommu_insert_resv_region' was not declared. Should it be static?
drivers/iommu/iommu.c:2182:5: warning:
symbol '__iommu_map' was not declared. Should it be static?
Those functions are not used outside of iommu.c, so mark them static.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Link: https://lore.kernel.org/r/20200713142542.50294-1-weiyongjun1@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
free_pages() already checks for a zero address, so remove the redundant
zero check here.
Signed-off-by: Libing Zhou <libing.zhou@nokia-sbell.com>
Link: https://lore.kernel.org/r/20200722064450.GA63618@hzling02.china.nsn-net.net
Signed-off-by: Joerg Roedel <jroedel@suse.de>
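The same pattern in plain C: free(NULL) is defined as a no-op, so the guard is redundant, just as free_pages() tolerates a zero address:

  #include <stdlib.h>

  int main(void)
  {
      char *buf = NULL;

      /* Redundant:  if (buf) free(buf);  */
      free(buf);      /* free(NULL) is a no-op, like free_pages(0, order) */
      return 0;
  }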
|
|
It is possible for the call to omap_iommu_dump_ctx to return
a negative error number, so check for the failure and return
the error number rather than pass the negative value to
simple_read_from_buffer.
Fixes: 14e0e6796a0d ("OMAP: iommu: add initial debugfs support")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20200714192211.744776-1-colin.king@canonical.com
Addresses-Coverity: ("Improper use of negative value")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The name "update_pte" is a little too generic, and can end up clashing
with architecture pagetable code leaked out of common mm headers. Rename
it to something more appropriately namespaced.
Reported-by: kernel test robot <lkp@intel.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/829bb5dc18e734870b75db673ddce86e7e37fc73.1594727968.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Add an entry for r8a77961 in the soc_rcar_gen3[] list so that we don't
enable the IOMMU unconditionally.
Fixes: 17fe161816398 ("iommu/renesas: Add support for r8a77961")
Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/1594722055-9298-3-git-send-email-prabhakar.mahadev-lad.rj@bp.renesas.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Add support for RZ/G2H (R8A774E1) SoC IPMMUs.
Signed-off-by: Marian-Cristian Rotariu <marian-cristian.rotariu.rb@bp.renesas.com>
Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/1594722055-9298-2-git-send-email-prabhakar.mahadev-lad.rj@bp.renesas.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Add global/context fault hooks to allow vendor-specific implementations
to override the default fault interrupt handlers.
Update the NVIDIA implementation to override the default global/context
fault interrupt handlers and handle interrupts across the two ARM
MMU-500s that are programmed identically.
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-by: Nicolin Chen <nicoleotsuka@gmail.com>
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Thierry Reding <thierry.reding@gmail.com>
Link: https://lore.kernel.org/r/20200718193457.30046-6-vdumpa@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
NVIDIA's Tegra194 SoC has three ARM MMU-500 instances.
It uses two of the ARM MMU-500s together to interleave IOVA accesses
across them, and these two must be programmed identically. This
implementation supports programming those two ARM MMU-500s identically.
The third ARM MMU-500 instance is supported by the standard arm-smmu.c
driver itself.
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-by: Nicolin Chen <nicoleotsuka@gmail.com>
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Thierry Reding <thierry.reding@gmail.com>
Link: https://lore.kernel.org/r/20200718193457.30046-4-vdumpa@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
ioremap the SMMU MMIO region before calling into the implementation init.
This is necessary to make the mapped address available during
vendor-specific implementation init.
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-by: Nicolin Chen <nicoleotsuka@gmail.com>
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Thierry Reding <thierry.reding@gmail.com>
Link: https://lore.kernel.org/r/20200718193457.30046-3-vdumpa@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Move the TLB timeout and spin count macros to the header file to allow
using them from vendor-specific implementations.
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-by: Nicolin Chen <nicoleotsuka@gmail.com>
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Thierry Reding <thierry.reding@gmail.com>
Link: https://lore.kernel.org/r/20200718193457.30046-2-vdumpa@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into master
Pull irq fixes from Thomas Gleixner:
"Two fixes for the interrupt subsystem:
- Make the handling of the firmware node consistent and do not free
the node after the domain has been created successfully. The core
code stores a pointer to it which can lead to a use after free or
double free.
This used to "work" because the pointer was not stored when the
initial code was written, but at some point later it was required
to store it. Of course nobody noticed that the existing users break
that way.
- Handle affinity setting on inactive interrupts correctly when
hierarchical irq domains are enabled.
When interrupts are inactive with the modern hierarchical irqdomain
design, the interrupt chips are not necessarily in a state where
affinity changes can be handled. The legacy irq chip design allowed
this because interrupts are immediately fully initialized at
allocation time. X86 has a hacky workaround for this, but other
implementations do not.
This caused malfunction on GIC-V3. Instead of playing whack-a-mole
to find all affected drivers, change the core code to store the
requested affinity setting and then establish it when the interrupt
is allocated, which makes the X86 hack go away"
* tag 'irq-urgent-2020-07-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
genirq/affinity: Handle affinity setting on inactive interrupts correctly
irqdomain/treewide: Keep firmware node unconditionally allocated
|
|
Set "cmq" -> "cmdq".
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Due to erratum #582743, the Marvell Armada-AP806 can't perform 64-bit
accesses to ARM SMMUv2 registers.
Provide the relevant implementation hooks:
- split writeq/readq into two writel/readl accesses.
- mask the MMU_IDR2.PTFSv8 fields so the AArch64 format is not used (only
AARCH32_L), since 32-bit access is not supported with the AArch64 format.
Note that most 64-bit registers like TTBRn can be accessed as two 32-bit
halves without issue, and the AArch32 format ensures that the register
writes which must be atomic (for TLBI etc.) need only be 32 bits wide.
Signed-off-by: Hanna Hawa <hannah@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200715070649.18733-3-tn@semihalf.com
Signed-off-by: Will Deacon <will@kernel.org>
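A self-contained sketch of splitting a 64-bit register write into two 32-bit writes; the low-then-high ordering here is purely illustrative, the driver picks whatever ordering the erratum allows:

  #include <stdint.h>
  #include <stdio.h>

  static void writel(volatile uint32_t *reg, uint32_t val) { *reg = val; }

  /* Emulate a 64-bit register as two adjacent 32-bit registers. */
  static void writeq_split(volatile uint32_t *reg, uint64_t val)
  {
      writel(reg, (uint32_t)val);                 /* lower 32 bits */
      writel(reg + 1, (uint32_t)(val >> 32));     /* upper 32 bits */
  }

  int main(void)
  {
      uint32_t regs[2] = { 0, 0 };

      writeq_split(regs, 0x1122334455667788ULL);
      printf("lo=0x%x hi=0x%x\n", regs[0], regs[1]);
      return 0;
  }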
|
|
The 'cfg_probe' hook is called at the very end of the configuration
probing procedure, so feature overrides and workarounds can become
complex, as for ID register fixups. In preparation for adding the Marvell
errata, move 'cfg_probe' a bit earlier so we have a chance to adjust the
detected features before we start consuming them.
Since the Cavium quirk (the only user) does not alter features,
it is safe to do so.
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Tomasz Nowicki <tn@semihalf.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20200715070649.18733-2-tn@semihalf.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Quite a few non-OF/ACPI users of irqdomains allocate firmware nodes of
type IRQCHIP_FWNODE_NAMED or IRQCHIP_FWNODE_NAMED_ID and free them right
after creating the irqdomain. The only purpose of these FW nodes is to
convey name information. When this was introduced, the core code did not
store the pointer to the node in the irqdomain. A recent change stored
the firmware node pointer in the irqdomain for other reasons and failed
to notice that the usage sites which do the
alloc_fwnode/create_domain/free_fwnode sequence are broken by this.
Storing a dangling pointer is dangerous in itself, but in case the domain
is destroyed later on, this leads to a double free.
Remove the freeing of the firmware node after creating the irqdomain from
all affected call sites to cure this.
Fixes: 711419e504eb ("irqdomain: Add the missing assignment of domain->fwnode for named fwnode")
Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/873661qakd.fsf@nanos.tec.linutronix.de
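A tiny model, not the irqdomain API, of why freeing the node breaks things once the created object keeps the pointer:

  #include <stdlib.h>
  #include <string.h>

  struct fwnode { char name[16]; };
  struct domain { struct fwnode *fwnode; };   /* the core now stores this pointer */

  static struct domain *create_domain(struct fwnode *fn)
  {
      struct domain *d = malloc(sizeof(*d));

      if (d)
          d->fwnode = fn;     /* kept for later lookups and teardown */
      return d;
  }

  int main(void)
  {
      struct fwnode *fn = malloc(sizeof(*fn));
      struct domain *d;

      if (!fn)
          return 1;
      strcpy(fn->name, "my-irqchip");
      d = create_domain(fn);
      /*
       * Broken pattern: free(fn) right here would leave d->fwnode dangling,
       * and a later teardown that frees d->fwnode would be a double free.
       * The fix keeps fn allocated for the domain's lifetime.
       */
      free(d);
      free(fn);
      return 0;
  }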
|
|
This fixes a compile error when cross-compiling the driver
on x86-32.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Link: https://lore.kernel.org/r/20200713101648.32056-1-joro@8bytes.org
|
|
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.
Deterministic algorithm:
For each file:
  If not .svg:
    For each line:
      If doesn't contain `\bxmlns\b`:
        For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
          If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
            If both the HTTP and HTTPS versions
            return 200 OK and serve the same content:
              Replace HTTP with HTTPS.
Signed-off-by: Alexander A. Klimov <grandmaster@al2klimov.de>
Link: https://lore.kernel.org/r/20200708210434.22518-1-grandmaster@al2klimov.de
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
1. Starting from mt6779, INVLDT_SEL moves to offset 0x2c, so add the
REG_MMU_INV_SEL_GEN2 definition and let mt6779 use it.
2. Add mt6779_data to support the mm_iommu HW init.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Yong Wu <yong.wu@mediatek.com>
Link: https://lore.kernel.org/r/20200703044127.27438-11-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The MMU_CTRL register of MT8173 is different from other SoCs.
Its in_order_wr_en is bit[9], which is zero by default.
Other SoCs have the victim_tlb_en feature mapped to bit[12].
This bit is set to one by default. We need to preserve the bit
when setting F_MMU_TF_PROT_TO_PROGRAM_ADDR, as otherwise the
bit will be cleared and IOMMU performance will drop.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Yong Wu <yong.wu@mediatek.com>
Link: https://lore.kernel.org/r/20200703044127.27438-10-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
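A small sketch of the read-modify-write this implies, with made-up bit positions and field values standing in for the real register layout:

  #include <stdint.h>
  #include <stdio.h>

  #define F_MMU_TF_PROT_MASK              (3u << 4)   /* made-up field */
  #define F_MMU_TF_PROT_TO_PROGRAM_ADDR   (2u << 4)
  #define F_MMU_VICTIM_TLB_EN             (1u << 12)  /* bit[12], default 1 */

  /* Read-modify-write so that an already-set feature bit survives a field
   * update instead of being clobbered by a plain register write. */
  static uint32_t update_ctrl(uint32_t reg)
  {
      reg &= ~F_MMU_TF_PROT_MASK;           /* clear only the field we own */
      reg |= F_MMU_TF_PROT_TO_PROGRAM_ADDR;
      return reg;                           /* bit 12 is preserved */
  }

  int main(void)
  {
      uint32_t reg = F_MMU_VICTIM_TLB_EN;   /* default: victim TLB enabled */

      printf("0x%x\n", update_ctrl(reg));   /* 0x1020: both settings kept */
      return 0;
  }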
|
|
Starting with mt6779, the memory protection PA alignment needs to be
extended from 128 bytes to 256 bytes, which allows sending the maximum
amount of data. So we use a separate patch to modify it.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-9-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Some platforms (e.g. mt6779) need to improve performance by setting the
REG_MMU_WR_LEN_CTRL register, and we can use the WR_THROT_EN macro to
control whether we need to set it. If the register keeps its default
value, the IOMMU will send commands to the EMI without restriction; as
the number of outstanding commands grows, EMI performance will drop. So
when more than ten commands (the default value) are pending for the EMI,
the IOMMU will stop sending commands to the EMI to preserve EMI
performance, by enabling the write throttling mechanism (bit[5][21]=0)
in the MMU_WR_LEN_CTRL register.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-8-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
The maximum number of larbs that the IOMMU HW supports is 8 (larb0~larb7
in the diagram below).
If the number of larbs is over 8, we use a sub_common for merging
several larbs into one larb. In this case, we extend the larb_id:
bit[11:9] means the common-id;
bit[8:7] means the subcommon-id;
From these two values, we can get the real larb number when a
translation fault happens.
The diagram is as below:
                  EMI
                   |
                 IOMMU
                   |
            -----------------
            |               |
         common1         common0
            |               |
            -----------------
                   |
               smi common
                   |
  ------------------------------------
  |      |      |      |      |      |
 3'd0   3'd1   3'd2   3'd3   ...    3'd7    <- common_id (max is 8)
  |      |      |      |      |      |
Larb0  Larb1    |    Larb3   ...   Larb7
                |
          smi sub common
                |
     --------------------------
     |        |       |       |
    2'd0     2'd1    2'd2    2'd3           <- sub_common_id (max is 4)
     |        |       |       |
   Larb8    Larb9   Larb10  Larb11
In this patch we extend larb_remap[] to larb_remap[8][4] for this.
larb_remap[x][y]: x means the common-id above, y means the subcommon-id
above.
We can also distinguish whether the M4U HW has a sub_common by the
HAS_SUB_COMM macro.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-7-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
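A minimal decode of the extended larb id as described above (only the two bit fields mentioned are modelled):

  #include <stdio.h>

  /* bits [11:9] = common id, bits [8:7] = sub-common id */
  static void decode_larb(unsigned int regval)
  {
      unsigned int common_id = (regval >> 9) & 0x7;
      unsigned int sub_comm_id = (regval >> 7) & 0x3;

      printf("common %u, sub-common %u\n", common_id, sub_comm_id);
  }

  int main(void)
  {
      decode_larb((3u << 9) | (2u << 7));   /* common 3, sub-common 2 */
      return 0;
  }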
|
|
For mt6779, the MMU_INV_SEL register's offset changes from 0x38 to 0x2c,
so put inv_sel_reg in the plat_data so that we can use it.
In addition, rename it to REG_MMU_INV_SEL_GEN1 and use that for SoCs
before mt6779.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Yong Wu <yong.wu@mediatek.com>
Link: https://lore.kernel.org/r/20200703044127.27438-6-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Add the F_MMU_IN_ORDER_WR_EN_MASK and F_MMU_STANDARD_AXI_MODE_EN_MASK
definitions in the MISC_CTRL register.
F_MMU_STANDARD_AXI_MODE_EN_MASK:
If we set F_MMU_STANDARD_AXI_MODE_EN_MASK (bit[3][19] = 0, not following
the standard AXI protocol), the IOMMU will prioritize sending urgent read
commands over normal read commands. This improves performance.
F_MMU_IN_ORDER_WR_EN_MASK:
If we set F_MMU_IN_ORDER_WR_EN_MASK (bit[1][17] = 0, out-of-order write),
the IOMMU will re-order write commands and send the write commands with
higher priority first. Otherwise write commands are sent in order.
The feature is controlled by the OUT_ORDER_WR_EN platform data flag.
Suggested-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-5-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Given the fact that we are adding more and more plat_data bool values,
it makes sense to use a u32 flags register instead and add the
appropriate macro definitions to set a flag and check whether it is
present.
No functional change.
Suggested-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Yong Wu <yong.wu@mediatek.com>
Link: https://lore.kernel.org/r/20200703044127.27438-4-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
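A sketch of the flags-plus-macro pattern being suggested; the flag and macro names here are illustrative, not the driver's:

  #include <stdio.h>

  #define BIT(n)                  (1u << (n))
  #define HAS_4GB_MODE            BIT(0)      /* example flags */
  #define HAS_SUB_COMM            BIT(1)
  #define MTK_HAS_FLAG(data, f)   (!!((data)->flags & (f)))

  struct plat_data { unsigned int flags; };

  int main(void)
  {
      struct plat_data mt6779 = { .flags = HAS_SUB_COMM };

      printf("sub_comm: %d, 4gb: %d\n",
             MTK_HAS_FLAG(&mt6779, HAS_SUB_COMM),
             MTK_HAS_FLAG(&mt6779, HAS_4GB_MODE));
      return 0;
  }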
|
|
For the IOMMU register at offset 0x48, only the older mt8173/mt8183 use
the name STANDARD_AXI_MODE; all the latest SoCs extend the register with
more features in different bits, for example axi_mode, in_order_en,
coherent_en and so on. So renaming it to REG_MMU_MISC_CTRL is more
appropriate.
This patch only renames the register; there is no functional change.
Signed-off-by: Chao Hao <chao.hao@mediatek.com>
Reviewed-by: Yong Wu <yong.wu@mediatek.com>
Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
Link: https://lore.kernel.org/r/20200703044127.27438-3-chao.hao@mediatek.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
This driver shouldn't need anything architecture-specific (that isn't
under CONFIG_ARM protection already), and has already been accessible
from certain x86 configurations by virtue of the previously-cleaned-up
"ARM || IOMMU_DMA" dependency. Allow COMPILE_TEST for all architectures.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1fe2006aa98f008a2e689adba6e8c96e9197f903.1593791968.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Wacky COMPILE_TEST dependencies based on who used to define
dev_archdata.iommu can go.
Dependencies on ARM or ARM64 already implied by the ARCH_* platform
selection can go.
The entire IOMMU_SUPPORT menu already depends on MMU, so those can go.
IOMMU_DMA is for the architecture's DMA API implementation to choose,
and its interface to IOMMU drivers is properly stubbed out if disabled,
so dependencies on or selections of that can go (AMD_IOMMU is the
current exception since the x86 drivers have to provide their own entire
dma_map_ops implementation).
Since commit ed6ccf10f24b ("dma-mapping: properly stub out the DMA API
for !CONFIG_HAS_DMA"), drivers which simply use the dma-mapping API
should not need to depend on HAS_DMA, so those can go.
And a long-dead option for code removed from the MSM driver 4 years ago
can also go.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/7fb9c74dc6bd12a4619ca44c92408e91352f1be0.1593791968.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
When CONFIG_OF=n, of_match_device() gets preprocessed out of existence,
leaving qcom_smmu_client_of_match unused. Mark it as possibly unused to
keep the compiler from warning in that case.
Fixes: 0e764a01015d ("iommu/arm-smmu: Allow client devices to select direct mapping")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200604203905.31964-1-jcrouse@codeaurora.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
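For reference, a self-contained example of the attribute behind __maybe_unused; the struct and table contents are made up stand-ins:

  #include <stdio.h>

  /* The kernel's __maybe_unused expands to this GCC/Clang attribute; with
   * it, a table that all of its users have been compiled out of (e.g. when
   * CONFIG_OF=n) no longer triggers an unused-variable warning. */
  #define __maybe_unused __attribute__((unused))

  struct of_device_id { const char *compatible; };

  static const struct of_device_id client_of_match[] __maybe_unused = {
      { .compatible = "qcom,adreno" },
      { /* sentinel */ },
  };

  int main(void)
  {
      /* Nothing references client_of_match, yet the compiler stays quiet. */
      printf("ok\n");
      return 0;
  }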
|
|
In pci_disable_sriov(), i.e.,
# echo 0 > /sys/class/net/enp11s0f1np1/device/sriov_numvfs
  iommu_release_device
    iommu_group_remove_device
      arm_smmu_domain_free
        kfree(smmu_domain)
Later,
  iommu_release_device
    arm_smmu_release_device
      arm_smmu_detach_dev
        spin_lock_irqsave(&smmu_domain->devices_lock,
would trigger a use-after-free. Fix it by calling
arm_smmu_release_device() before iommu_group_remove_device().
BUG: KASAN: use-after-free in __lock_acquire+0x3458/0x4440
__lock_acquire at kernel/locking/lockdep.c:4250
Read of size 8 at addr ffff0089df1a6f68 by task bash/3356
CPU: 5 PID: 3356 Comm: bash Not tainted 5.8.0-rc3-next-20200630 #2
Hardware name: HPE Apollo 70 /C01_APACHE_MB , BIOS L50_5.13_1.11 06/18/2019
Call trace:
dump_backtrace+0x0/0x398
show_stack+0x14/0x20
dump_stack+0x140/0x1b8
print_address_description.isra.12+0x54/0x4a8
kasan_report+0x134/0x1b8
__asan_report_load8_noabort+0x2c/0x50
__lock_acquire+0x3458/0x4440
lock_acquire+0x204/0xf10
_raw_spin_lock_irqsave+0xf8/0x180
arm_smmu_detach_dev+0xd8/0x4a0
arm_smmu_detach_dev at drivers/iommu/arm-smmu-v3.c:2776
arm_smmu_release_device+0xb4/0x1c8
arm_smmu_disable_pasid at drivers/iommu/arm-smmu-v3.c:2754
(inlined by) arm_smmu_release_device at drivers/iommu/arm-smmu-v3.c:3000
iommu_release_device+0xc0/0x178
iommu_release_device at drivers/iommu/iommu.c:302
iommu_bus_notifier+0x118/0x160
notifier_call_chain+0xa4/0x128
__blocking_notifier_call_chain+0x70/0xa8
blocking_notifier_call_chain+0x14/0x20
device_del+0x618/0xa00
pci_remove_bus_device+0x108/0x2d8
pci_stop_and_remove_bus_device+0x1c/0x28
pci_iov_remove_virtfn+0x228/0x368
sriov_disable+0x8c/0x348
pci_disable_sriov+0x5c/0x70
mlx5_core_sriov_configure+0xd8/0x260 [mlx5_core]
sriov_numvfs_store+0x240/0x318
dev_attr_store+0x38/0x68
sysfs_kf_write+0xdc/0x128
kernfs_fop_write+0x23c/0x448
__vfs_write+0x54/0xe8
vfs_write+0x124/0x3f0
ksys_write+0xe8/0x1b8
__arm64_sys_write+0x68/0x98
do_el0_svc+0x124/0x220
el0_sync_handler+0x260/0x408
el0_sync+0x140/0x180
Allocated by task 3356:
save_stack+0x24/0x50
__kasan_kmalloc.isra.13+0xc4/0xe0
kasan_kmalloc+0xc/0x18
kmem_cache_alloc_trace+0x1ec/0x318
arm_smmu_domain_alloc+0x54/0x148
iommu_group_alloc_default_domain+0xc0/0x440
iommu_probe_device+0x1c0/0x308
iort_iommu_configure+0x434/0x518
acpi_dma_configure+0xf0/0x128
pci_dma_configure+0x114/0x160
really_probe+0x124/0x6d8
driver_probe_device+0xc4/0x180
__device_attach_driver+0x184/0x1e8
bus_for_each_drv+0x114/0x1a0
__device_attach+0x19c/0x2a8
device_attach+0x10/0x18
pci_bus_add_device+0x70/0xf8
pci_iov_add_virtfn+0x7b4/0xb40
sriov_enable+0x5c8/0xc30
pci_enable_sriov+0x64/0x80
mlx5_core_sriov_configure+0x58/0x260 [mlx5_core]
sriov_numvfs_store+0x1c0/0x318
dev_attr_store+0x38/0x68
sysfs_kf_write+0xdc/0x128
kernfs_fop_write+0x23c/0x448
__vfs_write+0x54/0xe8
vfs_write+0x124/0x3f0
ksys_write+0xe8/0x1b8
__arm64_sys_write+0x68/0x98
do_el0_svc+0x124/0x220
el0_sync_handler+0x260/0x408
el0_sync+0x140/0x180
Freed by task 3356:
save_stack+0x24/0x50
__kasan_slab_free+0x124/0x198
kasan_slab_free+0x10/0x18
slab_free_freelist_hook+0x110/0x298
kfree+0x128/0x668
arm_smmu_domain_free+0xf4/0x1a0
iommu_group_release+0xec/0x160
kobject_put+0xf4/0x238
kobject_del+0x110/0x190
kobject_put+0x1e4/0x238
iommu_group_remove_device+0x394/0x938
iommu_release_device+0x9c/0x178
iommu_release_device at drivers/iommu/iommu.c:300
iommu_bus_notifier+0x118/0x160
notifier_call_chain+0xa4/0x128
__blocking_notifier_call_chain+0x70/0xa8
blocking_notifier_call_chain+0x14/0x20
device_del+0x618/0xa00
pci_remove_bus_device+0x108/0x2d8
pci_stop_and_remove_bus_device+0x1c/0x28
pci_iov_remove_virtfn+0x228/0x368
sriov_disable+0x8c/0x348
pci_disable_sriov+0x5c/0x70
mlx5_core_sriov_configure+0xd8/0x260 [mlx5_core]
sriov_numvfs_store+0x240/0x318
dev_attr_store+0x38/0x68
sysfs_kf_write+0xdc/0x128
kernfs_fop_write+0x23c/0x448
__vfs_write+0x54/0xe8
vfs_write+0x124/0x3f0
ksys_write+0xe8/0x1b8
__arm64_sys_write+0x68/0x98
do_el0_svc+0x124/0x220
el0_sync_handler+0x260/0x408
el0_sync+0x140/0x180
The buggy address belongs to the object at ffff0089df1a6e00
which belongs to the cache kmalloc-512 of size 512
The buggy address is located 360 bytes inside of
512-byte region [ffff0089df1a6e00, ffff0089df1a7000)
The buggy address belongs to the page:
page:ffffffe02257c680 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff0089df1a1400
flags: 0x7ffff800000200(slab)
raw: 007ffff800000200 ffffffe02246b8c8 ffffffe02257ff88 ffff000000320680
raw: ffff0089df1a1400 00000000002a000e 00000001ffffffff ffff0089df1a5001
page dumped because: kasan: bad access detected
page->mem_cgroup:ffff0089df1a5001
Memory state around the buggy address:
ffff0089df1a6e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff0089df1a6e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff0089df1a6f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff0089df1a6f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff0089df1a7000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
Fixes: a6a4c7e2c5b8 ("iommu: Add probe_device() and release_device() call-backs")
Signed-off-by: Qian Cai <cai@lca.pw>
Link: https://lore.kernel.org/r/20200704001003.2303-1-cai@lca.pw
Signed-off-by: Joerg Roedel <jroedel@suse.de>
|
|
Use the qcom implementation for IOMMU hardware on sm8150 and sm8250 SoCs.
Signed-off-by: Jonathan Marek <jonathan@marek.ca>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Link: https://lore.kernel.org/r/20200609194030.17756-3-jonathan@marek.ca
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The comment about implementation and integration quirks being
mutually-exclusive is out of date, and in fact the code is already
structured for the case it anticipates, so document that properly.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1e742177e084621f3454fbaf768325a6c215656a.1592994291.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|