Age | Commit message | Author
2024-11-12Revert "mmc: dw_mmc: Fix IDMAC operation with pages bigger than 4K"Aurelien Jarno
The commit 8396c793ffdf ("mmc: dw_mmc: Fix IDMAC operation with pages bigger than 4K") increased the max_req_size, even for 4K pages, causing various issues:
- Panic booting the kernel/rootfs from an SD card on Rockchip RK3566
- Panic booting the kernel/rootfs from an SD card on StarFive JH7100
- "swiotlb buffer is full" and data corruption on StarFive JH7110
At this stage no fix has been found, so it's probably better to just revert the change.
This reverts commit 8396c793ffdf28bb8aee7cfe0891080f8cab7890.
Cc: stable@vger.kernel.org
Cc: Sam Protsenko <semen.protsenko@linaro.org>
Fixes: 8396c793ffdf ("mmc: dw_mmc: Fix IDMAC operation with pages bigger than 4K")
Closes: https://lore.kernel.org/linux-mmc/614692b4-1dbe-31b8-a34d-cb6db1909bb7@w6rz.net/
Closes: https://lore.kernel.org/linux-mmc/CAC8uq=Ppnmv98mpa1CrWLawWoPnu5abtU69v-=G-P7ysATQ2Pw@mail.gmail.com/
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-ID: <20241110114700.622372-1-aurelien@aurel32.net>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2024-11-12mmc: pwrseq_simple: Handle !RESET_CONTROLLER properlyStefan Wahren
The recent introduction of reset control in pwrseq_simple introduced a regression for platforms without RESET_CONTROLLER support, because devm_reset_control_get_optional_shared() would return NULL and make all resets no-ops. Instead of enforcing this dependency, rely on this behavior to determine reset support. As a benefit we can get rid of the use_reset flag. Fixes: 73bf4b7381f7 ("mmc: pwrseq_simple: add support for one reset control") Signed-off-by: Stefan Wahren <wahrenst@gmx.net> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Message-ID: <20241108130647.8281-1-wahrenst@gmx.net> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
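For context, a minimal sketch of the pattern this relies on (driver and field names here are illustrative, not the pwrseq_simple code): the optional getter returns NULL when no reset is described or when RESET_CONTROLLER is disabled, and the reset_control_*() calls are no-ops for a NULL control, so no extra use_reset flag is needed.

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/reset.h>

    struct my_pwrseq {
        struct reset_control *reset;
    };

    static int my_pwrseq_probe(struct device *dev, struct my_pwrseq *p)
    {
        /* NULL (not an error) if the reset is absent or !RESET_CONTROLLER */
        p->reset = devm_reset_control_get_optional_shared(dev, NULL);
        if (IS_ERR(p->reset))
            return PTR_ERR(p->reset);
        return 0;
    }

    static void my_pwrseq_power_on(struct my_pwrseq *p)
    {
        reset_control_assert(p->reset);     /* no-op when p->reset is NULL */
        /* ... toggle other power-sequencing resources ... */
        reset_control_deassert(p->reset);   /* ditto */
    }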
2024-11-12mmc: mtk-sd: Fix MMC_CAP2_CRYPTO flag settingAndy-ld Lu
Currently, the MMC_CAP2_CRYPTO flag is set by default for eMMC hosts. However, this flag should not be set for hosts that do not support inline encryption. The 'crypto' clock, as described in the documentation, is used for data encryption and decryption. Therefore, only hosts that are configured with this 'crypto' clock should have the MMC_CAP2_CRYPTO flag set. Fixes: 7b438d0377fb ("mmc: mtk-sd: add Inline Crypto Engine clock control") Fixes: ed299eda8fbb ("mmc: mtk-sd: fix devm_clk_get_optional usage") Signed-off-by: Andy-ld Lu <andy-ld.lu@mediatek.com> Cc: stable@vger.kernel.org Message-ID: <20241111085039.26527-1-andy-ld.lu@mediatek.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
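As a hedged sketch of the idea (function and variable names are illustrative, not the mtk-sd code): advertise MMC_CAP2_CRYPTO only when the optional 'crypto' clock is actually present.

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/mmc/host.h>

    static int my_msdc_init_crypto(struct device *dev, struct mmc_host *mmc)
    {
        struct clk *crypto_clk;

        /* Returns NULL (not an error) when no "crypto" clock is described */
        crypto_clk = devm_clk_get_optional(dev, "crypto");
        if (IS_ERR(crypto_clk))
            return PTR_ERR(crypto_clk);

        /* Only hosts wired to the inline-crypto clock can do encryption */
        if (crypto_clk)
            mmc->caps2 |= MMC_CAP2_CRYPTO;

        return 0;
    }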
2024-11-12Revert "wifi: iwlegacy: do not skip frames with bad FCS"Kalle Valo
This reverts commit 02b682d54598f61cbb7dbb14d98ec1801112b878. Alf reports that this commit causes the connection to eventually die on iwl4965. The reason is that rx_status.flag is zeroed after RX_FLAG_FAILED_FCS_CRC is set and mac80211 doesn't know the received frame is corrupted. Fixes: 02b682d54598 ("wifi: iwlegacy: do not skip frames with bad FCS") Reported-by: Alf Marius <post@alfmarius.net> Closes: https://lore.kernel.org/r/60f752e8-787e-44a8-92ae-48bdfc9b43e7@app.fastmail.com/ Signed-off-by: Kalle Valo <kvalo@kernel.org> Link: https://patch.msgid.link/20241112142419.1023743-1-kvalo@kernel.org
2024-11-12selftests: hugetlb_dio: fixup check for initial conditions to skip in the startDonet Tom
This test verifies that a hugepage, used as a user buffer for DIO operations, is correctly freed upon unmapping. To test this, we read the count of free hugepages before and after the mmap, DIO, and munmap operations, then check if the free hugepage count is the same. Reading free hugepages before the test was removed by commit 0268d4579901 ('selftests: hugetlb_dio: check for initial conditions to skip at the start'), causing the test to always fail. This patch adds back reading the free hugepages before starting the test. With this patch, the tests are now passing.

Test results without this patch:
./tools/testing/selftests/mm/hugetlb_dio
TAP version 13
1..4
# No. Free pages before allocation : 0
# No. Free pages after munmap : 100
not ok 1 : Huge pages not freed!
# No. Free pages before allocation : 0
# No. Free pages after munmap : 100
not ok 2 : Huge pages not freed!
# No. Free pages before allocation : 0
# No. Free pages after munmap : 100
not ok 3 : Huge pages not freed!
# No. Free pages before allocation : 0
# No. Free pages after munmap : 100
not ok 4 : Huge pages not freed!
# Totals: pass:0 fail:4 xfail:0 xpass:0 skip:0 error:0

Test results with this patch:
/tools/testing/selftests/mm/hugetlb_dio
TAP version 13
1..4
# No. Free pages before allocation : 100
# No. Free pages after munmap : 100
ok 1 : Huge pages freed successfully !
# No. Free pages before allocation : 100
# No. Free pages after munmap : 100
ok 2 : Huge pages freed successfully !
# No. Free pages before allocation : 100
# No. Free pages after munmap : 100
ok 3 : Huge pages freed successfully !
# No. Free pages before allocation : 100
# No. Free pages after munmap : 100
ok 4 : Huge pages freed successfully !
# Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0

Link: https://lkml.kernel.org/r/20241110064903.23626-1-donettom@linux.ibm.com
Fixes: 0268d4579901 ("selftests: hugetlb_dio: check for initial conditions to skip in the start")
Signed-off-by: Donet Tom <donettom@linux.ibm.com>
Cc: Muhammad Usama Anjum <usama.anjum@collabora.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
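For illustration, a minimal user-space sketch of the check the test depends on (the helper below is hypothetical, not the selftest's actual code):

    #include <stdio.h>

    /* Hypothetical helper: parse HugePages_Free from /proc/meminfo. */
    static long free_hugepages(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long val = -1;

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f))
            if (sscanf(line, "HugePages_Free: %ld", &val) == 1)
                break;
        fclose(f);
        return val;
    }

    int main(void)
    {
        long before = free_hugepages();  /* must be read before mmap/DIO */

        /* ... mmap a hugepage buffer, do O_DIRECT I/O, munmap ... */

        long after = free_hugepages();

        printf("%s: before=%ld after=%ld\n",
               before == after ? "ok" : "not ok", before, after);
        return before == after ? 0 : 1;
    }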
2024-11-12mm/thp: fix deferred split queue not partially_mapped: fixHugh Dickins
Though even more elusive than before, list_del corruption has still been seen on THP's deferred split queue. The idea in commit e66f3185fa04 was right, but its implementation wrong. The context omitted an important comment just before the critical test: "split_folio() removes folio from list on success." In ignoring that comment, when a THP split succeeded, the code went on to release the preceding safe folio, preserving instead an irrelevant (formerly head) folio: which gives no safety because it's not on the list. Fix the logic. Link: https://lkml.kernel.org/r/3c995a30-31ce-0998-1b9f-3a2cb9354c91@google.com Fixes: e66f3185fa04 ("mm/thp: fix deferred split queue not partially_mapped") Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Usama Arif <usamaarif642@gmail.com> Reviewed-by: Zi Yan <ziy@nvidia.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Chris Li <chrisl@kernel.org> Cc: David Hildenbrand <david@redhat.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Wei Yang <richard.weiyang@gmail.com> Cc: Yang Shi <shy828301@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-12mm/gup: avoid an unnecessary allocation call for FOLL_LONGTERM casesJohn Hubbard
commit 53ba78de064b ("mm/gup: introduce check_and_migrate_movable_folios()") created a new constraint on the pin_user_pages*() API family: a potentially large internal allocation must now occur, for FOLL_LONGTERM cases. A user-visible consequence has now appeared: user space can no longer pin more than 2GB of memory on x86_64. That's because, on a 4KB PAGE_SIZE system, when user space tries to (indirectly, via a device driver that calls pin_user_pages()) pin 2GB, this requires an allocation of a folio pointers array of MAX_PAGE_ORDER size, which is the limit for kmalloc(). In addition to the directly visible effect described above, there is also the problem of adding an unnecessary allocation. The **pages array argument has already been allocated, and there is no need for a redundant **folios array allocation in this case. Fix this by avoiding the new allocation entirely. This is done by referring to either the original page[i] within **pages, or to the associated folio. Thanks to David Hildenbrand for suggesting this approach and for providing the initial implementation (which I've tested and adjusted slightly) as well. [jhubbard@nvidia.com: whitespace tweak, per David] Link: https://lkml.kernel.org/r/131cf9c8-ebc0-4cbb-b722-22fa8527bf3c@nvidia.com [jhubbard@nvidia.com: bypass pofs_get_folio(), per Oscar] Link: https://lkml.kernel.org/r/c1587c7f-9155-45be-bd62-1e36c0dd6923@nvidia.com Link: https://lkml.kernel.org/r/20241105032944.141488-2-jhubbard@nvidia.com Fixes: 53ba78de064b ("mm/gup: introduce check_and_migrate_movable_folios()") Signed-off-by: John Hubbard <jhubbard@nvidia.com> Suggested-by: David Hildenbrand <david@redhat.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Oscar Salvador <osalvador@suse.de> Cc: Vivek Kasireddy <vivek.kasireddy@intel.com> Cc: Dave Airlie <airlied@redhat.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Christoph Hellwig <hch@infradead.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Peter Xu <peterx@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Dongwon Kim <dongwon.kim@intel.com> Cc: Hugh Dickins <hughd@google.com> Cc: Junxiao Chang <junxiao.chang@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
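A schematic of the approach (struct and helper names here are hypothetical, not necessarily those used in mm/gup.c): keep a small descriptor that points at either the caller-provided pages array or a folios array, and derive the folio on the fly in the pages case instead of allocating a second array.

    #include <linux/mm.h>

    /* Hypothetical "pages or folios" descriptor */
    struct pages_or_folios {
        union {
            struct page **pages;
            struct folio **folios;
        };
        bool has_folios;
        long nr_entries;
    };

    static struct folio *pofs_entry_folio(struct pages_or_folios *pofs, long i)
    {
        if (pofs->has_folios)
            return pofs->folios[i];
        /* No second allocation: resolve the folio from the pinned page */
        return page_folio(pofs->pages[i]);
    }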
2024-11-12iommu/arm-smmu-v3: Support IOMMU_HWPT_INVALIDATE using a VIOMMU objectNicolin Chen
Implement the vIOMMU's cache_invalidate op for user space to invalidate the IOTLB entries, Device ATS and CD entries that are cached by hardware. Add struct iommu_viommu_arm_smmuv3_invalidate defining invalidation entries that are simply in the native format of a 128-bit TLBI command. Scan those commands against the permitted command list and fix their VMID/SID fields to match what is stored in the vIOMMU. Link: https://patch.msgid.link/r/12-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com Co-developed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Co-developed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommu/arm-smmu-v3: Allow ATS for IOMMU_DOMAIN_NESTEDJason Gunthorpe
The EATS flag needs to flow through the vSTE and into the pSTE, and ensure physical ATS is enabled on the PCI device. The physical ATS state must match the VM's idea of EATS as we rely on the VM to issue the ATS invalidation commands. Thus ATS must remain off at the device until EATS on a nesting domain turns it on. Attaching a nesting domain is the point where the invalidation responsibility transfers to userspace. Update the ATS logic to track EATS for nesting domains and flush the ATC whenever the S2 nesting parent changes. Link: https://patch.msgid.link/r/11-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommu/arm-smmu-v3: Use S2FWB for NESTED domainsJason Gunthorpe
Force Write Back (FWB) changes how the S2 IOPTE's MemAttr field works. When S2FWB is supported and enabled the IOPTE will force cacheable access to IOMMU_CACHE memory when nesting with an S1 and deny cacheable access when !IOMMU_CACHE. When using a single stage of translation, a simple S2 domain, it doesn't change things for PCI devices as it is just a different encoding for the existing mapping of the IOMMU protection flags to cacheability attributes. For non-PCI it also changes the combining rules when incoming transactions have inconsistent attributes. However, when used with a nested S1, FWB has the effect of preventing the guest from choosing a MemAttr in its S1 that would cause ordinary DMA to bypass the cache. Consistent with KVM we wish to deny the guest the ability to become incoherent with cached memory the hypervisor believes is cacheable so we don't have to flush it. Allow NESTED domains to be created if the SMMU has S2FWB support and use S2FWB for NESTING_PARENTS. This is an additional option to CANWBS. Link: https://patch.msgid.link/r/10-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Donald Dutile <ddutile@redhat.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTEDJason Gunthorpe
For SMMUv3 an IOMMU_DOMAIN_NESTED is composed of an S2 iommu_domain acting as the parent and a user provided STE fragment that defines the CD table and related data with addresses translated by the S2 iommu_domain. The kernel only permits userspace to control certain allowed bits of the STE that are safe for user/guest control. IOTLB maintenance is a bit subtle here: the S1 implicitly includes the S2 translation, but there is no way of knowing which S1 entries refer to a range of S2. For the IOTLB we follow ARM's guidance and issue a CMDQ_OP_TLBI_NH_ALL to flush all ASIDs from the VMID after flushing the S2 on any change to the S2. The IOMMU_DOMAIN_NESTED can only be created from inside a VIOMMU as the invalidation path relies on the VIOMMU to translate the virtual stream IDs used in the invalidation commands for the CD table and ATS. Link: https://patch.msgid.link/r/9-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Reviewed-by: Donald Dutile <ddutile@redhat.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommu/arm-smmu-v3: Support IOMMU_VIOMMU_ALLOCNicolin Chen
Add a new driver-type for ARM SMMUv3 to enum iommu_viommu_type. Implement an arm_vsmmu_alloc(). As an initial step, copy the VMID from s2_parent. A follow-up series is required to give the VIOMMU object its own VMID that will be used in all nesting configurations. Link: https://patch.msgid.link/r/8-v4-9e99b76f3518+3a8-smmuv3_nesting_jgg@nvidia.com Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12Merge branch 'iommufd/arm-smmuv3-nested' of iommu/linux into iommufd for-nextJason Gunthorpe
Common SMMUv3 patches for the following patches adding nesting, shared branch with the iommu tree.

* 'iommufd/arm-smmuv3-nested' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/iommu/linux:
  iommu/arm-smmu-v3: Expose the arm_smmu_attach interface
  iommu/arm-smmu-v3: Implement IOMMU_HWPT_ALLOC_NEST_PARENT
  iommu/arm-smmu-v3: Support IOMMU_GET_HW_INFO via struct arm_smmu_hw_info
  iommu/arm-smmu-v3: Report IOMMU_CAP_ENFORCE_CACHE_COHERENCY for CANWBS
  ACPI/IORT: Support CANWBS memory access flag
  ACPICA: IORT: Update for revision E.f
  vfio: Remove VFIO_TYPE1_NESTING_IOMMU
  ...

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12mmc: mtk-sd: Fix error handle of probe functionAndy-ld Lu
In the probe function, it goes to the 'release_mem' label and returns after some procedure fails. But if the clocks (partial or all) have been enabled previously, they will not be disabled in msdc_runtime_suspend(), since runtime PM is not yet enabled in this case. That leaves the mmc-related clocks always on during system suspend and blocks the suspend flow. The log below is from an SDCard issue on an MT8196 chromebook: probe failed with -ETIMEDOUT while polling for clock stable in msdc_ungate_clock(), but the already enabled clocks could not be disabled:

[ 129.059253] clk_chk_dev_pm_suspend()
[ 129.350119] suspend warning: msdcpll is on
[ 129.354494] [ck_msdc30_1_sel : enabled, 1, 1, 191999939, ck_msdcpll_d2]
[ 129.362787] [ck_msdcpll_d2 : enabled, 1, 1, 191999939, msdcpll]
[ 129.371041] [ck_msdc30_1_ck : enabled, 1, 1, 191999939, ck_msdc30_1_sel]
[ 129.379295] [msdcpll : enabled, 1, 1, 383999878, clk26m]

Add a new 'release_clk' label and reorder the error handling functions to make sure the clocks are disabled after a probe failure.
Fixes: ffaea6ebfe9c ("mmc: mtk-sd: Use readl_poll_timeout instead of open-coded polling")
Fixes: 7a2fa8eed936 ("mmc: mtk-sd: use devm_mmc_alloc_host")
Signed-off-by: Andy-ld Lu <andy-ld.lu@mediatek.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Cc: stable@vger.kernel.org
Message-ID: <20241107121215.5201-1-andy-ld.lu@mediatek.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
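Schematically, with illustrative names rather than the actual msdc symbols, the reordered error path looks like this:

    #include <linux/platform_device.h>

    struct my_msdc_host;                                  /* illustrative */
    static struct my_msdc_host *my_msdc_alloc(struct platform_device *pdev);
    static int my_msdc_ungate_clock(struct my_msdc_host *host);
    static void my_msdc_gate_clock(struct my_msdc_host *host);

    static int my_msdc_probe(struct platform_device *pdev)
    {
        struct my_msdc_host *host;
        int ret;

        host = my_msdc_alloc(pdev);   /* devm allocation, ioremap, clk_get */
        if (!host)
            return -ENOMEM;

        ret = my_msdc_ungate_clock(host);   /* enables src/h/bus clocks */
        if (ret)
            goto release_clk;               /* clocks may be partially on */

        /* ... request irq, pm_runtime_enable(), mmc_add_host() ... */
        return 0;

    release_clk:
        /*
         * Runtime PM was never enabled, so msdc_runtime_suspend() will not
         * run; gate whatever got enabled before releasing other resources.
         */
        my_msdc_gate_clock(host);
        /* ... the old 'release_mem' cleanup (non-devm resources) follows ... */
        return ret;
    }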
2024-11-12mmc: sunxi-mmc: Fix A100 compatible descriptionAndre Przywara
It turns out that the Allwinner A100/A133 SoC only supports 8K DMA blocks (13 bits wide), for both the SD/SDIO and eMMC instances. And while this alone would make a trivial fix, the H616 falls back to the A100 compatible string, so we have to now match the H616 compatible string explicitly against the description advertising 64K DMA blocks. As the A100 is now compatible with the D1 description, let the A100 compatible string point to that block instead, and introduce an explicit match against the H616 string, pointing to the old description. Also remove the redundant setting of clk_delays to NULL on the way. Fixes: 3536b82e5853 ("mmc: sunxi: add support for A100 mmc controller") Cc: stable@vger.kernel.org Signed-off-by: Andre Przywara <andre.przywara@arm.com> Tested-by: Parthiban Nallathambi <parthiban@linumiz.com> Reviewed-by: Chen-Yu Tsai <wens@csie.org> Message-ID: <20241107014240.24669-1-andre.przywara@arm.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
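The shape of the change, as a sketch (the cfg structure and its field are simplified, and the compatible strings are assumptions based on the SoC names in the commit text):

    #include <linux/mod_devicetable.h>
    #include <linux/types.h>

    struct my_mmc_cfg {
        u32 idma_des_size_bits;  /* max DMA block size: 13 -> 8K, 16 -> 64K */
        /* ... other per-SoC properties ... */
    };

    static const struct my_mmc_cfg sun20i_d1_cfg = {
        .idma_des_size_bits = 13,   /* 8K blocks; also what the A100 supports */
    };

    static const struct my_mmc_cfg sun50i_a100_old_cfg = {
        .idma_des_size_bits = 16,   /* 64K blocks, kept only for the H616 */
    };

    static const struct of_device_id my_mmc_of_match[] = {
        /* A100/A133 only supports 8K DMA blocks: reuse the D1 description */
        { .compatible = "allwinner,sun50i-a100-mmc", .data = &sun20i_d1_cfg },
        /* H616 falls back to the A100 string, so match it explicitly */
        { .compatible = "allwinner,sun50i-h616-mmc", .data = &sun50i_a100_old_cfg },
        { /* sentinel */ }
    };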
2024-11-12mmc: core: Correction a warning caused by incorrect type in assignment for UHS-IIVictor Shih
There is a type issue in the assignment in the sd_uhs2_dev_init() that will generate a warning when building the kernel. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202411051248.wvjHSFNj-lkp@intel.com/ Signed-off-by: Ben Chuang <ben.chuang@genesyslogic.com.tw> Signed-off-by: Victor Shih <victor.shih@genesyslogic.com.tw> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Message-ID: <20241105102901.351429-1-victorshihgli@gmail.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2024-11-12Bluetooth: btintel: Direct exception event to bluetooth stackKiran K
Make the exception event part of the HCI traces, which helps with debugging.

snoop traces:
> HCI Event: Vendor (0xff) plen 79
    Vendor Prefix (0x8780)
    Intel Extended Telemetry (0x03)
    Unknown extended telemetry event type (0xde)
        01 01 de
    Unknown extended subevent 0x07
        01 01 de 07 01 de 06 1c ef be ad de ef be ad de ef be ad de ef be ad de ef be ad de ef be ad de ef be ad de 05 14 ef be ad de ef be ad de ef be ad de ef be ad de ef be ad de 43 10 ef be ad de ef be ad de ef be ad de ef be ad de

Fixes: af395330abed ("Bluetooth: btintel: Add Intel devcoredump support")
Signed-off-by: Kiran K <kiran.k@intel.com>
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2024-11-12Bluetooth: hci_core: Fix calling mgmt_device_connectedLuiz Augusto von Dentz
Since 61a939c68ee0 ("Bluetooth: Queue incoming ACL data until BT_CONNECTED state is reached") there is no longer a need to call mgmt_device_connected, as ACL data will be queued until the BT_CONNECTED state is reached. Link: https://bugzilla.kernel.org/show_bug.cgi?id=219458 Link: https://github.com/bluez/bluez/issues/1014 Fixes: 333b4fd11e89 ("Bluetooth: L2CAP: Fix uaf in l2cap_connect") Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2024-11-12ARM: 9420/1: smp: Fix SMP for xip kernelsHarith G
Fix the physical address calculation of the following to get SMP working on XIP kernels:
- secondary_data needed for secondary cpu bootup.
- secondary_startup address passed through psci.
- identity mapped code region needed for enabling the MMU for secondary cpus.
Signed-off-by: Harith George <harith.g@alifsemi.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2024-11-12ARM: 9419/1: mm: Fix kernel memory mapping for xip kernelsHarith G
The patchset introducing the kernel_sec_start/end variables to separate the kernel/lowmem memory mappings broke the mapping of the kernel memory for XIP kernels. For XIP kernels, the kernel_sec_start/end variables are in the RO area before the MMU is switched on, so they cannot be set early in boot in head.S. Fix this by setting them after the MMU is switched on. XIP kernels need two different mappings for kernel text (starting at CONFIG_XIP_PHYS_ADDR) and data (starting at CONFIG_PHYS_OFFSET). Also, move the kernel code mapping from devicemaps_init() to map_kernel(). Fixes: a91da5457085 ("ARM: 9089/1: Define kernel physical section start and end") Signed-off-by: Harith George <harith.g@alifsemi.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2024-11-12Documentation: userspace-api: iommufd: Update vDEVICENicolin Chen
With the introduction of the new object and its infrastructure, update the doc and the vIOMMU graph to reflect that. Link: https://patch.msgid.link/r/e1ff278b7163909b2641ae04ff364bb41d2a2a2e.1730836308.git.nicolinc@nvidia.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add vIOMMU coverage for IOMMU_HWPT_INVALIDATE ioctlNicolin Chen
Add a viommu_cache test function to cover vIOMMU invalidations using the updated IOMMU_HWPT_INVALIDATE ioctl, which now allows passing in a vIOMMU via its hwpt_id field. Link: https://patch.msgid.link/r/f317f902041f3d05deaee4ca3fdd8ef4b8297361.1730836308.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add IOMMU_TEST_OP_DEV_CHECK_CACHE test commandNicolin Chen
Similar to IOMMU_TEST_OP_MD_CHECK_IOTLB verifying a mock_domain's iotlb, IOMMU_TEST_OP_DEV_CHECK_CACHE will be used to verify a mock_dev's cache. Link: https://patch.msgid.link/r/cd4082079d75427bd67ed90c3c825e15b5720a5f.1730836308.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add mock_viommu_cache_invalidateNicolin Chen
Similar to the coverage of cache_invalidate_user for iotlb invalidation, add a device cache and a viommu_cache_invalidate function to test it out. Link: https://patch.msgid.link/r/a29c7c23d7cd143fb26ab68b3618e0957f485fdb.1730836308.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/viommu: Add iommufd_viommu_find_dev helperNicolin Chen
This avoids a bigger trouble of exposing struct iommufd_device and struct iommufd_vdevice in the public header. Link: https://patch.msgid.link/r/84fa7c624db4d4508067ccfdf42059533950180a.1730836308.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommu: Add iommu_copy_struct_from_full_user_array helperJason Gunthorpe
The iommu_copy_struct_from_user_array helper can be used to copy a single entry from a user array which might not be efficient if the array is big. Add a new iommu_copy_struct_from_full_user_array to copy the entire user array at once. Update the existing iommu_copy_struct_from_user_array kdoc accordingly. Link: https://patch.msgid.link/r/5cd773d9c26920c5807d232b21d415ea79172e49.1730836308.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
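A rough sketch of the generic "copy the whole user array at once" pattern (this is not the exact helper added here; names and error handling are illustrative):

    #include <linux/err.h>
    #include <linux/overflow.h>
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    /*
     * Copy @count entries of @entry_size bytes each from @user_array with a
     * single copy_from_user(), instead of one call per entry.
     */
    static void *copy_full_user_array(void __user *user_array, size_t count,
                                      size_t entry_size)
    {
        size_t bytes;
        void *data;

        if (check_mul_overflow(count, entry_size, &bytes))
            return ERR_PTR(-EOVERFLOW);

        data = kmalloc(bytes, GFP_KERNEL);
        if (!data)
            return ERR_PTR(-ENOMEM);

        if (copy_from_user(data, user_array, bytes)) {
            kfree(data);
            return ERR_PTR(-EFAULT);
        }
        return data;
    }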
2024-11-12iommufd: Allow hwpt_id to carry viommu_id for IOMMU_HWPT_INVALIDATENicolin Chen
With a vIOMMU object, user space can flush any IOMMU-related cache that can be directed via a vIOMMU object. It is similar to the IOMMU_HWPT_INVALIDATE uAPI, but can cover a wider range than the IOTLB, e.g. device/descriptor cache. Allow the hwpt_id of the iommu_hwpt_invalidate structure to carry a viommu_id, and reuse the IOMMU_HWPT_INVALIDATE uAPI for vIOMMU invalidations. Drivers can define different structures for vIOMMU invalidations v.s. HWPT ones. Since both the HWPT-based and vIOMMU-based invalidation pathways check their own cache invalidation op, remove the WARN_ON_ONCE in the allocator. Update the uAPI, kdoc, and selftest case accordingly. Link: https://patch.msgid.link/r/b411e2245e303b8a964f39f49453a5dff280968f.1730836308.git.nicolinc@nvidia.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommu/viommu: Add cache_invalidate to iommufd_viommu_opsNicolin Chen
This per-vIOMMU cache_invalidate op is like the cache_invalidate_user op in struct iommu_domain_ops, but wider, supporting device cache (e.g. PCI ATC invalidations). Link: https://patch.msgid.link/r/90138505850fa6b165135e78a87b4cc7022869a4.1730836308.git.nicolinc@nvidia.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add IOMMU_VDEVICE_ALLOC test coverageNicolin Chen
Add a vdevice_alloc op to the viommu mock_viommu_ops for the coverage of IOMMU_VIOMMU_TYPE_SELFTEST allocations. Then, add a vdevice_alloc TEST_F to cover the IOMMU_VDEVICE_ALLOC ioctl. Link: https://patch.msgid.link/r/4b9607e5b86726c8baa7b89bd48123fb44104a23.1730836308.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/viommu: Add IOMMUFD_OBJ_VDEVICE and IOMMU_VDEVICE_ALLOC ioctlNicolin Chen
Introduce a new IOMMUFD_OBJ_VDEVICE to represent a physical device (struct device) against a vIOMMU (struct iommufd_viommu) object in a VM. This vDEVICE object (and its structure) holds all the information and attributes in the VM regarding the device related to the vIOMMU. As an initial patch, add a per-vIOMMU virtual ID. This can be:
- Virtual StreamID on a nested ARM SMMUv3, an index to a Stream Table
- Virtual DeviceID on a nested AMD IOMMU, an index to a Device Table
- Virtual RID on a nested Intel VT-D IOMMU, an index to a Context Table
Potentially, this vDEVICE structure would hold some vData for Confidential Compute Architecture (CCA). Use this virtual ID to index a "vdevs" xarray that belongs to a vIOMMU object. Add a new ioctl for vDEVICE allocations. Since a vDEVICE is a connection of a device object and an iommufd_viommu object, take two refcounts in the ioctl handler.
Link: https://patch.msgid.link/r/cda8fd2263166e61b8191a3b3207e0d2b08545bf.1730836308.git.nicolinc@nvidia.com
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
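Schematically (structure and function names are illustrative, not the exact iommufd code), the per-vIOMMU lookup is keyed by the guest-chosen virtual ID:

    #include <linux/xarray.h>

    struct my_viommu {
        struct xarray vdevs;    /* virtual ID -> vDEVICE, ID chosen by the VMM */
    };

    static int my_viommu_add_vdev(struct my_viommu *viommu,
                                  unsigned long virt_id, void *vdev)
    {
        /* -EBUSY if the guest tries to reuse an already-bound virtual ID */
        return xa_insert(&viommu->vdevs, virt_id, vdev, GFP_KERNEL);
    }

    static void *my_viommu_find_vdev(struct my_viommu *viommu,
                                     unsigned long virt_id)
    {
        return xa_load(&viommu->vdevs, virt_id);
    }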
2024-11-12Documentation: userspace-api: iommufd: Update vIOMMUNicolin Chen
With the introduction of the new object and its infrastructure, update the doc to reflect that and add a new graph. Link: https://patch.msgid.link/r/7e4302064e0d02137c1b1e139342affc0485ed3f.1730836219.git.nicolinc@nvidia.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add IOMMU_VIOMMU_ALLOC test coverageNicolin Chen
Add a new iommufd_viommu FIXTURE and set it up with a vIOMMU object. Any new vIOMMU feature will be added as a TEST_F under that. Link: https://patch.msgid.link/r/abe267c9d004b29cb1712ceba2f378209d4b7e01.1730836219.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add IOMMU_VIOMMU_TYPE_SELFTESTNicolin Chen
Implement the viommu alloc/free functions to increase/reduce refcount of its dependent mock iommu device. User space can verify this loop via the IOMMU_VIOMMU_TYPE_SELFTEST. Link: https://patch.msgid.link/r/9d755a215a3007d4d8d1c2513846830332db62aa.1730836219.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add refcount to mock_iommu_deviceNicolin Chen
For an iommu_dev that can unplug (so far only this selftest does so), the viommu->iommu_dev pointer has no guarantee of its life cycle after it is copied from the idev->dev->iommu->iommu_dev. Track the user count of the iommu_dev. Postpone the exit routine using a completion, if refcount is unbalanced. The refcount inc/dec will be added in the following patch. Link: https://patch.msgid.link/r/33f28d64841b497eebef11b49a571e03103c5d24.1730836219.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
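The lifetime pattern described here, as a generic sketch with illustrative names: count users of the mock iommu device and let the exit path wait on a completion until the last user drops its reference.

    #include <linux/completion.h>
    #include <linux/refcount.h>

    struct my_mock_iommu {
        refcount_t users;
        struct completion complete;     /* signalled when users drops to 0 */
    };

    static void my_mock_iommu_init(struct my_mock_iommu *m)
    {
        refcount_set(&m->users, 1);     /* initial reference held by the module */
        init_completion(&m->complete);
    }

    static void my_mock_iommu_get(struct my_mock_iommu *m)
    {
        refcount_inc(&m->users);
    }

    static void my_mock_iommu_put(struct my_mock_iommu *m)
    {
        if (refcount_dec_and_test(&m->users))
            complete(&m->complete);
    }

    static void my_mock_iommu_exit(struct my_mock_iommu *m)
    {
        /* Drop the initial reference, then wait for any remaining users */
        my_mock_iommu_put(m);
        wait_for_completion(&m->complete);
    }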
2024-11-12iommufd/selftest: Prepare for mock_viommu_alloc_domain_nested()Nicolin Chen
A nested domain now can be allocated for a parent domain or for a vIOMMU object. Rework the existing allocators to prepare for the latter case. Link: https://patch.msgid.link/r/f62894ad8ccae28a8a616845947fe4b76135d79b.1730836219.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/selftest: Add container_of helpersNicolin Chen
Use these inline helpers to shorten those container_of lines. Note that one of them goes back and forth between iommu_domain and mock_iommu_domain, which isn't necessary. So drop its container_of. Link: https://patch.msgid.link/r/518ec64dae2e814eb29fd9f170f58a3aad56c81c.1730836219.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd: Allow pt_id to carry viommu_id for IOMMU_HWPT_ALLOCNicolin Chen
Now a vIOMMU holds a shareable nesting parent HWPT. So, it can act like that nesting parent HWPT to allocate a nested HWPT. Support that in the IOMMU_HWPT_ALLOC ioctl handler, and update its kdoc. Also, add an iommufd_viommu_alloc_hwpt_nested helper to allocate a nested HWPT for a vIOMMU object. Since a vIOMMU object holds the parent hwpt's refcount already, increase the refcount of the vIOMMU only. Link: https://patch.msgid.link/r/a0f24f32bfada8b448d17587adcaedeeb50a67ed.1730836219.git.nicolinc@nvidia.com Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd: Add alloc_domain_nested op to iommufd_viommu_opsNicolin Chen
Allow IOMMU driver to use a vIOMMU object that holds a nesting parent hwpt/domain to allocate a nested domain. Link: https://patch.msgid.link/r/2dcdb5e405dc0deb68230564530d989d285d959c.1730836219.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd/viommu: Add IOMMU_VIOMMU_ALLOC ioctlNicolin Chen
Add a new ioctl for user space to do a vIOMMU allocation. It must be based on a nesting parent HWPT, so take its refcount. IOMMU driver wanting to support vIOMMUs must define its IOMMU_VIOMMU_TYPE_ in the uAPI header and implement a viommu_alloc op in its iommu_ops. Link: https://patch.msgid.link/r/dc2b8ba9ac935007beff07c1761c31cd097ed780.1730836219.git.nicolinc@nvidia.com Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd: Verify object in iommufd_object_finalize/abort()Nicolin Chen
To support driver-allocated vIOMMU objects, the IOMMU driver is required to call the provided iommufd_viommu_alloc helper to embed the core struct. However, there is no guarantee that every driver will call it and allocate objects properly. Make the iommufd_object_finalize/abort functions more robust by verifying that the xarray slot indexed by the input obj->id holds an XA_ZERO_ENTRY, which is the reserved value stored by xa_alloc via iommufd_object_alloc. Link: https://patch.msgid.link/r/334bd4dde8e0a88eb30fa67eeef61827cdb546f9.1730836219.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
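A simplified sketch of the check being described (the real iommufd code differs in details): finalize only succeeds if the slot reserved for obj->id still holds the XA_ZERO_ENTRY placeholder left there at allocation time.

    #include <linux/xarray.h>

    static int my_object_finalize(struct xarray *objects, unsigned long id,
                                  void *obj)
    {
        XA_STATE(xas, objects, id);
        void *old;

        xa_lock(objects);
        old = xas_load(&xas);
        if (old == XA_ZERO_ENTRY)       /* slot was reserved at alloc time */
            xas_store(&xas, obj);
        xa_unlock(objects);

        /* A mis-allocated object (helper not used) won't own a reserved slot */
        return old == XA_ZERO_ENTRY ? 0 : -EINVAL;
    }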
2024-11-12iommufd: Introduce IOMMUFD_OBJ_VIOMMU and its related structNicolin Chen
Add a new IOMMUFD_OBJ_VIOMMU with an iommufd_viommu structure to represent a slice of physical IOMMU device passed to or shared with a user space VM. This slice, now a vIOMMU object, is a group of virtualization resources of a physical IOMMU, such as:
- Security namespace for guest owned ID, e.g. guest-controlled cache tags
- Non-device-affiliated event reporting, e.g. invalidation queue errors
- Access to a sharable nesting parent pagetable across physical IOMMUs
- Virtualization of various platforms IDs, e.g. RIDs and others
- Delivery of paravirtualized invalidation
- Direct assigned invalidation queues
- Direct assigned interrupts
Add a new viommu_alloc op in iommu_ops, for drivers to allocate their own vIOMMU structures. And this allocation also needs a free(), so add struct iommufd_viommu_ops. To simplify a vIOMMU allocation, provide an iommufd_viommu_alloc() helper. It's suggested that a driver should embed a core-level viommu structure in its driver-level viommu struct and call the iommufd_viommu_alloc() helper, meanwhile the driver can also implement a viommu ops:

    struct my_driver_viommu {
        struct iommufd_viommu core;
        /* driver-owned properties/features */
        ....
    };

    static const struct iommufd_viommu_ops my_driver_viommu_ops = {
        .free = my_driver_viommu_free,
        /* future ops for virtualization features */
        ....
    };

    static struct iommufd_viommu *my_driver_viommu_alloc(...)
    {
        struct my_driver_viommu *my_viommu =
                iommufd_viommu_alloc(ictx, my_driver_viommu, core,
                                     my_driver_viommu_ops);
        /* Init my_viommu and related HW feature */
        ....
        return &my_viommu->core;
    }

    static struct iommu_domain_ops my_driver_domain_ops = {
        ....
        .viommu_alloc = my_driver_viommu_alloc,
    };

Link: https://patch.msgid.link/r/64685e2b79dea0f1dc56f6ede04809b72d578935.1730836219.git.nicolinc@nvidia.com
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12iommufd: Move _iommufd_object_alloc helper to a sharable fileNicolin Chen
The following patch will add a new vIOMMU allocator that will require this _iommufd_object_alloc to be sharable with IOMMU drivers (and iommufd too). Add a new driver.c file that will be built with CONFIG_IOMMUFD_DRIVER_CORE selected by CONFIG_IOMMUFD, and keep CONFIG_IOMMUFD_DRIVER under that, remaining selectable for drivers to build the existing iova_bitmap.c file. Link: https://patch.msgid.link/r/2f4f6e116dc49ffb67ff6c5e8a7a8e789ab9e98e.1730836219.git.nicolinc@nvidia.com Suggested-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-11-12drivers: perf: Fix wrong put_cpu() placementAlexandre Ghiti
Unfortunately, the wrong patch version was merged which places the put_cpu() after enabling a static key, which is not safe as pointed by Will [1], so move put_cpu() before to avoid this. Fixes: 2840dadf0dde ("drivers: perf: Fix smp_processor_id() use in preemptible code") Reported-by: Atish Patra <atishp@rivosinc.com> Link: https://lore.kernel.org/all/20240827125335.GD4772@willie-the-truck/ [1] Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20241112113422.617954-1-alexghiti@rivosinc.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-11-12kselftest/arm64: Try harder to generate different keys during PAC testsMark Brown
We very intermittently see failures in the single_thread_different_keys PAC test. As noted in the comment in the test, the PAC field can be quite narrow, so there is a chance of collisions even with different keys: about 5% for 7 bit keys, with the potential for narrower keys. The test tries to avoid this by running repeatedly, but only tries 10 times, which even with a 5% chance of collisions isn't enough. Increase the number of times we attempt to look for collisions by a factor of 100. This also affects other tests which follow a similar pattern of running the test repeatedly and either don't care (like pac_instruction_not_nop) or potentially have the same issue (like exec_sign_all). The PAC tests are very fast, running in a second or two even in emulation, so the 100x increased cost is mildly irritating but not a huge issue. The bulk of the overhead is in the exec_sign_all test, which does a fork() and exec() per iteration. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241111-arm64-pac-test-collisions-v1-2-171875f37e44@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-11-12kselftest/arm64: Don't leak pipe fds in pac.exec_sign_all()Mark Brown
The PAC exec_sign_all() test spawns some child processes, creating pipes to be stdin and stdout for the child. It cleans up most of the file descriptors that are created as part of this but neglects to clean up the parent end of the child stdin and stdout. Add the missing close() calls. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20241111-arm64-pac-test-collisions-v1-1-171875f37e44@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
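A generic POSIX sketch of the pattern (not the test's actual code): after wiring the pipes into the child and using them, the parent must close its own ends as well.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Spawn a child with piped stdin/stdout; return its pid. */
    static pid_t spawn_child(int *to_child, int *from_child, char *const argv[])
    {
        int in[2], out[2];
        pid_t pid;

        if (pipe(in) || pipe(out))
            return -1;

        pid = fork();
        if (pid == 0) {
            dup2(in[0], 0);
            dup2(out[1], 1);
            execv(argv[0], argv);
            _exit(127);
        }

        /* Parent keeps the write end of stdin and the read end of stdout... */
        *to_child = in[1];
        *from_child = out[0];
        /* ...and closes the child's ends right away. */
        close(in[0]);
        close(out[1]);
        return pid;
    }

    static int run_child_once(char *const argv[])
    {
        int to_child, from_child, status;
        pid_t pid = spawn_child(&to_child, &from_child, argv);

        if (pid < 0)
            return -1;
        /* ... write a test vector to to_child, read the result back ... */
        waitpid(pid, &status, 0);

        /* The missing cleanup: without these, every iteration leaks two fds. */
        close(to_child);
        close(from_child);
        return status;
    }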
2024-11-12platform: cznic: turris-omnia-mcu: Rename variable holding GPIO line namesMarek Behún
Rename the `omnia_mcu_gpio_templates` variable to `omnia_mcu_gpio_names`. The array contained templates for the names during the development of the driver, but the template prefix `gpio%u.` was dropped before the driver was merged, since this functionality was broken in gpiolib. Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-11-12platform: cznic: turris-omnia-mcu: Document the driver private data structureMarek Behún
Add more comprehensive documentation for the driver private data structure, `struct omnia_mcu`. Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-11-12firmware: turris-mox-rwtm: Document the driver private data structureMarek Behún
Add more comprehensive documentation for the driver private data structure, `struct mox_rwtm`. Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-11-12Merge tag 'ti-driver-soc-for-v6.13' of https://git.kernel.org/pub/scm/linux/kernel/git/ti/linux into soc/driversArnd Bergmann
TI SoC driver updates for v6.13

- knav_qmss_queue: Cleanups around request_irq params and redundant code.
- ti_sci: Power management ops in preparation for suspend/resume capability. Also includes dependency patch to export dev_pm_qos_read_value (acked by Rafael).

* tag 'ti-driver-soc-for-v6.13' of https://git.kernel.org/pub/scm/linux/kernel/git/ti/linux:
  firmware: ti_sci: Remove use of of_match_ptr() helper
  firmware: ti_sci: add CPU latency constraint management
  firmware: ti_sci: Introduce Power Management Ops
  firmware: ti_sci: Add system suspend and resume call
  firmware: ti_sci: Add support for querying the firmware caps
  PM: QoS: Export dev_pm_qos_read_value
  soc: ti: knav_qmss_queue: Drop redundant continue statement
  soc: ti: knav_qmss_queue: Use IRQF_NO_AUTOEN flag in request_irq()

Link: https://lore.kernel.org/r/20241106121708.rso5wvc7wbhfi6xk@maverick
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-11-12Merge tag 'reset-for-v6.13' of git://git.pengutronix.de/pza/linux into soc/driversArnd Bergmann
Reset controller updates for v6.13

* Split the Amlogic reset-meson driver into platform and auxiliary bus drivers. Add support for the reset controller in the G12 and SM1 audio clock controllers.
* Replace the list of boolean parameters to the internal reset_control_get functions with an enum reset_flags bitfield, to make the code more self-descriptive.
* Add devres helpers to request pre-deasserted (and automatically re-asserting during cleanup) reset controls. This allows reducing boilerplate in drivers that deassert resets for the lifetime of a device.
* Use the new auto-deasserting devres helpers in reset-uniphier-glue as an example.
* Add support for the LAN966x PCI device in drivers/misc, as a dependency for the following reset-microchip-sparx5 patches.
* Add support for being used on the LAN966x PCI device to the reset-microchip-sparx5 driver.

Commit 86f134941a4b ("MAINTAINERS: Add the Microchip LAN966x PCI driver entry") introduces a trivial merge conflict with commit 7280f01e79cc ("net: lan969x: add match data for lan969x") from the net-next tree [1].

[1] https://lore.kernel.org/all/20241101122505.3eacd183@canb.auug.org.au/

* tag 'reset-for-v6.13' of git://git.pengutronix.de/pza/linux: (21 commits)
  misc: lan966x_pci: Fix dtc warn 'Missing interrupt-parent'
  misc: lan966x_pci: Fix dtc warns 'missing or empty reg/ranges property'
  reset: mchp: sparx5: set the dev member of the reset controller
  reset: mchp: sparx5: Allow building as a module
  reset: mchp: sparx5: Add MCHP_LAN966X_PCI dependency
  reset: mchp: sparx5: Map cpu-syscon locally in case of LAN966x
  MAINTAINERS: Add the Microchip LAN966x PCI driver entry
  misc: Add support for LAN966x PCI device
  reset: uniphier-glue: Use devm_reset_control_bulk_get_shared_deasserted()
  reset: Add devres helpers to request pre-deasserted reset controls
  reset: replace boolean parameters with flags parameter
  reset: amlogic: Fix small whitespace issue
  reset: amlogic: add auxiliary reset driver support
  reset: amlogic: split the device core and platform probe
  reset: amlogic: move drivers to a dedicated directory
  reset: amlogic: add reset status support
  reset: amlogic: use reset number instead of register count
  reset: amlogic: add driver parameters
  reset: amlogic: make parameters unsigned
  reset: amlogic: use generic data matching function
  ...

Link: https://lore.kernel.org/r/20241105105229.3729474-1-p.zabel@pengutronix.de
Signed-off-by: Arnd Bergmann <arnd@arndb.de>