Age | Commit message | Author
2022-12-05 | iommu/vt-d: Switch to MSI parent domains | Thomas Gleixner
Remove the global PCI/MSI irqdomain implementation and provide the required MSI parent ops so the PCI/MSI code can detect the new parent and set up per device domains. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232326.151226317@linutronix.de
2022-12-05 | PCI/MSI: Remove unused pci_dev_has_special_msi_domain() | Thomas Gleixner
The check for special MSI domains like VMD, which prevents the interrupt remapping code from overwriting device::msi::domain, is no longer required and has been replaced by an x86 specific version which is aware of MSI parent domains. Remove it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232326.093093200@linutronix.de
2022-12-05 | x86/apic/vector: Provide MSI parent domain | Thomas Gleixner
Enable MSI parent domain support in the x86 vector domain and fix up the checks in the iommu implementations to check whether device::msi::domain is the default MSI parent domain. That keeps the existing logic to protect e.g. devices behind VMD working. The interrupt remap PCI/MSI code still works because the underlying vector domain still provides the same functionality. None of the other x86 PCI/MSI implementations, e.g. XEN and HyperV, are affected either. They still work the same way, both at the low level and in the PCI/MSI implementations they provide. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232326.034672592@linutronix.de
2022-12-05 | PCI/MSI: Add support for per device MSI[X] domains | Thomas Gleixner
Provide a template and the necessary callbacks to create PCI/MSI and PCI/MSI-X domains. The domains are created when MSI or MSI-X is enabled. A domain's lifetime is normally the device lifetime; if e.g. MSI-X was tried first and failed, the MSI-X domain is removed and an MSI domain is created instead, as both are mutually exclusive and reside in the default domain ID slot of the per device domain pointer array. Also expand pci_msi_domain_supports() to handle feature checks correctly even in the case that the per device domain was not yet created, by checking the features supported by the MSI parent. Add the necessary setup calls into the MSI and MSI-X enable code path. These setup calls are backwards compatible. They return success when there is no parent domain found, which means the existing global domains or the legacy allocation path keep just working. Co-developed-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.975388241@linutronix.de
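To illustrate the pci_msi_domain_supports() behaviour described above, here is a minimal standalone model of "answer the feature question from the per device domain if it exists, otherwise from the MSI parent". The struct, flag values and function names are illustrative only, not the kernel API:

    #include <stdbool.h>
    #include <stdio.h>

    #define FEATURE_PCI_MSIX       (1u << 0)   /* illustrative flag values */
    #define FEATURE_MULTI_PCI_MSI  (1u << 1)

    /* Simplified stand-in for an MSI irq domain and its supported features. */
    struct msi_dom {
        unsigned int supported;
    };

    /* If the per device domain has not been created yet, fall back to the
     * features advertised by the MSI parent domain. */
    static bool domain_supports(const struct msi_dom *per_device,
                                const struct msi_dom *parent,
                                unsigned int feature_mask)
    {
        const struct msi_dom *d = per_device ? per_device : parent;

        return (d->supported & feature_mask) == feature_mask;
    }

    int main(void)
    {
        struct msi_dom parent = { .supported = FEATURE_PCI_MSIX };

        /* No per device domain yet: the parent's capabilities decide. */
        printf("MSI-X supported: %d\n",
               domain_supports(NULL, &parent, FEATURE_PCI_MSIX));
        return 0;
    }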
2022-12-05 | genirq/msi: Provide BUS_DEVICE_PCI_MSI[X] | Thomas Gleixner
Provide new bus tokens for the upcoming per device PCI/MSI and PCI/MSIX interrupt domains. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.917219885@linutronix.de
2022-12-05 | PCI/MSI: Split __pci_write_msi_msg() | Thomas Gleixner
The upcoming per device MSI domains will create different domains for MSI and MSI-X. Split the write message function into MSI and MSI-X helpers so they can be used by those new domain functions separately. Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.857982142@linutronix.de
2022-12-05 | genirq/msi: Add range checking to msi_insert_desc() | Thomas Gleixner
Per device domains provide the real domain size to the core code. This allows range checking on insertion of MSI descriptors and also paves the way for dynamic index allocations, which are required e.g. for IMS. This avoids external mechanisms like bitmaps on the device side and just utilizes the core internal MSI descriptor storage for it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.798556374@linutronix.de
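A standalone sketch (not the kernel implementation) of the kind of rejection a real domain size enables when a descriptor is inserted; the hwsize field name and the "0 means unknown/unlimited" convention follow the msi_domain_info change later in this series, everything else is illustrative:

    #include <errno.h>
    #include <stdio.h>

    /* Simplified stand-in for the per domain size information. */
    struct msi_domain_model {
        unsigned int hwsize;    /* 0: unknown/unlimited, else number of vectors */
    };

    static int insert_desc_checked(const struct msi_domain_model *dom,
                                   unsigned int index)
    {
        if (dom->hwsize && index >= dom->hwsize)
            return -ERANGE;     /* index outside the domain's hardware range */

        /* ... actual descriptor allocation and insertion would go here ... */
        return 0;
    }

    int main(void)
    {
        struct msi_domain_model msix = { .hwsize = 32 };

        printf("index 8:  %d\n", insert_desc_checked(&msix, 8));   /* 0 */
        printf("index 40: %d\n", insert_desc_checked(&msix, 40));  /* -ERANGE */
        return 0;
    }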
2022-12-05 | dt-bindings: soc: qcom: aoss: Add compatible for SM8550 | Abel Vesa
Document the compatible for SM8550. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116113128.2655441-1-abel.vesa@linaro.org
2022-12-05 | soc: qcom: llcc: Add configuration data for SM8550 | Abel Vesa
Add LLCC configuration data for SM8550 SoC. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116113005.2653284-4-abel.vesa@linaro.org
2022-12-05 | dt-bindings: arm: msm: Add LLCC compatible for SM8550 | Abel Vesa
Add LLCC compatible for SM8550 SoC. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116113005.2653284-3-abel.vesa@linaro.org
2022-12-05 | soc: qcom: llcc: Add v4.1 HW version support | Abel Vesa
The LLCC found in SM8550 supports more slice configuration knobs and the HW block version has been bumped to 4.1. Add support for the new version and make sure the new config values are programmed on probe. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116113005.2653284-2-abel.vesa@linaro.org
2022-12-05 | soc: qcom: socinfo: Add SM8550 ID | Abel Vesa
Add the ID for the Qualcomm SM8550 SoC. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116112438.2643607-1-abel.vesa@linaro.org
2022-12-05 | soc: qcom: rpmh-rsc: Avoid unnecessary checks on irq-done response | Abel Vesa
The RSC interrupt is issued only after the request is complete. For fire-n-forget requests, the irq-done interrupt is sent after issuing the RPMH request, and for response-required requests, the interrupt is triggered only after all the requests are complete. These unnecessary checks in the interrupt handler issue AHB reads from a critical path. Let's remove them and clean up error handling in the rpmh_request data structures. Co-developed-by: Lina Iyer <ilina@codeaurora.org> Signed-off-by: Lina Iyer <ilina@codeaurora.org> Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116112246.2640648-2-abel.vesa@linaro.org
2022-12-05 | soc: qcom: rpmh-rsc: Add support for RSC v3 register offsets | Abel Vesa
The SM8550 RSC has a new set of register offsets due to its version bump. So read the version from HW and use the proper register offsets based on that. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116112246.2640648-1-abel.vesa@linaro.org
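The register-offset selection described above can be sketched as a version-keyed table lookup; the offsets, field names and the version threshold below are made up for illustration, not taken from the driver:

    #include <stdio.h>

    struct rsc_reg_offsets {
        unsigned int drv_irq_enable;
        unsigned int drv_irq_status;
        unsigned int drv_irq_clear;
    };

    /* Illustrative offset tables for the pre-v3 and v3 register layouts. */
    static const struct rsc_reg_offsets rsc_offsets_v2 = { 0x00, 0x04, 0x08 };
    static const struct rsc_reg_offsets rsc_offsets_v3 = { 0x20, 0x24, 0x28 };

    static const struct rsc_reg_offsets *rsc_get_offsets(unsigned int major_ver)
    {
        return major_ver >= 3 ? &rsc_offsets_v3 : &rsc_offsets_v2;
    }

    int main(void)
    {
        /* In the driver the major version would be read from a HW ID register. */
        const struct rsc_reg_offsets *regs = rsc_get_offsets(3);

        printf("irq status offset: 0x%x\n", regs->drv_irq_status);
        return 0;
    }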
2022-12-05 | soc: qcom: rpmhpd: Add SM8550 power domains | Abel Vesa
Add the power domains exposed by RPMH in the Qualcomm SM8550 platform. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116111745.2633074-3-abel.vesa@linaro.org
2022-12-05 | dt-bindings: power: rpmpd: Add SM8550 to rpmpd binding | Abel Vesa
Add compatible and constants for the power domains exposed by the RPMH in the Qualcomm SM8550 platform. Signed-off-by: Abel Vesa <abel.vesa@linaro.org> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221116111745.2633074-2-abel.vesa@linaro.org
2022-12-05 | dt-bindings: arm: qcom: Add Xperia 5 IV (PDX224) | Konrad Dybcio
Add a compatible for Sony Xperia 5 IV. Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221114095654.34561-1-konrad.dybcio@linaro.org
2022-12-05 | dt-bindings: arm: qcom: Document msm8956 and msm8976 SoC and devices | Marijn Suijten
Note that msm8976 is omitted as a compatible, since there are currently no boards/devices using it. Signed-off-by: Marijn Suijten <marijn.suijten@somainline.org> Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221111120156.48040-9-angelogioacchino.delregno@collabora.com
2022-12-05 | blk-throttle: Use more suitable time_after check for update of slice_start | Kemeng Shi
There is no need to update tg->slice_start[rw] to start when they are already equal. So remove the "eq" part of the check before updating slice_start. Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-10-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | blk-throttle: remove repeat check of elapsed time | Kemeng Shi
There is no need to check the elapsed time since the last upgrade for each node in the hierarchy. Move this check before the traversal, as throtl_can_upgrade does, to remove the repeated check. Reported-by: kernel test robot <lkp@intel.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-9-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | blk-throttle: remove incorrect comment for tg_last_low_overflow_time | Kemeng Shi
Function tg_last_low_overflow_time is called with intermediate nodes as follows: throtl_hierarchy_can_downgrade -> throtl_tg_can_downgrade -> tg_last_low_overflow_time; throtl_hierarchy_can_upgrade -> throtl_tg_can_upgrade -> tg_last_low_overflow_time. throtl_hierarchy_can_downgrade/throtl_hierarchy_can_upgrade traverse from the leaf node to the sub-root node and pass each traversed intermediate node to tg_last_low_overflow_time. No such limit can be found in the context or the implementation of tg_last_low_overflow_time, so remove this limit from the comment. Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-8-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | blk-throttle: fix typo in comment of throtl_adjusted_limit | Kemeng Shi
lapsed time -> elapsed time Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-7-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | blk-throttle: simplify low limit reached check in throtl_tg_can_upgrade | Kemeng Shi
Commit c79892c557616 ("blk-throttle: add upgrade logic for LIMIT_LOW state") added upgrade logic for the low limit and mentioned that 1. "To determine if a cgroup exceeds its limitation, we check if the cgroup has pending request. Since cgroup is throttled according to the limit, pending request means the cgroup reaches the limit." 2. "If a cgroup has limit set for both read and write, we consider the combination of them for upgrade. The reason is read IO and write IO can interfere with each other. If we do the upgrade based in one direction IO, the other direction IO could be severely harmed." Besides, we also consider that a cgroup reaches its low limit if the low limit is 0, see the comment in throtl_tg_can_upgrade. Collecting the information above, the design of the upgrade check is as follows: 1. The low limit is reached if the limit is zero or io is already queued. 2. A cgroup passes the upgrade check if the low limits of READ and WRITE are both reached. Simplify the check code described above to remove the repeated check and improve readability. There is no functional change. A detailed equivalence proof follows. All replaced conditions that return true are: condition 1 (!read_limit && !write_limit) condition 2 read_limit && sq->nr_queued[READ] && (!write_limit || sq->nr_queued[WRITE]) condition 3 write_limit && sq->nr_queued[WRITE] && (!read_limit || sq->nr_queued[READ]) Transforming condition 2: (read_limit && sq->nr_queued[READ]) && (!write_limit || sq->nr_queued[WRITE]) is equivalent to (read_limit && sq->nr_queued[READ]) && (!write_limit || (write_limit && sq->nr_queued[WRITE])) which is equivalent to condition 2.1 (read_limit && sq->nr_queued[READ] && !write_limit) || condition 2.2 (read_limit && sq->nr_queued[READ] && (write_limit && sq->nr_queued[WRITE])) Transforming condition 3: write_limit && sq->nr_queued[WRITE] && (!read_limit || sq->nr_queued[READ]) is equivalent to (write_limit && sq->nr_queued[WRITE]) && (!read_limit || (read_limit && sq->nr_queued[READ])) which is equivalent to condition 3.1 ((write_limit && sq->nr_queued[WRITE]) && !read_limit) || condition 3.2 ((write_limit && sq->nr_queued[WRITE]) && (read_limit && sq->nr_queued[READ])) Condition 3.2 is the same as condition 2.2, so all conditions under which we return true are: (!read_limit && !write_limit) (1) (!read_limit && (write_limit && sq->nr_queued[WRITE])) (3.1) ((read_limit && sq->nr_queued[READ]) && !write_limit) (2.1) ((write_limit && sq->nr_queued[WRITE]) && (read_limit && sq->nr_queued[READ])) (2.2) Since a condition of the form "(a1 || a2) && (b1 || b2)" expands to: a1 && b1 a1 && b2 a2 && b1 a2 && b2 and considering that: a1 = !read_limit a2 = read_limit && sq->nr_queued[READ] b1 = !write_limit b2 = write_limit && sq->nr_queued[WRITE] we can pack the replaced conditions into (!read_limit || (read_limit && sq->nr_queued[READ])) && (!write_limit || (write_limit && sq->nr_queued[WRITE])) which is equivalent to (!read_limit || sq->nr_queued[READ]) && (!write_limit || sq->nr_queued[WRITE]) Reported-by: kernel test robot <lkp@intel.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-6-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
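The equivalence argued in the entry above can also be verified mechanically. A minimal standalone program (plain bools instead of the real throtl_grp/throtl_service_queue state) that compares the original three-branch form with the simplified form over all sixteen input combinations:

    #include <stdbool.h>
    #include <stdio.h>

    /* Original three-branch form of the "low limit reached" check. */
    static bool upgrade_old(bool read_limit, bool write_limit,
                            bool queued_read, bool queued_write)
    {
        if (!read_limit && !write_limit)
            return true;
        if (read_limit && queued_read && (!write_limit || queued_write))
            return true;
        if (write_limit && queued_write && (!read_limit || queued_read))
            return true;
        return false;
    }

    /* Simplified form after the patch. */
    static bool upgrade_new(bool read_limit, bool write_limit,
                            bool queued_read, bool queued_write)
    {
        return (!read_limit || queued_read) && (!write_limit || queued_write);
    }

    int main(void)
    {
        for (int i = 0; i < 16; i++) {
            bool rl = i & 1, wl = i & 2, qr = i & 4, qw = i & 8;

            if (upgrade_old(rl, wl, qr, qw) != upgrade_new(rl, wl, qr, qw)) {
                printf("mismatch at %d\n", i);
                return 1;
            }
        }
        printf("all 16 combinations match\n");
        return 0;
    }

Compiled with e.g. "cc -std=c99 equiv.c && ./a.out", this prints "all 16 combinations match".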
2022-12-05 | soc: qcom: socinfo: Add MSM8956/76 SoC IDs to the soc_id table | AngeloGioacchino Del Regno
Add SoC ID table entries for MSM8956 and MSM8976 chips. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221111120156.48040-8-angelogioacchino.delregno@collabora.com
2022-12-05 | dt-bindings: arm: qcom,ids: Add SoC IDs for MSM8956 and MSM8976 | AngeloGioacchino Del Regno
Document the identifier of MSM8956/76. Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20221111120156.48040-7-angelogioacchino.delregno@collabora.com
2022-12-05 | blk-throttle: correct calculation of wait time in tg_may_dispatch | Kemeng Shi
In C, when executing "if (expression1 && expression2)" and expression1 returns false, expression2 is not evaluated. For "tg_within_bps_limit(tg, bio, bps_limit, &bps_wait) && tg_within_iops_limit(tg, bio, iops_limit, &iops_wait)", if the bps limit is exceeded, tg_within_bps_limit returns false and tg_within_iops_limit is not called. So even when both bps and iops are limited, iops_wait is never calculated and is always zero, and the iops wait time is ignored. Fix this by always calling tg_within_bps_limit and tg_within_iops_limit to get the wait time for both bps and iops. Observe that: 1. The wait time in tg_within_iops_limit/tg_within_bps_limit always needs to be stored, as the wait argument is always passed. 2. The wait time is stored as zero if the bio fits within the iops/bps limit, otherwise a non-zero wait time is stored. Simplify tg_within_iops_limit/tg_within_bps_limit by removing the wait argument and returning the wait time directly. The caller tg_may_dispatch then checks whether the wait time is zero to find out whether iops/bps is limited. Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-5-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
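A minimal sketch of the fixed dispatch decision: both wait times are always computed, zero means "not limited", and the caller waits for the larger of the two. The helper takes the two wait values directly instead of calling tg_within_bps_limit()/tg_within_iops_limit(), so names and shape are illustrative only:

    #include <stdbool.h>
    #include <stdio.h>

    /* max of two jiffies-style wait values */
    static unsigned long max_wait(unsigned long a, unsigned long b)
    {
        return a > b ? a : b;
    }

    /*
     * The buggy version used "within_bps && within_iops", so a false bps
     * check short-circuited the iops check and its wait time was lost.
     * Here both wait times are already computed and the larger one wins.
     */
    static bool may_dispatch(unsigned long bps_wait, unsigned long iops_wait,
                             unsigned long *wait)
    {
        if (!bps_wait && !iops_wait)
            return true;            /* neither bps nor iops is limiting */

        if (wait)
            *wait = max_wait(bps_wait, iops_wait);
        return false;
    }

    int main(void)
    {
        unsigned long wait = 0;

        /* bps not limited, iops says wait 8 jiffies: must report 8, not 0 */
        if (!may_dispatch(0, 8, &wait))
            printf("wait %lu jiffies\n", wait);
        return 0;
    }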
2022-12-05 | blk-throttle: ignore cgroup without io queued in blk_throtl_cancel_bios | Kemeng Shi
Ignore cgroups without io queued in blk_throtl_cancel_bios for two reasons: 1. Save cpu cycles spent trying to dispatch a cgroup which has no io queued. 2. Avoid the inconsistent state in which a cgroup is inserted into the service queue without THROTL_TG_PENDING set, as tg_update_disptime will unconditionally re-insert the cgroup into the service queue. If we are on the default hierarchy, IO dispatched from a child in tg_dispatch_one_bio would trigger inserting the cgroup into the service queue without erasing it first and ruin the tree. Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-4-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | blk-throttle: Fix that bps of child could exceed bps limited in parent | Kemeng Shi
Consider the following situation (on the default hierarchy): HDD | root (bps limit: 4k) | child (bps limit: 8k) | fio bs=8k. The rate of fio is supposed to be 4k, but the result is 8k. The reason is as follows: the size of a single IO from fio is larger than the bytes allowed in one throtl_slice in the child, so IOs are always queued in the child group first. When the queued IOs in the child are dispatched to the parent group, BIO_BPS_THROTTLED is set and these IOs are no longer limited by tg_within_bps_limit. Fix this by only setting BIO_BPS_THROTTLED when the bio has traversed the entire tree. This patch has no influence outside the default hierarchy, as there each group is a single root group without a parent. Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-3-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | blk-throttle: correct stale comment in throtl_pd_init | Kemeng Shi
On the default hierarchy (cgroup2), the throttle interface files don't exist in the root cgroup, so the ability to limit the whole system by configuring the root group no longer exists. In general, cgroup doesn't want to be in the business of restricting resources at the system level, so correct the stale comment from "we can limit the whole system" to "we can only limit the subtree". Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Kemeng Shi <shikemeng@huawei.com> Link: https://lore.kernel.org/r/20221205115709.251489-2-shikemeng@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-12-05 | tools/testing/cxl: Require cache invalidation bypass | Dan Williams
The typical environment where cxl_test is run, QEMU, does not support cpu_cache_invalidate_memregion(). Add the 'test' bypass symbols to the configuration check. Reported-by: Alison Schofield <alison.schofield@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/167026948179.3527561.4535373655515827457.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | cxl/acpi: Fail decoder add if CXIMS for HBIG is missing | Alison Schofield
The BIOS provided CXIMS (CXL XOR Interleave Math Structure) is required for calculating a target's position in an interleave list during region creation. The CXL driver expects to discover a CXIMS that matches the HBIG (Host Bridge Interleave Granularity) and stores the xormaps found in that CXIMS for retrieval during region creation. If there is no CXIMS for an HBIG, no maps are stored. That leads to a NULL pointer dereference at xormap retrieval during region creation. Add a check during ACPI probe for the case of no matching CXIMS. Emit an error message and fail to add the decoder. Fixes: f9db85bfec0d ("cxl/acpi: Support CXL XOR Interleave Math (CXIMS)") Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Alison Schofield <alison.schofield@intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20221205002951.1788783-1-alison.schofield@intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | cxl/region: Fix spelling mistake "memergion" -> "memregion" | Colin Ian King
There is a spelling mistake in a dev_warn message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Link: https://lore.kernel.org/r/20221205091819.1943564-1-colin.i.king@gmail.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | cxl/regs: Fix sparse warning | Dan Williams
The 0day robot belatedly points out that @addr is not properly tagged as an iomap pointer: "drivers/cxl/core/regs.c:332:14: sparse: sparse: incorrect type in assignment (different address spaces) @@ expected void *addr @@ got void [noderef] __iomem * @@" Fixes: 1168271ca054 ("cxl/acpi: Extract component registers of restricted hosts from RCRB") Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Robert Richter <rrichter@amd.com> Reviewed-by: Alison Schofield <alison.schofield@intel.com> Link: https://lore.kernel.org/r/167008768190.2516013.11918622906007677341.stgit@dwillia2-xfh.jf.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | Merge branch 'for-6.2/cxl-xor' into for-6.2/cxl | Dan Williams
Pick up support for "XOR" interleave math when parsing ACPI CFMWS window structures. Fix up conflicts with the RCH emulation already pending in cxl/next.
2022-12-05 | Merge branch 'for-6.2/cxl-aer' into for-6.2/cxl | Dan Williams
Pick up CXL AER handling and correctable error extensions. Resolve conflicts with cxl_pmem_wq reworks and RCH support.
2022-12-05 | Merge branch 'for-6.2/cxl-security' into for-6.2/cxl | Dan Williams
Pick CXL PMEM security commands for v6.2. Resolve conflicts with the removal of the cxl_pmem_wq.
2022-12-05 | proc: proc_skip_spaces() shouldn't think it is working on C strings | Linus Torvalds
proc_skip_spaces() seems to think it is working on C strings, and ends up being just a wrapper around skip_spaces() with a really odd calling convention. Instead of basing it on skip_spaces(), it should have looked more like proc_skip_char(), which really is the exact same function (except it skips a particular character, rather than whitespace). So use that as inspiration, odd coding and all. Now the calling convention actually makes sense and works for the intended purpose. Reported-and-tested-by: Kyle Zeng <zengyhkyle@gmail.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
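The calling convention the patch moves to, operating on a (pointer, remaining length) pair rather than a NUL-terminated C string, can be sketched in plain userspace C; this is just an illustration of the idea, not the kernel's proc_skip_spaces():

    #include <ctype.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Skip leading whitespace in a buffer of known length; like
     * proc_skip_char(), this never assumes a terminating NUL. */
    static void skip_spaces_len(char **buf, size_t *size)
    {
        while (*size && isspace((unsigned char)**buf)) {
            (*buf)++;
            (*size)--;
        }
    }

    int main(void)
    {
        char data[] = "   42\n";    /* proc buffers are not NUL-terminated */
        char *p = data;
        size_t left = sizeof(data) - 1;

        skip_spaces_len(&p, &left);
        printf("remaining %zu bytes, next char '%c'\n", left, *p);
        return 0;
    }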
2022-12-05 | proc: avoid integer type confusion in get_proc_long | Linus Torvalds
proc_get_long() is passed a size_t, but then assigns it to an 'int' variable for the length. Let's not do that, even if our IO paths are limited to MAX_RW_COUNT (exactly because of these kinds of type errors). So do the proper test in the right type. Reported-by: Kyle Zeng <zengyhkyle@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
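A small standalone demonstration of the underlying pitfall: assigning a size_t length to an int silently changes its value once it exceeds INT_MAX, so any subsequent comparison done in int no longer tests what it appears to test. This only illustrates the type confusion, not the proc_get_long() code:

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = (size_t)INT_MAX + 2;   /* larger than any int */
        int truncated = (int)len;           /* the kind of assignment the patch removes */

        /* The conversion is implementation-defined; on common toolchains it
         * wraps to a negative value, so a "len > limit" style test done in
         * int would silently pass.  Keeping the comparison in size_t, as the
         * fix does, avoids the problem entirely. */
        printf("size_t len = %zu, as int = %d\n", len, truncated);
        return 0;
    }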
2022-12-05 | NFS4.x/pnfs: Fix up logging of layout stateids | Trond Myklebust
If the layout is invalid, then just log a '0' value. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-12-05 | drm/vmwgfx: Don't use screen objects when SEV is active | Zack Rusin
When SEV is enabled, GMRs and MOBs are explicitly disabled because the encrypted system memory can not be used by the hypervisor. The driver was disabling GMRs, but the presentation code, which depends on GMRs, wasn't honoring it, which led to a black screen on hosts with SEV enabled. Make sure screen objects presentation is not used when guest memory regions have been disabled to fix presentation on SEV enabled hosts. Fixes: 3b0d6458c705 ("drm/vmwgfx: Refuse DMA operation when SEV encryption is active") Cc: <stable@vger.kernel.org> # v5.7+ Signed-off-by: Zack Rusin <zackr@vmware.com> Reported-by: Nicholas Hunt <nhunt@vmware.com> Reviewed-by: Martin Krastev <krastevm@vmware.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221201175341.491884-1-zack@kde.org
2022-12-05 | ipc/sem: Fix dangling sem_array access in semtimedop race | Jann Horn
When __do_semtimedop() goes to sleep because it has to wait for a semaphore value becoming zero or becoming bigger than some threshold, it links the on-stack sem_queue to the sem_array, then goes to sleep without holding a reference on the sem_array. When __do_semtimedop() comes back out of sleep, one of two things must happen: a) We prove that the on-stack sem_queue has been disconnected from the (possibly freed) sem_array, making it safe to return from the stack frame that the sem_queue exists in. b) We stabilize our reference to the sem_array, lock the sem_array, and detach the sem_queue from the sem_array ourselves. sem_array has RCU lifetime, so for case (b), the reference can be stabilized inside an RCU read-side critical section by locklessly checking whether the sem_queue is still connected to the sem_array. However, the current code does the lockless check on sem_queue before starting an RCU read-side critical section, so the result of the lockless check immediately becomes useless. Fix it by doing rcu_read_lock() before the lockless check. Now RCU ensures that if we observe the object being on our queue, the object can't be freed until rcu_read_unlock(). This bug is only hittable on kernel builds with full preemption support (either CONFIG_PREEMPT or PREEMPT_DYNAMIC with preempt=full). Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop") Cc: stable@vger.kernel.org Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
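The ordering rule behind the fix, enter the read-side protection before performing the lockless check you intend to rely on, can be illustrated with a userspace analogue. A pthread reader lock stands in for rcu_read_lock() purely to show the ordering (real RCU has very different semantics and cost), and the sem_queue/sem_array state is reduced to a single flag:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_rwlock_t lifetime_lock = PTHREAD_RWLOCK_INITIALIZER;
    static bool queue_linked = true;    /* models "sem_queue still on the list" */

    static void after_sleep_fixed(void)
    {
        /* Fixed order: enter the read-side section first ... */
        pthread_rwlock_rdlock(&lifetime_lock);

        /* ... then do the check; the object now cannot be torn down until
         * the read-side section ends, so the result stays trustworthy. */
        if (!queue_linked) {
            pthread_rwlock_unlock(&lifetime_lock);
            return;     /* case (a): already disconnected, safe to return */
        }

        /* case (b): stabilize the reference, lock, detach ourselves, ... */
        pthread_rwlock_unlock(&lifetime_lock);
    }

    int main(void)
    {
        after_sleep_fixed();
        puts("ok");
        return 0;
    }

The buggy pattern performed the check before taking the protection, so the answer could be stale (and the object freed) by the time it was acted on.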
2022-12-05 | cxl/acpi: Set ACPI's CXL _OSC to indicate RCD mode support | Terry Bowman
ACPI uses the CXL _OSC support method to communicate the available CXL functionality to FW. The CXL _OSC support method includes a field to indicate the OS is capable of RCD mode. FW can potentially change its operation depending on the _OSC support method reported by the OS. The ACPI driver currently only sets the ACPI _OSC support method to indicate CXL VH mode. Change the capability reported to also include CXL RCD mode. [1] CXL3.0 Table 9-26 'Interpretation of CXL _OSC Support Field' Signed-off-by: Terry Bowman <terry.bowman@amd.com> [rrichter@amd.com: Reworded patch description.] Signed-off-by: Robert Richter <rrichter@amd.com> Link: http://lore.kernel.org/r/Y4cRV/Sj0epVW7bE@rric.localdomain Link: https://lore.kernel.org/r/166993046717.1882361.10587956243041624761.stgit@dwillia2-xfh.jf.intel.com Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | tools/testing/cxl: Add an RCH topology | Dan Williams
In an RCH topology a CXL host-bridge has a Root Complex Integrated Endpoint that represents the memory expander. Unlike a VH topology there is no CXL/PCIE Root Port that hosts the endpoint. The CXL subsystem maps this as the CXL root object (ACPI0017 on ACPI based systems) targeting the host-bridge as a dport, per usual, but then that dport directly hosts the endpoint port. Mock up that configuration with a 4th host-bridge that has a 'cxl_rcd' device instance as its immediate child. Reviewed-by: Alison Schofield <alison.schofield@intel.com> Link: https://lore.kernel.org/r/166993046170.1882361.12460762475782283638.stgit@dwillia2-xfh.jf.intel.com Reviewed-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | cxl/port: Add RCD endpoint port enumeration | Dan Williams
Unlike a CXL memory expander in a VH topology, which has at least one intervening 'struct cxl_port' instance between itself and the CXL root device, an RCD attaches one level higher. For example, the VH case (original message contains an ASCII diagram) is root0 (ACPI0017) -> port1 (the ACPI0016 host bridge, dport0) -> Root Ports RP0/RP1/RP2 (dport0/dport1/dport2) -> endpoint0 (port2) below RP1, whereas the RCH case is root0 (ACPI0017) -> dport0 (ACPI0016) -> endpoint0 (port1). So arrange for the endpoint port in the RCH/RCD case to appear directly connected to the host-bridge in its singular role as a dport. Compare that to the VH case where the host-bridge serves a dual role as a 'cxl_dport' for the CXL root device *and* a 'cxl_port' upstream port for the Root Ports in the Root Complex that are modeled as 'cxl_dport' instances in the CXL topology. Another deviation from the VH case is that RCDs may need to look up their component registers from the Root Complex Register Block (RCRB). That platform firmware specified RCRB area is cached by the cxl_acpi driver and conveyed via the host-bridge dport to the cxl_mem driver to perform the cxl_rcrb_to_component() lookup for the endpoint port (See 9.11.8 CXL Devices Attached to an RCH for the lookup of the upstream port component registers). Tested-by: Robert Richter <rrichter@amd.com> Link: https://lore.kernel.org/r/166993045621.1882361.1730100141527044744.stgit@dwillia2-xfh.jf.intel.com Reviewed-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | cxl/mem: Move devm_cxl_add_endpoint() from cxl_core to cxl_mem | Dan Williams
tl;dr: Clean up an unnecessary export and enable cxl_test. An RCD (Restricted CXL Device), in contrast to a typical CXL device in a VH topology, obtains its component registers from the bottom half of the associated CXL host bridge RCRB (Root Complex Register Block). In turn this means that cxl_rcrb_to_component() needs to be called from devm_cxl_add_endpoint(). Presently devm_cxl_add_endpoint() is part of the CXL core, but the only user is the CXL mem module. Move it from cxl_core to cxl_mem to not only get rid of an unnecessary export, but to also enable its call out to cxl_rcrb_to_component(), in a subsequent patch, to be mocked by cxl_test. Recall that cxl_test can only mock exported symbols, and since cxl_rcrb_to_component() is itself inside the core, all callers must be outside of cxl_core to allow cxl_test to mock it. Reviewed-by: Robert Richter <rrichter@amd.com> Link: https://lore.kernel.org/r/166993045072.1882361.13944923741276843683.stgit@dwillia2-xfh.jf.intel.com Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2022-12-05 | genirq/msi: Provide msi_match_device_domain() | Thomas Gleixner
Provide an interface to match a per device domain bus token. This allows querying which type of domain is installed for a particular domain id. It will be used by PCI to avoid frequent create/remove cycles for the MSI and MSI-X domains. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.738047902@linutronix.de
2022-12-05 | genirq/msi: Provide msi_create/free_device_irq_domain() | Thomas Gleixner
Now that all prerequisites are in place, provide the actual interfaces for creating and removing per device interrupt domains. MSI device interrupt domains are created from the provided msi_domain_template, which is duplicated so that it can be modified for the particular device. The name of the domain and the name of the interrupt chip are composed as "$(PREFIX)$(CHIPNAME)-$(DEVNAME)" $PREFIX: The optional prefix provided by the underlying MSI parent domain via msi_parent_ops::prefix. $CHIPNAME: The name of the irq_chip in the template $DEVNAME: The name of the device The domain is further initialized through an MSI parent domain callback which fills in the required functionality for the parent domain or domains further down the hierarchy. This initialization can fail, e.g. when the requested feature or MSI domain type cannot be supported. The domain pointer is stored in the pointer array inside of msi_device_data which is attached to the device. The domain can be removed via the API or left for disposal via devres when the device is torn down. The API removal is useful e.g. for PCI to have separate domains for MSI and MSI-X, which are mutually exclusive and always occupy the default domain id slot. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.678838546@linutronix.de
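A small sketch of the name composition described above; the real code builds the string from msi_parent_ops::prefix, the template's irq_chip name and the device name, so the helper below, the "IR-" prefix and the device name are purely illustrative:

    #include <stddef.h>
    #include <stdio.h>

    /* Compose "$(PREFIX)$(CHIPNAME)-$(DEVNAME)"; an absent prefix is empty. */
    static void compose_domain_name(char *buf, size_t len, const char *prefix,
                                    const char *chip_name, const char *dev_name)
    {
        snprintf(buf, len, "%s%s-%s", prefix ? prefix : "", chip_name, dev_name);
    }

    int main(void)
    {
        char name[64];

        compose_domain_name(name, sizeof(name), "IR-", "PCI-MSIX", "0000:00:1c.0");
        printf("%s\n", name);   /* e.g. "IR-PCI-MSIX-0000:00:1c.0" */
        return 0;
    }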
2022-12-05 | genirq/irqdomain: Add irq_domain::dev for per device MSI domains | Thomas Gleixner
Per device domains require the device pointer of the device which instantiated the domain for some purposes. Add the pointer to struct irq_domain. It will be used in the next step which provides the infrastructure to create per device MSI domains. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.618807601@linutronix.de
2022-12-05 | genirq/msi: Split msi_create_irq_domain() | Thomas Gleixner
Split the functionality of msi_create_irq_domain() so it can be reused for creating per device irq domains. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.559086358@linutronix.de
2022-12-05 | genirq/msi: Add size info to struct msi_domain_info | Thomas Gleixner
To allow proper range checking, especially for dynamic allocations, add a size field to struct msi_domain_info. If the field is 0 then the size is unknown or unlimited (up to MSI_MAX_INDEX) to provide backwards compatibility. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124232325.501144862@linutronix.de