path: root/drivers
Age  Commit message  Author
2023-10-28  drbd: Convert to use bdev_open_by_path()  (Jan Kara)
Convert drbd to use bdev_open_by_path(). CC: drbd-dev@lists.linbit.com Acked-by: Christoph Hellwig <hch@lst.de> Acked-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230927093442.25915-4-jack@suse.cz Signed-off-by: Christian Brauner <brauner@kernel.org>
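For illustration, a minimal sketch of the conversion pattern this series applies, assuming the bdev_handle based API of this kernel window; the function name, open flags and drbd-specific context are placeholders, not taken from the actual patch.

static int example_open_backing_dev(const char *path, void *holder)
{
        struct bdev_handle *handle;

        /* replaces the old blkdev_get_by_path()/blkdev_put() pairing */
        handle = bdev_open_by_path(path, BLK_OPEN_READ | BLK_OPEN_WRITE,
                                   holder, NULL);
        if (IS_ERR(handle))
                return PTR_ERR(handle);

        /* ... use handle->bdev; keep the handle for a later bdev_release() ... */
        bdev_release(handle);
        return 0;
}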
2023-10-28  net: fill in MODULE_DESCRIPTION()s under drivers/net/  (Jakub Kicinski)
W=1 builds now warn if module is built without a MODULE_DESCRIPTION(). Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: Taehee Yoo <ap420073@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
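For illustration, the shape of the change each of these patches makes; the description string here is a placeholder rather than text from any specific driver.

#include <linux/module.h>

MODULE_DESCRIPTION("Example Ethernet driver");  /* silences the new W=1 warning */
MODULE_LICENSE("GPL");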
2023-10-28  net: fill in MODULE_DESCRIPTION()s in kuba@'s modules  (Jakub Kicinski)
W=1 builds now warn if module is built without a MODULE_DESCRIPTION(). Fill it in for the modules I maintain. Acked-by: Kalle Valo <kvalo@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-10-28  usb: gadget: uvc: Add missing initialization of ssp config descriptor  (Shuzhen Wang)
In case the uvc gadget is super speed plus, the corresponding config descriptor wasn't initialized. As a result, the host will not recognize the devices when using super speed plus connection. This patch initializes them to super speed descriptors. Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Shuzhen Wang <shuzhenwang@google.com> Link: https://lore.kernel.org/r/20231027183440.1994315-1-shuzhenwang@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-28  usb: storage: set 1.50 as the lower bcdDevice for older "Super Top" compatibility  (LihaSika)
Change lower bcdDevice value for "Super Top USB 2.0 SATA BRIDGE" to match 1.50. I have such an older device with bcdDevice=1.50 and it will not work otherwise. Cc: stable@vger.kernel.org Signed-off-by: Liha Sikanen <lihasika@gmail.com> Link: https://lore.kernel.org/r/ccf7d12a-8362-4916-b3e0-f4150f54affd@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-10-27  acpi: Move common tables helper functions to common lib  (Dave Jiang)
Some of the routines in ACPI driver/acpi/tables.c can be shared with parsing CDAT. CDAT is a device-provided data structure that is formatted similar to a platform provided ACPI table. CDAT is used by CXL and can exist on platforms that do not use ACPI. Split out the common routine from ACPI to accommodate platforms that do not support ACPI and move that to /lib. The common routines can be built outside of ACPI if FIRMWARE_TABLES is selected. Link: https://lore.kernel.org/linux-cxl/CAJZ5v0jipbtTNnsA0-o5ozOk8ZgWnOg34m34a9pPenTyRLj=6A@mail.gmail.com/ Suggested-by: "Rafael J. Wysocki" <rafael@kernel.org> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Link: https://lore.kernel.org/r/169713683430.2205276.17899451119920103445.stgit@djiang5-mobl3 Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl: Add support for reading CXL switch CDAT table  (Dave Jiang)
Add read_cdat_data() call in cxl_switch_port_probe() to allow reading of CDAT data for CXL switches. read_cdat_data() needs to be adjusted for the retrieving of the PCIe device depending on if the passed in port is endpoint or switch. Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/169713682855.2205276.6418370379144967443.stgit@djiang5-mobl3 Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl: Add checksum verification to CDAT from CXL  (Dave Jiang)
A CDAT table is available from a CXL device. The table is read by the driver and cached in software. With the CXL subsystem needing to parse the CDAT table, the checksum should be verified. Add checksum verification after the CDAT table is read from device. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/169713682277.2205276.2687265961314933628.stgit@djiang5-mobl3 Signed-off-by: Dan Williams <dan.j.williams@intel.com>
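A hedged sketch of the ACPI-style checksum rule referred to here: the byte sum of the entire table, checksum field included, must be zero. The helper name and its integration into the driver are illustrative.

static bool cdat_checksum_ok(const void *table, size_t length)
{
        const u8 *p = table;
        u8 sum = 0;
        size_t i;

        for (i = 0; i < length; i++)
                sum += p[i];            /* wraps modulo 256 */

        return sum == 0;                /* a valid table sums to zero */
}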
2023-10-27  cxl: Export QTG ids from CFMWS to sysfs as qos_class attribute  (Dave Jiang)
Export the QoS Throttling Group ID from the CXL Fixed Memory Window Structure (CFMWS) under the root decoder sysfs attributes as qos_class. CXL rev3.0 9.17.1.3 CXL Fixed Memory Window Structure (CFMWS) cxl cli will use this id to match with the _DSM retrieved id for a hot-plugged CXL memory device DPA memory range to make sure that the DPA range is under the right CFMWS window. Reviewed-by: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/169713681699.2205276.14475306324720093079.stgit@djiang5-mobl3 Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl: Add decoders_committed sysfs attribute to cxl_port  (Dave Jiang)
This attribute allows cxl-cli to determine whether there are decoders committed to a memdev. This is only a snapshot of the state, and doesn't offer any protection or serialization against a concurrent disable-region operation. Reviewed-by: Jim Harris <jim.harris@samsung.com> Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/169747907439.272156.10261062080830155662.stgit@djiang5-mobl3 Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl: Add cxl_decoders_committed() helper  (Dave Jiang)
Add a helper to retrieve the number of decoders committed for the port. Replace all the open coding of the calculation with the helper. Link: https://lore.kernel.org/linux-cxl/651c98472dfed_ae7e729495@dwillia2-xfh.jf.intel.com.notmuch/ Suggested-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Jim Harris <jim.harris@samsung.com> Reviewed-by: Alison Schofield <alison.schofield@intel.com> Link: https://lore.kernel.org/r/169747906849.272156.1729290904857372335.stgit@djiang5-mobl3 Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/core/regs: Rework cxl_map_pmu_regs() to use map->dev for devm  (Robert Richter)
struct cxl_register_map carries a @dev parameter for devm operations. Simplify the function interface to use that instead of a separate @dev argument. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-21-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/core/regs: Rename phys_addr in cxl_map_component_regs()  (Robert Richter)
Trivial change that renames variable phys_addr in cxl_map_component_regs() to shorten its length to keep the 80 char size limit for the line and also for consistency between the different paths. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-20-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  PCI/AER: Unmask RCEC internal errors to enable RCH downstream port error handling  (Robert Richter)
AER corrected and uncorrectable internal errors (CIE/UIE) are masked in their corresponding mask registers per default once in power-up state. [1][2] Enable internal errors for RCECs to receive CXL downstream port errors of Restricted CXL Hosts (RCHs). [1] CXL 3.0 Spec, 12.2.1.1 - RCH Downstream Port Detected Errors [2] PCIe Base Spec r6.0, 7.8.4.3 Uncorrectable Error Mask Register, 7.8.4.6 Correctable Error Mask Register Co-developed-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-19-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  PCI/AER: Forward RCH downstream port-detected errors to the CXL.mem dev handler  (Robert Richter)
In Restricted CXL Device (RCD) mode a CXL device is exposed as an RCiEP, but CXL downstream and upstream ports are not enumerated and not visible in the PCIe hierarchy. [1] Protocol and link errors from these non-enumerated ports are signaled as internal AER errors, either Uncorrectable Internal Error (UIE) or Corrected Internal Errors (CIE) via an RCEC. Restricted CXL host (RCH) downstream port-detected errors have the Requester ID of the RCEC set in the RCEC's AER Error Source ID register. A CXL handler must then inspect the error status in various CXL registers residing in the dport's component register space (CXL RAS capability) or the dport's RCRB (PCIe AER extended capability). [2] Errors showing up in the RCEC's error handler must be handled and connected to the CXL subsystem. Implement this by forwarding the error to all CXL devices below the RCEC. Since the entire CXL device is controlled only using PCIe Configuration Space of device 0, function 0, only pass it there [3]. The error handling is limited to currently supported devices with the Memory Device class code set (CXL Type 3 Device, PCI_CLASS_MEMORY_CXL, 502h), handle downstream port errors in the device's cxl_pci driver. Support for other CXL Device Types (e.g. a CXL.cache Device) can be added later. To handle downstream port errors in addition to errors directed to the CXL endpoint device, a handler must also inspect the CXL RAS and PCIe AER capabilities of the CXL downstream port the device is connected to. Since CXL downstream port errors are signaled using internal errors, the handler requires those errors to be unmasked. This is subject of a follow-on patch. The reason for choosing this implementation is that the AER service driver claims the RCEC device, but does not allow it to register a custom specific handler to support CXL. Connecting the RCEC hard-wired with a CXL handler does not work, as the CXL subsystem might not be present all the time. The alternative to add an implementation to the portdrv to allow the registration of a custom RCEC error handler isn't worth doing it as CXL would be its only user. Instead, just check for an CXL RCEC and pass it down to the connected CXL device's error handler. With this approach the code can entirely be implemented in the PCIe AER driver and is independent of the CXL subsystem. The CXL driver only provides the handler. [1] CXL 3.0 spec: 9.11.8 CXL Devices Attached to an RCH [2] CXL 3.0 spec, 12.2.1.1 RCH Downstream Port-detected Errors [3] CXL 3.0 spec, 8.1.3 PCIe DVSEC for CXL Devices Co-developed-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Cc: Oliver O'Halloran <oohall@gmail.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-pci@vger.kernel.org Acked-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-18-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/pci: Disable root port interrupts in RCH mode  (Terry Bowman)
The RCH root port contains root command AER registers that should not be enabled.[1] Disable these to prevent root port interrupts. [1] CXL 3.0 - 12.2.1.1 RCH Downstream Port-detected Errors Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-17-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/pci: Add RCH downstream port error logging  (Terry Bowman)
RCH downstream port error logging is missing in the current CXL driver. The missing AER and RAS error logging is needed for communicating driver error details to userspace. Update the driver to include PCIe AER and CXL RAS error logging. Add RCH downstream port error handling into the existing RCiEP handler. The downstream port error handler is added to the RCiEP error handler because the downstream port is implemented in a RCRB, is not PCI enumerable, and as a result is not directly accessible to the PCI AER root port driver. The AER root port driver calls the RCiEP handler for handling RCD errors and RCH downstream port protocol errors. Update existing RCiEP correctable and uncorrectable handlers to also call the RCH handler. The RCH handler will read the RCH AER registers, check for error severity, and if an error exists will log using an existing kernel AER trace routine. The RCH handler will also log downstream port RAS errors if they exist. Co-developed-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-16-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/pci: Map RCH downstream AER registers for logging protocol errors  (Terry Bowman)
The restricted CXL host (RCH) error handler will log protocol errors using AER and RAS status registers. The AER and RAS registers need to be virtually memory mapped before enabling interrupts. Create the initializer function devm_cxl_setup_parent_dport() for this when the endpoint is connected with the dport. The initialization sets up the RCH RAS and AER mappings. Add 'struct cxl_regs' to 'struct cxl_dport' for saving a pointer to the RCH downstream port's AER and RAS registers. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Co-developed-by: Robert Richter <rrichter@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-15-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/pci: Update CXL error logging to use RAS register address  (Terry Bowman)
The CXL error handler currently only logs endpoint RAS status. The CXL topology includes several components providing RAS details to be logged during error handling.[1] Update the current handler's RAS logging to use a RAS register address. Also, update the error handler function names to be consistent with correctable and uncorrectable RAS. This will allow for adding support to log other CXL component's RAS details in the future. [1] CXL3.0 Table 8-22 CXL_Capability_ID Assignment Co-developed-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-14-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  PCI/AER: Refactor cper_print_aer() for use by CXL driver module  (Terry Bowman)
The CXL driver plans to use cper_print_aer() for logging restricted CXL host (RCH) AER errors. cper_print_aer() is not currently exported and therefore not usable by the CXL drivers built as loadable modules. Export the cper_print_aer() function. Use the EXPORT_SYMBOL_NS_GPL() variant to restrict the export to CXL drivers. The CONFIG_ACPI_APEI_PCIEAER kernel config is currently used to enable cper_print_aer(). cper_print_aer() logs the AER registers and is useful in PCIE AER logging outside of APEI. Remove the CONFIG_ACPI_APEI_PCIEAER dependency to enable cper_print_aer(). The cper_print_aer() function name implies CPER specific use but is useful in non-CPER cases as well. Rename cper_print_aer() to pci_print_aer(). Also, update cxl_core to import CXL namespace imports. Co-developed-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Cc: Mahesh J Salgaonkar <mahesh@linux.ibm.com> Cc: Oliver O'Halloran <oohall@gmail.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: linux-pci@vger.kernel.org Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-13-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
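A sketch of the export/import shape described above; the CXL namespace follows the commit text, while the prototype is an approximation rather than the exact upstream signature.

/* exporter side, drivers/pci/pcie/aer.c */
void pci_print_aer(struct pci_dev *dev, int aer_severity,
                   struct aer_capability_regs *aer)
{
        /* ... dump the AER capability registers ... */
}
EXPORT_SYMBOL_NS_GPL(pci_print_aer, CXL);

/* consumer side, e.g. the cxl_core module */
MODULE_IMPORT_NS(CXL);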
2023-10-27  cxl/pci: Add RCH downstream port AER register discovery  (Robert Richter)
Restricted CXL host (RCH) downstream port AER information is not currently logged while in the error state. One problem preventing the error logging is the AER and RAS registers are not accessible. The CXL driver requires changes to find RCH downstream port AER and RAS registers for purpose of error logging. RCH downstream ports are not enumerated during a PCI bus scan and are instead discovered using system firmware, ACPI in this case.[1] The downstream port is implemented as a Root Complex Register Block (RCRB). The RCRB is a 4k memory block containing PCIe registers based on the PCIe root port.[2] The RCRB includes AER extended capability registers used for reporting errors. Note, the RCH's AER Capability is located in the RCRB memory space instead of PCI configuration space, thus its register access is different. Existing kernel PCIe AER functions can not be used to manage the downstream port AER capabilities and RAS registers because the port was not enumerated during PCI scan and the registers are not PCI config accessible. Discover RCH downstream port AER extended capability registers. Use MMIO accesses to search for extended AER capability in RCRB register space. [1] CXL 3.0 Spec, 9.11.2 - System Firmware View of CXL 1.1 Hierarchy [2] CXL 3.0 Spec, 8.2.1.1 - RCH Downstream Port RCRB Co-developed-by: Robert Richter <rrichter@amd.com> Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-12-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
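A hedged sketch of how such an extended-capability walk over the mapped RCRB might look, using MMIO reads instead of the PCI config accessors; the helper name and error handling are illustrative, not the driver's actual code.

#include <linux/io.h>
#include <linux/pci.h>

static int rcrb_find_aer_extcap(void __iomem *rcrb)
{
        int offset = PCI_CFG_SPACE_SIZE;        /* extended capabilities start at 0x100 */
        u32 header;

        do {
                header = readl(rcrb + offset);
                if (PCI_EXT_CAP_ID(header) == PCI_EXT_CAP_ID_ERR)
                        return offset;          /* AER capability found */
                offset = PCI_EXT_CAP_NEXT(header);
        } while (offset);

        return -ENOENT;                         /* no AER capability in this RCRB */
}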
2023-10-27  cxl/port: Remove Component Register base address from struct cxl_port  (Robert Richter)
The Component Register base address @component_reg_phys is no longer used after the rework of the Component Register setup which now uses struct member @reg_map instead. Remove the base address. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-10-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/pci: Remove Component Register base address from struct cxl_dev_state  (Robert Richter)
The Component Register base address @component_reg_phys is no longer used after the rework of the Component Register setup which now uses struct member @reg_map instead. Remove the base address. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-9-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/hdm: Use stored Component Register mappings to map HDM decoder capability  (Robert Richter)
Now that the Component Register mappings are stored, use them to enable and map the HDM decoder capabilities. The Component Registers do not need to be probed again for this, remove probing code. The HDM capability applies to Endpoints, USPs and VH Host Bridges. The Endpoint's component register mappings are located in the cxlds; for the other ports they are in the port's structure. Duplicate the cxlds->reg_map in port->reg_map for endpoint ports. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> [rework to drop cxl_port_get_comp_map()] Link: https://lore.kernel.org/r/20231018171713.1883517-8-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/pci: Store the endpoint's Component Register mappings in struct cxl_dev_state  (Robert Richter)
Same as for ports and dports, also store the endpoint's Component Register mappings, use struct cxl_dev_state for that. Keep the Component Register base address @component_reg_phys a bit to not break functionality. It will be removed after the transition in a later patch. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/20231018171713.1883517-7-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/port: Pre-initialize component register mappings  (Robert Richter)
The component registers of a component may not exist and cxl_setup_comp_regs() will fail for that reason. In another case, Software may not use and set those registers up. cxl_setup_comp_regs() is then called with a base address of CXL_RESOURCE_NONE. Both are valid cases, but the function returns without initializing the register map. Now, a missing component register block is not necessarily a reason to fail (feature is optional or its existence checked later). Change cxl_setup_comp_regs() to also use components with the component register block missing. Thus, always initialize struct cxl_register_map with valid values, set @dev and make @resource CXL_RESOURCE_NONE. The change is in preparation of follow-on patches. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-6-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/port: Rename @comp_map to @reg_map in struct cxl_register_map  (Robert Richter)
Name the field @reg_map, because @reg_map->host will be used for mapping operations beyond component registers (i.e. AER registers). This is valid for all occurrences of @comp_map. Change them all. Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-5-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/port: Fix @host confusion in cxl_dport_setup_regs()  (Dan Williams)
commit 5d2ffbe4b81a ("cxl/port: Store the downstream port's Component Register mappings in struct cxl_dport") ...moved the dport component registers from a raw component_reg_phys passed in at dport instantiation time to a 'struct cxl_register_map' populated with both the component register data *and* the "host" device for mapping operations. While typical CXL switch dports are mapped by their associated 'struct cxl_port', an RCH host bridge dport registered by cxl_acpi needs to wait until the cxl_mem driver makes the attachment to map the registers. This is because there are no intervening 'struct cxl_port' instances between the root cxl_port and the endpoint port in an RCH topology. For now just mark the host as NULL in the RCH dport case until code that needs to map the dport registers arrives. This patch is not flagged for -stable since nothing in the current driver uses the dport->comp_map. Now, I am slightly uneasy that cxl_setup_comp_regs() sets map->host to a wrong value and then cxl_dport_setup_regs() fixes it up, but the alternatives I came up with are more messy. For example, adding an @logdev to 'struct cxl_register_map' that the dev_printk()s can fall back to when @host is NULL. I settled on "post-fixup+comment" since it is only RCH dports that have this special case where register probing is split between a host-bridge RCRB lookup and when cxl_mem_probe() does the association of the cxl_memdev and endpoint port. [moved rename of @comp_map to @reg_map into next patch] Fixes: 5d2ffbe4b81a ("cxl/port: Store the downstream port's Component Register mappings in struct cxl_dport") Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-4-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  cxl/core/regs: Rename @dev to @host in struct cxl_register_map  (Robert Richter)
The primary role of @dev is to host the mappings for devm operations. @dev is too ambiguous as a name. I.e. when does @dev refer to the 'struct device *' instance that the registers belong, and when does @dev refer to the 'struct device *' instance hosting the mapping for devm operations? Clarify the role of @dev in cxl_register_map by renaming it to @host. Also, rename local variables to 'host' where map->host is used. Signed-off-by: Terry Bowman <terry.bowman@amd.com> Signed-off-by: Robert Richter <rrichter@amd.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20231018171713.1883517-3-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-28  platform/chrome: cros_ec_lpc: Separate host command and irq disable  (Lalith Rajendran)
Both the cros host command and irq disable were moved to the suspend prepare stage from late suspend recently. This is causing EC to report MKBP event timeouts during suspend stress testing. When the MKBP event timeouts happen during suspend, subsequent wakeup of AP by EC using MKBP doesn't happen properly. Move the irq disabling part back to the late suspend stage, following the suspend kernel documentation's general suggestion to do irq disable as late as possible. Fixes: 4b9abbc132b8 ("platform/chrome: cros_ec_lpc: Move host command to prepare/complete") Signed-off-by: Lalith Rajendran <lalithkraj@chromium.org> Link: https://lore.kernel.org/r/20231027160221.v4.1.I1725c3ed27eb7cd9836904e49e8bfa9fb0200a97@changeid Signed-off-by: Tzung-Bi Shih <tzungbi@kernel.org>
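A rough sketch of the resulting split between the prepare and late-suspend phases; the callback bodies are placeholders, only the dev_pm_ops wiring is the point here.

#include <linux/device.h>
#include <linux/pm.h>

static int cros_ec_lpc_prepare(struct device *dev)
{
        /* quiesce host commands early, as before */
        return 0;
}

static int cros_ec_lpc_suspend_late(struct device *dev)
{
        /* disable the EC interrupt as late as possible */
        return 0;
}

static const struct dev_pm_ops cros_ec_lpc_pm_ops = {
        .prepare = cros_ec_lpc_prepare,
        SET_LATE_SYSTEM_SLEEP_PM_OPS(cros_ec_lpc_suspend_late, NULL)
};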
2023-10-27  cxl/port: Fix delete_endpoint() vs parent unregistration race  (Dan Williams)
The CXL subsystem, at cxl_mem ->probe() time, establishes a lineage of ports (struct cxl_port objects) between an endpoint and the root of a CXL topology. Each port including the endpoint port is attached to the cxl_port driver. Given that setup, it follows that when either any port in that lineage goes through a cxl_port ->remove() event, or the memdev goes through a cxl_mem ->remove() event. The hierarchy below the removed port, or the entire hierarchy if the memdev is removed needs to come down. The delete_endpoint() callback is careful to check whether it is being called to tear down the hierarchy, or if it is only being called to teardown the memdev because an ancestor port is going through ->remove(). That care needs to take the device_lock() of the endpoint's parent. Which requires 2 bugs to be fixed: 1/ A reference on the parent is needed to prevent use-after-free scenarios like this signature: BUG: spinlock bad magic on CPU#0, kworker/u56:0/11 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS edk2-20230524-3.fc38 05/24/2023 Workqueue: cxl_port detach_memdev [cxl_core] RIP: 0010:spin_bug+0x65/0xa0 Call Trace: do_raw_spin_lock+0x69/0xa0 __mutex_lock+0x695/0xb80 delete_endpoint+0xad/0x150 [cxl_core] devres_release_all+0xb8/0x110 device_unbind_cleanup+0xe/0x70 device_release_driver_internal+0x1d2/0x210 detach_memdev+0x15/0x20 [cxl_core] process_one_work+0x1e3/0x4c0 worker_thread+0x1dd/0x3d0 2/ In the case of RCH topologies, the parent device that needs to be locked is not always @port->dev as returned by cxl_mem_find_port(), use endpoint->dev.parent instead. Fixes: 8dd2bc0f8e02 ("cxl/mem: Add the cxl_mem driver") Cc: <stable@vger.kernel.org> Reported-by: Robert Richter <rrichter@amd.com> Closes: http://lore.kernel.org/r/20231018171713.1883517-2-rrichter@amd.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2023-10-27  Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux  (Linus Torvalds)
Pull clk fixes from Stephen Boyd:
 "Three fixes, one for the clk framework and two for clk drivers:
  - Avoid an oops in possible_parent_show() by checking for no parent properly when a DT index based lookup is used
  - Handle errors returned from divider_ro_round_rate() in clk_stm32_composite_determine_rate()
  - Fix clk_ops::determine_rate() implementation of socfpga's gateclk_ops that was ruining uart output because the divider was forgotten about"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: stm32: Fix a signedness issue in clk_stm32_composite_determine_rate()
  clk: Sanitize possible_parent_show to Handle Return Value of of_clk_get_parent_name
  clk: socfpga: gate: Account for the divider in determine_rate
2023-10-27  clk: Fix clk gate kunit test on big-endian CPUs  (Stephen Boyd)
The clk gate kunit test checks that the implementation of the basic clk gate reads and writes the proper bits in an MMIO register. The implementation of the basic clk gate type uses writel() and readl() which operate on little-endian registers. This test fails on big-endian CPUs because the clk gate implementation writes to 'fake_reg' with writel(), which converts the value to be written to little-endian before storing the value in the fake register. When the test checks the bits in the fake register on a big-endian machine it falsely assumes the format of the register is also big-endian, when it is really always little-endian. Suffice to say things don't work very well. Mark 'fake_reg' as __le32 and push through endian accessor fixes wherever the value is inspected to make this test endian agnostic. There's a CLK_GATE_BIG_ENDIAN flag for big-endian MMIO devices, which this test isn't using. A follow-up patch will test with and without that flag. Reported-by: Boqun Feng <boqun.feng@gmail.com> Closes: https://lore.kernel.org/r/ZTLH5o0GlFBYsAHq@boqun-archlinux Tested-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Stephen Boyd <sboyd@kernel.org> Link: https://lore.kernel.org/r/20231027225821.95833-1-sboyd@kernel.org
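A hedged sketch of the endian-agnostic check described here: the fake register is declared __le32 (matching what writel() stores) and converted with le32_to_cpu() before asserting. Names and the asserted bit are illustrative.

#include <kunit/test.h>
#include <linux/bits.h>
#include <linux/io.h>

static __le32 fake_reg;         /* backing store for the gate's "MMIO" register */

static void clk_gate_endian_sketch(struct kunit *test)
{
        writel(BIT(3), (void __iomem *)&fake_reg);      /* what the gate implementation does */

        /* compare in CPU endianness so the check passes on BE and LE alike */
        KUNIT_EXPECT_EQ(test, (u32)BIT(3), le32_to_cpu(fake_reg));
}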
2023-10-27  Merge tag 'ata-6.6-final' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata  (Linus Torvalds)
Pull ATA fix from Damien Le Moal:
 "A single patch to fix a regression introduced by the recent suspend/resume fixes. The regression is that ATA disks are not stopped on system shutdown, which is not recommended and increases the disks SMART counters for unclean power off events. This patch fixes this by refining the recent rework of the scsi device manage_xxx flags"
* tag 'ata-6.6-final' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata:
  scsi: sd: Introduce manage_shutdown device flag
2023-10-27  Merge tag 'platform-drivers-x86-v6.6-6' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86  (Linus Torvalds)
Pull x86 platform driver fix from Hans de Goede:
 "A single patch to extend the AMD PMC driver DMI quirk list for laptops which need special handling to avoid NVME s2idle suspend/resume errors"
* tag 'platform-drivers-x86-v6.6-6' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
  platform/x86: Add s2idle quirk for more Lenovo laptops
2023-10-27  Input: cyttsp5 - add handling for vddio regulator  (Lin, Meng-Bo)
The Cypress touchscreen controllers are often used with external pull-up for the interrupt line and the I2C lines, so we might need to enable a regulator to bring the lines into usable state. Otherwise, this might cause spurious interrupts and reading from I2C will fail. Implement support for a "vddio-supply" that is enabled by the cyttsp5 driver so that the regulator gets enabled when needed. Signed-off-by: Lin, Meng-Bo <linmengbo0689@protonmail.com> Acked-by: Alistair Francis <alistair@alistair23.me> Link: https://lore.kernel.org/r/20221117190507.87535-3-linmengbo0689@protonmail.com Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
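A sketch of the usual vddio-supply pattern using the common regulator consumer API; the function name, structure and error paths of the real cyttsp5 probe will differ.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int cyttsp5_power_on(struct device *dev)
{
        struct regulator *vddio;
        int error;

        vddio = devm_regulator_get(dev, "vddio");       /* matches "vddio-supply" in DT */
        if (IS_ERR(vddio))
                return PTR_ERR(vddio);

        /* power the pull-ups before the first interrupt can fire */
        error = regulator_enable(vddio);
        if (error)
                return error;

        return 0;
}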
2023-10-27  net: pcs: xpcs: Add 2500BASE-X case in get state for XPCS drivers  (Raju Lakkaraju)
Add a DW_2500BASEX case in xpcs_get_state() to update speed, duplex and pause. Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com> Reviewed-by: Serge Semin <fancer.lancer@gmail.com> Link: https://lore.kernel.org/r/20231027044306.291250-1-Raju.Lakkaraju@microchip.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
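A rough sketch of what filling in the 2500BASE-X state can look like in a phylink get_state path; the real xpcs code also derives link status from hardware registers, so only the speed/duplex/pause part is shown.

#include <linux/phylink.h>

static void xpcs_get_state_2500basex_sketch(struct phylink_link_state *state)
{
        state->speed = SPEED_2500;
        state->duplex = DUPLEX_FULL;
        state->pause |= MLO_PAUSE_TX | MLO_PAUSE_RX;
}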
2023-10-27  net: mana: Use xdp_set_features_flag instead of direct assignment  (Konstantin Taranov)
This patch uses a helper function for assignment of xdp_features. This change simplifies backports. Signed-off-by: Konstantin Taranov <kotaranov@microsoft.com> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Link: https://lore.kernel.org/r/1698430011-21562-1-git-send-email-haiyangz@microsoft.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
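A sketch of the substitution, assuming the generic helper from net/xdp.h; the specific feature bits mana advertises are illustrative.

#include <linux/netdevice.h>
#include <net/xdp.h>

static void mana_set_xdp_features_sketch(struct net_device *ndev)
{
        /* before: ndev->xdp_features = NETDEV_XDP_ACT_BASIC | ...; */
        xdp_set_features_flag(ndev, NETDEV_XDP_ACT_BASIC |
                                    NETDEV_XDP_ACT_REDIRECT |
                                    NETDEV_XDP_ACT_NDO_XMIT);
}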
2023-10-27  vxlan: Cleanup IFLA_VXLAN_PORT_RANGE entry in vxlan_get_size()  (Benjamin Poirier)
This patch is basically a followup to commit 4e4b1798cc90 ("vxlan: Add missing entries to vxlan_get_size()"). All of the attributes in vxlan_get_size() appear in the same order that they are filled in vxlan_fill_info() except for IFLA_VXLAN_PORT_RANGE. For consistency, move that entry to match its order and add a comment, like for all other entries. Signed-off-by: Benjamin Poirier <bpoirier@nvidia.com> Link: https://lore.kernel.org/r/20231027184410.236671-1-bpoirier@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  iavf: delete the iavf client interface  (Michal Schmidt)
The iavf client interface was added in 2017 by commit ed0e894de7c1 ("i40evf: add client interface"), but there have never been any in-tree callers. It's not useful for future development either. The Intel out-of-tree iavf and irdma drivers instead use an auxiliary bus, which is a better solution. Remove the iavf client interface code. Also gone are the client_task work and the client_lock mutex. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-9-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  iavf: add a common function for undoing the interrupt scheme  (Michal Schmidt)
Add a new function iavf_free_interrupt_scheme that does the inverse of iavf_init_interrupt_scheme. Symmetry is nice. And there will be three callers already. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-8-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  iavf: use unregister_netdev  (Michal Schmidt)
Use unregister_netdev, which takes rtnl_lock for us. We don't have to check the reg_state under rtnl_lock. There's nothing to race with. We have just cancelled the finish_config work. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-7-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
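A sketch of the simplification, with the old shape kept as a comment; the exact condition the driver used to check is paraphrased.

#include <linux/netdevice.h>

static void iavf_unregister_sketch(struct net_device *netdev)
{
        /*
         * before:
         *      rtnl_lock();
         *      if (netdev->reg_state == NETREG_REGISTERED)
         *              unregister_netdevice(netdev);
         *      rtnl_unlock();
         */
        unregister_netdev(netdev);      /* takes rtnl_lock internally */
}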
2023-10-27  iavf: rely on netdev's own registered state  (Michal Schmidt)
The information whether a netdev has been registered is already present in the netdev itself. There's no need for a driver flag with the same meaning. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-6-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  iavf: fix the waiting time for initial reset  (Michal Schmidt)
Every time I create VFs on ice, I receive at least one "Device is still in reset (-16), retrying" message per VF. It recovers fine, but typical use cases should not trigger scary-looking messages. The waiting for reset is too short. It makes no sense to check every 10 microseconds. Typical reset waiting times are at least tens of milliseconds and can be several seconds. I suspect the polling interval was meant to be 10 milliseconds all along. IAVF_RESET_WAIT_COMPLETE_COUNT is defined as 2000, so the total waiting time could be over 20 seconds. I have seen resets take 5 seconds (with 128 VFs on ice). The added benefit of not triggering the "Device is still in reset" path is that we avoid going through the __IAVF_INIT_FAILED state, which would take a full second before retrying. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-5-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  iavf: in iavf_down, don't queue watchdog_task if comms failed  (Michal Schmidt)
The reason for queueing watchdog_task is to have it process the aq_required flags that are being set here. If comms failed, there's nothing to do, so return early. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-4-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  iavf: simplify mutex_trylock+sleep loops  (Michal Schmidt)
This pattern appears in two places in the iavf source code: while (!mutex_trylock(...)) usleep_range(...); That's just mutex_lock with extra steps. The pattern is a leftover from when iavf used bit flags instead of mutexes for locking. Commit 5ac49f3c2702 ("iavf: use mutexes for locking of critical sections") replaced test_and_set_bit with !mutex_trylock, preserving the pattern. Simplify it to mutex_lock. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-3-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
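The pattern and its replacement, sketched side by side; the lock and the sleep interval are illustrative.

#include <linux/delay.h>
#include <linux/mutex.h>

/* before: a busy-wait that is just mutex_lock() with extra steps */
static void iavf_lock_old_sketch(struct mutex *lock)
{
        while (!mutex_trylock(lock))
                usleep_range(500, 1000);
}

/* after: let the mutex do the sleeping */
static void iavf_lock_new_sketch(struct mutex *lock)
{
        mutex_lock(lock);
}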
2023-10-27  iavf: fix comments about old bit locks  (Michal Schmidt)
Bit lock __IAVF_IN_CRITICAL_TASK does not exist anymore since commit 5ac49f3c2702 ("iavf: use mutexes for locking of critical sections"). Adjust the comments accordingly. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20231027175941.1340255-2-jacob.e.keller@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-10-27  ipvlan: properly track tx_errors  (Eric Dumazet)
Both ipvlan_process_v4_outbound() and ipvlan_process_v6_outbound() increment dev->stats.tx_errors in case of errors. Unfortunately there are two issues : 1) ipvlan_get_stats64() does not propagate dev->stats.tx_errors to user. 2) Increments are not atomic. KCSAN would complain eventually. Use DEV_STATS_INC() to not miss an update, and change ipvlan_get_stats64() to copy the value back to user. Fixes: 2ad7bf363841 ("ipvlan: Initial check-in of the IPVLAN driver.") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Mahesh Bandewar <maheshb@google.com> Link: https://lore.kernel.org/r/20231026131446.3933175-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
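A hedged sketch of the fix's shape: bump the counter with the atomic DEV_STATS helpers and report it from the stats callback; the surrounding ipvlan code is not reproduced here.

#include <linux/netdevice.h>

static void ipvlan_count_tx_error_sketch(struct net_device *dev)
{
        DEV_STATS_INC(dev, tx_errors);          /* atomic, safe from any context */
}

static void ipvlan_get_stats64_sketch(struct net_device *dev,
                                      struct rtnl_link_stats64 *s)
{
        s->tx_errors = DEV_STATS_READ(dev, tx_errors);  /* propagate to user space */
}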
2023-10-27  netdevsim: Block until all devices are released  (Ido Schimmel)
Like other buses, devices on the netdevsim bus have a release callback that is invoked when the reference count of the device drops to zero. However, unlike other buses such as PCI, the release callback is not necessarily built into the kernel, as netdevsim can be built as a module. The above is problematic as nothing prevents the module from being unloaded before the release callback has been invoked, which can happen asynchronously. One such example can be found in commit a380687200e0 ("devlink: take device reference for devlink object") where devlink calls put_device() from an RCU callback. The issue is not theoretical and the reproducer in [1] can reliably crash the kernel. The conclusion of this discussion was that the issue should be solved in netdevsim, which is what this patch is trying to do. Add a reference count that is increased when a device is added to the bus and decreased when a device is released. Signal a completion when the reference count drops to zero and wait for the completion when unloading the module so that the module will not be unloaded before all the devices were released. The reference count is initialized to one so that completion is only signaled when unloading the module. With this patch, the reproducer in [1] no longer crashes the kernel. [1] https://lore.kernel.org/netdev/20230619125015.1541143-2-idosch@nvidia.com/ Fixes: a380687200e0 ("devlink: take device reference for devlink object") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Link: https://lore.kernel.org/r/20231026083343.890689-1-idosch@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
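A hedged sketch of the wait-for-release scheme the commit describes; identifiers are illustrative and the real netdevsim code is organized differently.

#include <linux/completion.h>
#include <linux/refcount.h>

static refcount_t nsim_dev_refs = REFCOUNT_INIT(1);     /* extra reference held by the module */
static DECLARE_COMPLETION(nsim_devs_released);

static void nsim_dev_hold_sketch(void)          /* device added to the bus */
{
        refcount_inc(&nsim_dev_refs);
}

static void nsim_dev_release_sketch(void)       /* device release callback */
{
        if (refcount_dec_and_test(&nsim_dev_refs))
                complete(&nsim_devs_released);
}

static void nsim_module_exit_sketch(void)       /* module_exit path */
{
        nsim_dev_release_sketch();              /* drop the initial reference */
        wait_for_completion(&nsim_devs_released);       /* block until every device is gone */
}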
2023-10-27  nfp: using napi_build_skb() to replace build_skb()  (Fei Qin)
The napi_build_skb() can reuse the skb in skb cache per CPU or can allocate skbs in bulk, which helps improve the performance. Signed-off-by: Fei Qin <fei.qin@corigine.com> Signed-off-by: Louis Peens <louis.peens@corigine.com> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Link: https://lore.kernel.org/r/20231026080058.22810-1-louis.peens@corigine.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
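The substitution, sketched in isolation; argument names are illustrative.

#include <linux/skbuff.h>

static struct sk_buff *nfp_rx_build_skb_sketch(void *frag, unsigned int truesize)
{
        /* before: return build_skb(frag, truesize); */
        return napi_build_skb(frag, truesize);  /* may reuse skbs from the per-CPU NAPI cache */
}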