path: root/drivers
Age          Commit message                                                          Author
2023-12-15  hwrng: stm32 - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
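The conversion pattern for each of these hwrng drivers looks roughly like the following sketch (the "foo" driver is hypothetical, not one of the converted drivers):

    static void foo_remove(struct platform_device *pdev)
    {
            /* release resources; no return value, so nothing can be "returned" on error */
    }

    static struct platform_driver foo_driver = {
            .driver = {
                    .name = "foo",
            },
            /* before: .remove = foo_remove, with foo_remove returning int (always 0) */
            .remove_new = foo_remove,
    };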
2023-12-15  hwrng: omap - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: npcm - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: n2 - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: mxc - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: ks-sa - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: ingenic - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: exynos - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Lukasz Stelmach <l.stelmach@samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: cctrng - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: atmel - Convert to platform remove callback returning void  (Uwe Kleine-König)
The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is ignored (apart from emitting a warning) and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new(), which already returns void. Eventually after all drivers are converted, .remove_new() will be renamed to .remove(). Trivially convert this driver from always returning zero in the remove callback to the void returning variant. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  hwrng: virtio - Remove usage of the deprecated ida_simple_xx() API  (Christophe JAILLET)
ida_alloc() and ida_free() should be preferred to the deprecated ida_simple_get() and ida_simple_remove(). This is less verbose. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
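The replacement is mechanical; roughly (the ida name is illustrative, not necessarily the one used in virtio-rng):

    /* before */
    index = ida_simple_get(&rng_index_ida, 0, 0, GFP_KERNEL);
    ida_simple_remove(&rng_index_ida, index);

    /* after */
    index = ida_alloc(&rng_index_ida, GFP_KERNEL);
    ida_free(&rng_index_ida, index);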
2023-12-15  crypto: hisilicon/sec2 - optimize the error return process  (Chenghai Huang)
Add printing of an error message and optimize the handling of the return value. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: hisilicon/qm - delete a dbg function  (Chenghai Huang)
Delete a dbg function because it carries a risk of address leakage. In addition, this function was only used for debugging in the early stage and is not required in the future. Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  dmaengine: idxd: Add support for device/wq defaults  (Tom Zanussi)
Add a load_device_defaults() function pointer to struct idxd_driver_data, which, if defined, will be called when an idxd device is probed and will allow the idxd device to be configured with default values. The load_device_defaults() function is passed an idxd device to work with to set specific device attributes. Also add a load_device_defaults() implementation for IAA devices; future patches would add default functions for other device types such as DSA. The way idxd device probing works, if the device configuration is valid at that point, e.g. at least one workqueue and engine is properly configured, then the device will be enabled and ready to go. The IAA implementation, idxd_load_iaa_device_defaults(), configures a single workqueue (wq0) for each device with the following default values: mode "dedicated", threshold 0, size = Total WQ Size from WQCAP, priority 10, type IDXD_WQT_KERNEL, group 0, name "iaa_crypto", driver_name "crypto". Note that this adds another configuration step for any users that want to configure their own devices/workqueues with something different, in that they'll first need to disable (in the case of IAA) wq0 and the device itself before they can set their own attributes and re-enable, since they've already been auto-enabled. Note also that in order for the new configuration to be applied to the deflate-iaa crypto algorithm, the iaa_crypto module needs to unregister the old version, which is accomplished by removing the iaa_crypto module, and re-registering it with the new configuration by reinserting the iaa_crypto module. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: iaa - Add IAA Compression Accelerator stats  (Tom Zanussi)
Add optional debugfs statistics support for the IAA Compression Accelerator. This is enabled by the kernel config item: CRYPTO_DEV_IAA_CRYPTO_STATS. When enabled, the IAA crypto driver will generate statistics which can be accessed at /sys/kernel/debug/iaa-crypto/. See Documentation/driver-api/crypto/iax/iax-crypto.rst for details. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: iaa - Add irq support for the crypto async interface  (Tom Zanussi)
The existing iaa crypto async support provides an implementation that satisfies the interface but does so in a synchronous manner - it fills and submits the IDXD descriptor and then waits for it to complete before returning. This isn't a problem at the moment, since all existing callers (e.g. zswap) wrap any asynchronous callees in a synchronous wrapper anyway. This change makes the iaa crypto async implementation truly asynchronous: it fills and submits the IDXD descriptor, then returns immediately with -EINPROGRESS. It also sets the descriptor's 'request completion irq' bit and sets up a callback with the IDXD driver which is called when the operation completes and the irq fires. The existing callers such as zswap use synchronous wrappers to deal with -EINPROGRESS and so work as expected without any changes. This mode can be enabled by writing 'async_irq' to the sync_mode iaa_crypto driver attribute: echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode Async mode without interrupts (caller must poll) can be enabled by writing 'async' to it: echo async > /sys/bus/dsa/drivers/crypto/sync_mode The default sync mode can be enabled by writing 'sync' to it: echo sync > /sys/bus/dsa/drivers/crypto/sync_mode The sync_mode value setting at the time the IAA algorithms are registered is captured in each algorithm's crypto_ctx and used for all compresses and decompresses when using a given algorithm. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
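The async_irq submission path presumably has roughly this shape (a sketch with simplified names, not code lifted from the driver):

    desc->flags |= IDXD_OP_FLAG_RCI;        /* request a completion interrupt */
    ret = idxd_submit_desc(wq, idxd_desc);
    if (ret)
            return ret;
    return -EINPROGRESS;                    /* the registered callback completes the crypto request */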
2023-12-15  crypto: iaa - Add support for deflate-iaa compression algorithm  (Tom Zanussi)
This patch registers the deflate-iaa deflate compression algorithm and hooks it up to the IAA hardware using the 'fixed' compression mode introduced in the previous patch. Because the IAA hardware has a 4k history-window limitation, only buffers <= 4k, or that have been compressed using a <= 4k history window, are technically compliant with the deflate spec, which allows for a window of up to 32k. Because of this limitation, the IAA fixed mode deflate algorithm is given its own algorithm name, 'deflate-iaa'. With this change, the deflate-iaa crypto algorithm is registered and operational, and compression and decompression operations are fully enabled following the successful binding of the first IAA workqueue to the iaa_crypto sub-driver. When there are no IAA workqueues bound to the driver, the IAA crypto algorithm can be unregistered by removing the module. A new iaa_crypto 'verify_compress' driver attribute is also added, allowing the user to toggle compression verification. If set, each compress will be internally decompressed and the contents verified, returning error codes if unsuccessful. This can be toggled with 0/1: echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress The default setting is '1' - verify all compresses. The verify_compress value setting at the time the algorithm is registered is captured in the algorithm's crypto_ctx and used for all compresses when using the algorithm. [ Based on work originally by George Powley, Jing Lin and Kyung Min Park ] Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: iaa - Add compression mode management along with fixed mode  (Tom Zanussi)
Define an in-kernel API for adding and removing compression modes, which can be used by kernel modules or other kernel code that implements IAA compression modes. Also add a separate file, iaa_crypto_comp_fixed.c, containing Huffman tables generated for the IAA 'fixed' compression mode. Future compression modes can be added in a similar fashion. One or more crypto compression algorithms will be created for each compression mode, each of which can be selected as the compression algorithm to be used by a particular facility. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: iaa - Add per-cpu workqueue table with rebalancing  (Tom Zanussi)
The iaa compression/decompression algorithms in later patches need a way to retrieve an appropriate IAA workqueue depending on how close the associated IAA device is to the current cpu. For this purpose, add a per-cpu array of workqueues such that an appropriate workqueue can be retrieved by simply accessing the per-cpu array. Whenever a new workqueue is bound to or unbound from the iaa_crypto driver, the available workqueues are 'rebalanced' such that work submitted from a particular CPU is given to the most appropriate workqueue available. There currently isn't any way for the user to tweak the way this is done internally - if necessary, knobs can be added later for that purpose. Current best practice is to configure and bind at least one workqueue for each IAA device, but as long as there is at least one workqueue configured and bound to any IAA device in the system, the iaa_crypto driver will work, albeit most likely not as efficiently. [ Based on work originally by George Powley, Jing Lin and Kyung Min Park ] Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
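Conceptually, the per-cpu lookup amounts to something like the following sketch (table and helper names here are hypothetical):

    static DEFINE_PER_CPU(struct idxd_wq *, iaa_local_wq);   /* refreshed whenever wqs (un)bind */

    static struct idxd_wq *iaa_wq_for_cpu(int cpu)
    {
            return per_cpu(iaa_local_wq, cpu);
    }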
2023-12-15  crypto: iaa - Add Intel IAA Compression Accelerator crypto driver core  (Tom Zanussi)
The Intel Analytics Accelerator (IAA) is a hardware accelerator that provides very high throughput compression/decompression compatible with the DEFLATE compression standard described in RFC 1951, which is the compression/decompression algorithm exported by this module. Users can select IAA compress/decompress acceleration by specifying one of the deflate-iaa* algorithms as the compression algorithm to use by whatever facility allows asynchronous compression algorithms to be selected. For example, zswap can select the IAA fixed deflate algorithm 'deflate-iaa' via: # echo deflate-iaa > /sys/module/zswap/parameters/compressor This patch adds iaa_crypto as an idxd sub-driver and tracks iaa devices and workqueues as they are probed or removed. [ Based on work originally by George Powley, Jing Lin and Kyung Min Park ] Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  dmaengine: idxd: add callback support for iaa crypto  (Tom Zanussi)
Create a lightweight callback interface to allow idxd sub-drivers to be notified when work sent to idxd wqs has completed. For a sub-driver to be notified of work completion, it needs to:

- Set the descriptor's 'Request Completion Interrupt' (IDXD_OP_FLAG_RCI)

- Set the sub-driver desc_complete() callback when registering the sub-driver, e.g.:

      struct idxd_device_driver my_drv = {
              .probe = my_probe,
              .desc_complete = my_complete,
      };

- Set the sub-driver-specific context in the sub-driver's descriptor, e.g.:

      idxd_desc->crypto.req = req;
      idxd_desc->crypto.tfm = tfm;
      idxd_desc->crypto.src_addr = src_addr;
      idxd_desc->crypto.dst_addr = dst_addr;

When the work completes and the completion irq fires, idxd will invoke the desc_complete() callback with pointers to the descriptor, context, and completion_type. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  dmaengine: idxd: Add wq private data accessors  (Tom Zanussi)
Add the accessors idxd_wq_set_private() and idxd_wq_get_private() allowing users to set and retrieve a private void * associated with an idxd_wq. The private data is stored in the idxd_dev.conf_dev associated with each idxd_wq. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
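A plausible sketch of the accessors, assuming the pointer is stored as drvdata of the wq's conf_dev (the exact implementation may differ):

    static inline void idxd_wq_set_private(struct idxd_wq *wq, void *private)
    {
            dev_set_drvdata(wq_confdev(wq), private);
    }

    static inline void *idxd_wq_get_private(struct idxd_wq *wq)
    {
            return dev_get_drvdata(wq_confdev(wq));
    }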
2023-12-15  dmaengine: idxd: Export wq resource management functions  (Tom Zanussi)
To allow idxd sub-drivers to access the wq resource management functions, export them. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  dmaengine: idxd: Export descriptor management functions  (Tom Zanussi)
To allow idxd sub-drivers to access the descriptor management functions, export them. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  dmaengine: idxd: Rename drv_enable/disable_wq to idxd_drv_enable/disable_wq, and export  (Tom Zanussi)
Rename drv_enable_wq and drv_disable_wq to idxd_drv_enable_wq and idxd_drv_disable_wq respectively, so that they're no longer too generic to be exported. This also matches existing naming within the idxd driver. And to allow idxd sub-drivers to enable and disable wqs, export them. Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  dmaengine: idxd: add external module driver support for dsa_bus_type  (Dave Jiang)
Add support to allow an external driver to be registered to the dsa_bus_type and also auto-loaded. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Acked-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: starfive - Fix dev_err_probe return error  (Jia Jie Ho)
Currently dev_err_probe() will return 0 instead of a proper error code if the driver fails to get the irq number. Fix the return code. Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-12-15  crypto: starfive - Remove unneeded NULL checks  (Jia Jie Ho)
A NULL check before kfree_sensitive() is not needed, since kfree_sensitive() safely handles NULL pointers. Signed-off-by: Jia Jie Ho <jiajie.ho@starfivetech.com> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202311301702.LxswfETY-lkp@intel.com/ Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
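The pattern being removed, sketched (the ctx->key field is illustrative, not necessarily the starfive member in question):

    /* before */
    if (ctx->key)
            kfree_sensitive(ctx->key);

    /* after: kfree_sensitive(), like kfree(), is a no-op for NULL */
    kfree_sensitive(ctx->key);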
2023-12-15  ACPI: utils: Refine acpi_handle_list_equal() slightly  (Rafael J. Wysocki)
It is somewhat better to use the size of the first array element for computing the size of the entire array than to rely on knowledge of the array element's data type definition, and the former is also consistent with the array allocation in acpi_evaluate_reference(), so modify the code accordingly. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
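Sketched, the change amounts to deriving the element size from the array itself (illustrative, not the verbatim hunk):

    /* before: depends on knowing the element type */
    size = list1->count * sizeof(acpi_handle);

    /* after: derives the size from the first array element */
    size = list1->count * sizeof(*list1->handles);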
2023-12-15  ACPI: utils: Return bool from acpi_evaluate_reference()  (Rafael J. Wysocki)
There are only 4 users of acpi_evaluate_reference() and none of them actually cares about the reason why it fails. All of them are only interested in whether or not it is successful, so it can return a bool value indicating that. Modify acpi_evaluate_reference() as per the observation above and update its callers accordingly so as to get rid of useless code and local variables. The observable behavior of the kernel is not expected to change after this modification of the code. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
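An illustrative caller change under the new bool return (the _DEP example is an assumption about one of the call sites):

    /* before */
    status = acpi_evaluate_reference(handle, "_DEP", NULL, &dep_devices);
    if (ACPI_FAILURE(status))
            return;

    /* after */
    if (!acpi_evaluate_reference(handle, "_DEP", NULL, &dep_devices))
            return;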
2023-12-15  ACPI: utils: Rearrange in acpi_evaluate_reference()  (Rafael J. Wysocki)
The code in acpi_evaluate_reference() can be improved in some ways without changing its observable behavior. Among other things:

* None of the local variables in that function except for buffer needs to be initialized.
* The element local variable is only used in the for () loop block, so it can be defined there.
* Multiple checks can be combined.
* Code duplication related to error handling can be eliminated.
* Redundant inner parens can be dropped.

Modify the function as per the above. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-15  net: phy: add Rust Asix PHY driver  (FUJITA Tomonori)
This is the Rust implementation of drivers/net/phy/ax88796b.c. The features are equivalent. You can choose the C or Rust version via kernel configuration. Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com> Reviewed-by: Trevor Gross <tmgross@umich.edu> Reviewed-by: Benno Lossin <benno.lossin@proton.me> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-12-15  rust: core abstractions for network PHY drivers  (FUJITA Tomonori)
This patch adds abstractions to implement network PHY drivers: the driver registration and bindings for some of the callback functions in struct phy_driver and many genphy_ functions. This feature is enabled with CONFIG_RUST_PHYLIB_ABSTRACTIONS=y. This patch enables the unstable const_maybe_uninit_zeroed feature for the kernel crate to enable unsafe code to handle a constant value with uninitialized data. With the feature, the abstractions can easily initialize a phy_driver structure with zero, instead of initializing all the members by hand. It's supposed to be stable in the not so distant future. Link: https://github.com/rust-lang/rust/pull/116218 Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2023-12-15  drm/i915: Use kmap_local_page() in gem/i915_gem_execbuffer.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the calls from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults and preemption disables. In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by kmap_atomic() in eb_relocate_entry(), and is unmapped by kunmap_atomic() in reloc_cache_reset(). And this mapping/unmapping occurs in two places: one is in eb_relocate_vma(), and another is in eb_relocate_vma_slow(). The function eb_relocate_vma() or eb_relocate_vma_slow() doesn't need to disable pagefaults and preemption during the above mapping/ unmapping. So it can simply use kmap_local_page() / kunmap_local() that can instead do the mapping / unmapping regardless of the context. Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local(). [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-10-zhao1.liu@linux.intel.com
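The general shape of the conversion, as a sketch (not the exact i915 hunk):

    /* before */
    vaddr = kmap_atomic(page);
    /* ... access vaddr ... */
    kunmap_atomic(vaddr);

    /* after: page faults and preemption remain possible while mapped */
    vaddr = kmap_local_page(page);
    /* ... access vaddr ... */
    kunmap_local(vaddr);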
2023-12-15  drm/i915: Use kmap_local_page() in i915_cmd_parser.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults and preemption disables. There're 2 reasons why function copy_batch() doesn't need to disable pagefaults and preemption for mapping: 1. The flush operation is safe. In i915_cmd_parser.c, copy_batch() calls drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush operation is global. 2. Any context switch caused by preemption or page faults (page fault may cause sleep) doesn't affect the validity of local mapping. Therefore, copy_batch() is a function where the use of kmap_local_page() in place of kmap_atomic() is correctly suited. Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local(). [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Dave Hansen <dave.hansen@intel.com> Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-9-zhao1.liu@linux.intel.com
2023-12-15  drm/i915: Use memcpy_from_page() in gt/uc/intel_uc_fw.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults or preemption disables. In drm/i915/gt/uc/intel_us_fw.c, the function intel_uc_fw_copy_rsa() just use the mapping to do memory copy so it doesn't need to disable pagefaults and preemption for mapping. Thus the local mapping without atomic context (not disable pagefaults / preemption) is enough. Therefore, intel_uc_fw_copy_rsa() is a function where the use of memcpy_from_page() with kmap_local_page() in place of memcpy() with kmap_atomic() is correctly suited. Convert the calls of memcpy() with kmap_atomic() / kunmap_atomic() to memcpy_from_page() which uses local mapping to copy. [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com/T/#u Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-8-zhao1.liu@linux.intel.com
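Sketch of the copy-path conversion (illustrative variables, not the exact hunk):

    /* before */
    vaddr = kmap_atomic(page);
    memcpy(dst, vaddr + offset, len);
    kunmap_atomic(vaddr);

    /* after: memcpy_from_page() maps with kmap_local_page() internally */
    memcpy_from_page(dst, page, offset, len);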
2023-12-15  drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_context.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption. With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults or preemption disables. In drm/i915/gem/selftests/i915_gem_context.c, functions cpu_fill() and cpu_check() mainly uses mapping to flush cache and check/assign the value. There're 2 reasons why cpu_fill() and cpu_check() don't need to disable pagefaults and preemption for mapping: 1. The flush operation is safe. cpu_fill() and cpu_check() call drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush operation is global. 2. Any context switch caused by preemption or page faults (page fault may cause sleep) doesn't affect the validity of local mapping. Therefore, cpu_fill() and cpu_check() are functions where the use of kmap_local_page() in place of kmap_atomic() is correctly suited. Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local(). [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Dave Hansen <dave.hansen@intel.com> Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-7-zhao1.liu@linux.intel.com
2023-12-15  drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_coherency.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration).. With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults or preemption disables. In drm/i915/gem/selftests/i915_gem_coherency.c, functions cpu_set() and cpu_get() mainly uses mapping to flush cache and assign the value. There're 2 reasons why cpu_set() and cpu_get() don't need to disable pagefaults and preemption for mapping: 1. The flush operation is safe. cpu_set() and cpu_get() call drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush operation is global. 2. Any context switch caused by preemption or page faults (page fault may cause sleep) doesn't affect the validity of local mapping. Therefore, cpu_set() and cpu_get() are functions where the use of kmap_local_page() in place of kmap_atomic() is correctly suited. Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local(). [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Dave Hansen <dave.hansen@intel.com> Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-6-zhao1.liu@linux.intel.com
2023-12-15  drm/i915: Use kmap_local_page() in gem/selftests/huge_pages.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults or preemption disables. In drm/i915/gem/selftests/huge_pages.c, function __cpu_check_shmem() mainly uses mapping to flush cache and check the value. There're 2 reasons why __cpu_check_shmem() doesn't need to disable pagefaults and preemption for mapping: 1. The flush operation is safe. Function __cpu_check_shmem() calls drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush operation is global. 2. Any context switch caused by preemption or page faults (page fault may cause sleep) doesn't affect the validity of local mapping. Therefore, __cpu_check_shmem() is a function where the use of kmap_local_page() in place of kmap_atomic() is correctly suited. Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local(). [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Dave Hansen <dave.hansen@intel.com> Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-5-zhao1.liu@linux.intel.com
2023-12-15  drm/i915: Use kmap_local_page() in gem/i915_gem_shmem.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1]. The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults or preemption disables. In drm/i915/gem/i915_gem_shmem.c, the function shmem_pwrite() need to disable pagefault to eliminate the potential recursion fault[2]. But here __copy_from_user_inatomic() doesn't need to disable preemption and local mapping is valid for sched in/out. So it can use kmap_local_page() / kunmap_local() with pagefault_disable() / pagefault_enable() to replace atomic mapping. [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com [2]: https://patchwork.freedesktop.org/patch/295840/ Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-4-zhao1.liu@linux.intel.com
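The resulting pattern, roughly (a sketch of the description above, not the exact shmem_pwrite() hunk): keep page faults disabled around the user copy, but map locally:

    vaddr = kmap_local_page(page);
    pagefault_disable();
    unwritten = __copy_from_user_inatomic(vaddr + offset, user_data, len);
    pagefault_enable();
    kunmap_local(vaddr);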
2023-12-15  drm/i915: Use memcpy_[from/to]_page() in gem/i915_gem_phys.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() + memcpy() to memcpy_[from/to]_page(), which use kmap_local_page() to build local mapping and then do memcpy(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults and preemption disables. In drm/i915/gem/i915_gem_phys.c, the functions i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys() don't need to disable pagefaults and preemption for mapping because of 2 reasons: 1. The flush operation is safe. In drm/i915/gem/i915_gem_object.c, i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys() calls drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush operation is global. 2. Any context switch caused by preemption or page faults (page fault may cause sleep) doesn't affect the validity of local mapping. Therefore, i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys() are two functions where the uses of local mappings in place of atomic mappings are correctly suited. Convert the calls of kmap_atomic() / kunmap_atomic() + memcpy() to memcpy_from_page() and memcpy_to_page(). [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Dave Hansen <dave.hansen@intel.com> Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-3-zhao1.liu@linux.intel.com
2023-12-15  drm/i915: Use kmap_local_page() in gem/i915_gem_object.c  (Zhao Liu)
The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], and this patch converts the call from kmap_atomic() to kmap_local_page(). The main difference between atomic and local mappings is that local mappings doesn't disable page faults or preemption (the preemption is disabled for !PREEMPT_RT case, otherwise it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effect of unnecessary page faults and preemption disables. There're 2 reasons why i915_gem_object_read_from_page_kmap() doesn't need to disable pagefaults and preemption for mapping: 1. The flush operation is safe. In drm/i915/gem/i915_gem_object.c, i915_gem_object_read_from_page_kmap() calls drm_clflush_virt_range() to use CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global on x86 and WBINVD is called on each cpu in drm_clflush_virt_range(), the flush operation is global. 2. Any context switch caused by preemption or page faults (page fault may cause sleep) doesn't affect the validity of local mapping. Therefore, i915_gem_object_read_from_page_kmap() is a function where the use of kmap_local_page() in place of kmap_atomic() is correctly suited. Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local(). And remove the redundant variable that stores the address of the mapped page since kunmap_local() can accept any pointer within the page. [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com Suggested-by: Dave Hansen <dave.hansen@intel.com> Suggested-by: Ira Weiny <ira.weiny@intel.com> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Fabio M. De Francesco <fmdefrancesco@gmail.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231203132947.2328805-2-zhao1.liu@linux.intel.com
2023-12-15  net: stmmac: don't create a MDIO bus if unnecessary  (Andrew Halaney)
Currently a MDIO bus is created if the devicetree description is either:

1. Not fixed-link
2. fixed-link but contains a MDIO bus as well

The "1" case above isn't always accurate. If there's a phy-handle, it could be referencing a phy on another MDIO controller's bus[1]. In this case, where the MDIO bus is not described at all, currently stmmac will make a MDIO bus and scan its address space to discover phys (of which there are none). This process takes time scanning a bus that is known to be empty, delaying time to complete probe.

There are also a lot of upstream devicetrees[2] that expect a MDIO bus to be created, scanned for phys, and the first one found connected to the MAC. This case can be inferred from the platform description by not having a phy-handle && not being fixed-link. This hits case "1" in the current driver's logic, and must be handled in any logic change here since it is a valid legacy dt-binding.

Let's improve the logic to create a MDIO bus if either:

- Devicetree contains a MDIO bus
- !fixed-link && !phy-handle (legacy handling)

This way the case where no MDIO bus should be made is handled, as well as retaining backwards compatibility with the valid cases. Below devicetree snippets can be found that explain some of the cases above more concretely.

Here's[0] a devicetree example where the MAC is both fixed-link and driving a switch on MDIO (case "2" above). This needs a MDIO bus to be created:

    &fec1 {
        phy-mode = "rmii";

        fixed-link {
            speed = <100>;
            full-duplex;
        };

        mdio1: mdio {
            switch0: switch0@0 {
                compatible = "marvell,mv88e6190";
                pinctrl-0 = <&pinctrl_gpio_switch0>;
            };
        };
    };

Here's[1] an example where there is no MDIO bus or fixed-link for the ethernet1 MAC, so no MDIO bus should be created since ethernet0 is the MDIO master for ethernet1's phy:

    &ethernet0 {
        phy-mode = "sgmii";
        phy-handle = <&sgmii_phy0>;

        mdio {
            compatible = "snps,dwmac-mdio";

            sgmii_phy0: phy@8 {
                compatible = "ethernet-phy-id0141.0dd4";
                reg = <0x8>;
                device_type = "ethernet-phy";
            };

            sgmii_phy1: phy@a {
                compatible = "ethernet-phy-id0141.0dd4";
                reg = <0xa>;
                device_type = "ethernet-phy";
            };
        };
    };

    &ethernet1 {
        phy-mode = "sgmii";
        phy-handle = <&sgmii_phy1>;
    };

Finally there's descriptions like this[2] which don't describe the MDIO bus but expect it to be created and the whole address space scanned for a phy since there's no phy-handle or fixed-link described:

    &gmac {
        phy-supply = <&vcc_lan>;
        phy-mode = "rmii";
        snps,reset-gpio = <&gpio3 RK_PB4 GPIO_ACTIVE_HIGH>;
        snps,reset-active-low;
        snps,reset-delays-us = <0 10000 1000000>;
    };

[0] https://elixir.bootlin.com/linux/v6.5-rc5/source/arch/arm/boot/dts/nxp/vf/vf610-zii-ssmb-dtu.dts
[1] https://elixir.bootlin.com/linux/v6.6-rc5/source/arch/arm64/boot/dts/qcom/sa8775p-ride.dts
[2] https://elixir.bootlin.com/linux/v6.6-rc5/source/arch/arm64/boot/dts/rockchip/rk3368-r88.dts#L164

Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
Co-developed-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Signed-off-by: Andrew Halaney <ahalaney@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
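The resulting decision can be sketched as follows (simplified; not the driver's exact code, and OF node reference counting is omitted):

    bool fixed_link = of_phy_is_fixed_link(np);
    bool has_phy_handle = of_property_read_bool(np, "phy-handle");
    bool has_mdio_node = of_get_child_by_name(np, "mdio") != NULL;

    bool create_mdio_bus = has_mdio_node || (!fixed_link && !has_phy_handle);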
2023-12-15  md/raid1: remove unnecessary null checking  (Gou Hao)
If %__GFP_DIRECT_RECLAIM is set then bio_alloc_bioset() will always be able to allocate a bio. See the comment of bio_alloc_bioset(). Signed-off-by: Gou Hao <gouhao@uniontech.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231214151458.28970-1-gouhao@uniontech.com
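The kind of check being dropped, sketched (arguments are illustrative, not raid1's exact call):

    bio = bio_alloc_bioset(NULL, nr_vecs, 0, GFP_NOIO, &bs);
    /* before: if (!bio) goto fail;  -- unnecessary, GFP_NOIO implies __GFP_DIRECT_RECLAIM */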
2023-12-15  gnss: ubx: add support for the reset gpio  (Wolfram Sang)
The Renesas KingFisher board includes a U-Blox Neo-M8 chip. This chip has a reset pin which is also wired on the board. When Linux starts, reset is asserted by the firmware. Deassert the reset pin when probing this driver. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> [ johan: rename gpio descriptor variable ] Signed-off-by: Johan Hovold <johan@kernel.org>
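Probe-time handling presumably looks something like this sketch (the property name and use of the optional getter are assumptions based on the description):

    struct gpio_desc *reset;

    reset = devm_gpiod_get_optional(&serdev->dev, "reset", GPIOD_OUT_LOW);
    if (IS_ERR(reset))
            return PTR_ERR(reset);
    /* GPIOD_OUT_LOW leaves the line at its logically inactive (de-asserted) level */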
2023-12-15  gnss: ubx: use new helper to remove open coded regulator handling  (Wolfram Sang)
v_bckp shall always be enabled as long as the device exists. We now have a regulator helper for that, use it. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Johan Hovold <johan@kernel.org>
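The helper referred to is presumably devm_regulator_get_enable() (an assumption, as is the supply name below); used roughly like this, it keeps v-bckp enabled for the lifetime of the device:

    ret = devm_regulator_get_enable(&serdev->dev, "v-bckp");
    if (ret < 0)
            return ret;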
2023-12-15  platform/chrome: cros_ec_vbc: Fix -Warray-bounds warnings  (Gustavo A. R. Silva)
GCC-13 (and Clang) does not like having a partially allocated object, since it cannot reason about it for bounds checking. Notice that the compiler is legitimately complaining about accessing an object (params, in this case) for which not enough memory was allocated. The object is of size 20 bytes:

    struct ec_params_vbnvcontext {
            uint32_t op;        /*  0  4 */
            uint8_t block[16];  /*  4 16 */
            /* size: 20, cachelines: 1, members: 2 */
            /* last cacheline: 20 bytes */
    };

but only 16 bytes are allocated:

    sizeof(struct ec_response_vbnvcontext) == 16

In this case, as only enough space for the op field is allocated, we can use an object of type uint32_t instead of a whole struct ec_params_vbnvcontext (for which not enough memory is allocated). Fix the following warning seen under GCC 13:

    drivers/platform/chrome/cros_ec_vbc.c: In function ‘vboot_context_read’:
    drivers/platform/chrome/cros_ec_vbc.c:36:15: warning: array subscript ‘struct ec_params_vbnvcontext[1]’ is partly outside array bounds of ‘unsigned char[36]’ [-Warray-bounds=]
       36 |         params->op = EC_VBNV_CONTEXT_OP_READ;
          |               ^~
    In file included from drivers/platform/chrome/cros_ec_vbc.c:12:
    In function ‘kmalloc’, inlined from ‘vboot_context_read’ at drivers/platform/chrome/cros_ec_vbc.c:30:8:
    ./include/linux/slab.h:580:24: note: at offset 20 into object of size 36 allocated by ‘kmalloc_trace’
      580 |         return kmalloc_trace(
          |                ^~~~~~~~~~~~~~
      581 |                         kmalloc_caches[kmalloc_type(flags)][index],
          |                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      582 |                         flags, size);
          |                         ~~~~~~~~~~~~

Link: https://github.com/KSPP/linux/issues/278 Signed-off-by: "Gustavo A. R. Silva" <gustavoars@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/ZCTrutoN+9TiJM8u@work Signed-off-by: Tzung-Bi Shih <tzungbi@kernel.org>
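The shape of the fix, sketched (not the verbatim hunk): allocate room for and address only the 32-bit op on the read path, rather than a partially-sized struct ec_params_vbnvcontext:

    struct cros_ec_command *msg;
    uint32_t *params;

    msg = kmalloc(sizeof(*msg) +
                  max(sizeof(*params), sizeof(struct ec_response_vbnvcontext)),
                  GFP_KERNEL);
    if (!msg)
            return -ENOMEM;

    params = (uint32_t *)msg->data;
    *params = EC_VBNV_CONTEXT_OP_READ;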
2023-12-15  USB: serial: option: add Quectel RM500Q R13 firmware support  (Reinhard Speyerer)
Add support for Quectel RM500Q R13 firmware which uses Prot=40 for the NMEA port:

    T: Bus=02 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 8 Spd=5000 MxCh= 0
    D: Ver= 3.20 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 9 #Cfgs= 1
    P: Vendor=2c7c ProdID=0800 Rev= 4.14
    S: Manufacturer=Quectel
    S: Product=RM500Q-AE
    S: SerialNumber=xxxxxxxx
    C:* #Ifs= 5 Cfg#= 1 Atr=a0 MxPwr=896mA
    I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option
    E: Ad=81(I) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    E: Ad=01(O) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    I:* If#= 1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=40 Driver=option
    E: Ad=83(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
    E: Ad=82(I) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    E: Ad=02(O) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
    E: Ad=85(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
    E: Ad=84(I) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    E: Ad=03(O) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    I:* If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option
    E: Ad=87(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
    E: Ad=86(I) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    E: Ad=04(O) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    I:* If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan
    E: Ad=88(I) Atr=03(Int.) MxPS= 8 Ivl=32ms
    E: Ad=8e(I) Atr=02(Bulk) MxPS=1024 Ivl=0ms
    E: Ad=0f(O) Atr=02(Bulk) MxPS=1024 Ivl=0ms

Signed-off-by: Reinhard Speyerer <rspmn@arcor.de> Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold <johan@kernel.org>
2023-12-15  USB: serial: option: add Foxconn T99W265 with new baseline  (Slark Xiao)
This ID was added based on the latest SDX12 code baseline, and we made some changes with respect to the previous 0489:e0db. Test evidence as below:

    T: Bus=02 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 3 Spd=5000 MxCh= 0
    D: Ver= 3.20 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 2
    P: Vendor=0489 ProdID=e0da Rev=05.04
    S: Manufacturer=Qualcomm
    S: Product=Qualcomm Snapdragon X12
    S: SerialNumber=2bda65fb
    C: #Ifs= 6 Cfg#= 2 Atr=a0 MxPwr=896mA
    I: If#=0x0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=0e Prot=00 Driver=cdc_mbim
    I: If#=0x1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim
    I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option
    I: If#=0x3 Alt= 0 #EPs= 1 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
    I: If#=0x4 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option
    I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=42 Prot=01 Driver=(none)

0&1: MBIM, 2: Modem, 3: GNSS, 4: Diag, 5: ADB

Signed-off-by: Slark Xiao <slark_xiao@163.com> Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold <johan@kernel.org>
2023-12-15  drm/i915/mtl: Fix HDMI/DP PLL clock selection  (Imre Deak)
Select the HDMI specific PLL clock only for HDMI outputs. Fixes: 62618c7f117e ("drm/i915/mtl: C20 PLL programming") Cc: Mika Kahola <mika.kahola@intel.com> Cc: Radhakrishna Sripada <radhakrishna.sripada@intel.com> Reviewed-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20231213220526.1828827-1-imre.deak@intel.com