2024-09-06  pm:cpupower: Add error warning when SWIG is not installed  (John B. Wyatt IV)
Add an error message to better explain to the user when SWIG and python-config are missing from the path. The Makefile was cleaned up and unneeded elements were removed. Suggested-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: John B. Wyatt IV <jwyatt@redhat.com> Signed-off-by: John B. Wyatt IV <sageofredondo@gmail.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2024-09-06  selftests:resctrl: Fix build failure on archs without __cpuid_count()  (Shuah Khan)
When resctrl is built on architectures without __cpuid_count() support, the build fails. resctrl uses __cpuid_count() defined in kselftest.h. Even though the problem is seen while building resctrl on aarch64, this error can be seen on any platform that doesn't support CPUID. CPUID is an x86/x86-64 feature and code paths with CPUID asm commands will fail to build on all other architectures. All other tests that call __cpuid_count() do so from x86/x86_64 code paths when __i386__ or __x86_64__ are defined. resctrl is an exception. Fix the problem by defining __cpuid_count() only when __i386__ or __x86_64__ are defined in kselftest.h and changing resctrl to call __cpuid_count() only when __i386__ or __x86_64__ are defined.
In file included from resctrl.h:24,
                 from cat_test.c:11:
In function ‘arch_supports_noncont_cat’,
    inlined from ‘noncont_cat_run_test’ at cat_test.c:326:6:
../kselftest.h:74:9: error: impossible constraint in ‘asm’
   74 |         __asm__ __volatile__ ("cpuid\n\t" \
      |         ^~~~~~~
cat_test.c:304:17: note: in expansion of macro ‘__cpuid_count’
  304 |                 __cpuid_count(0x10, 1, eax, ebx, ecx, edx);
      |                 ^~~~~~~~~~~~~
../kselftest.h:74:9: error: impossible constraint in ‘asm’
   74 |         __asm__ __volatile__ ("cpuid\n\t" \
      |         ^~~~~~~
cat_test.c:306:17: note: in expansion of macro ‘__cpuid_count’
  306 |                 __cpuid_count(0x10, 2, eax, ebx, ecx, edx);
Fixes: ae638551ab64 ("selftests/resctrl: Add non-contiguous CBMs CAT test") Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Closes: https://lore.kernel.org/lkml/20240809071059.265914-1-usama.anjum@collabora.com/ Reported-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
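A minimal sketch of the guard described above, assuming the macro sits in kselftest.h and the caller is resctrl's cat_test.c; the asm constraints and the feature bit tested are illustrative, not the exact upstream hunks:

  /* kselftest.h (sketch): only define __cpuid_count() where CPUID exists */
  #if defined(__i386__) || defined(__x86_64__)
  #define __cpuid_count(level, count, a, b, c, d)                         \
          __asm__ __volatile__ ("cpuid\n\t"                               \
                                : "=a" (a), "=b" (b), "=c" (c), "=d" (d)  \
                                : "0" (level), "2" (count))
  #endif

  /* cat_test.c (sketch): probe non-contiguous CBM support only on x86 */
  static int arch_supports_noncont_cat(void)
  {
  #if defined(__i386__) || defined(__x86_64__)
          unsigned int eax, ebx, ecx, edx;

          __cpuid_count(0x10, 1, eax, ebx, ecx, edx);
          return !!(ecx & (1U << 3));     /* hypothetical feature bit, for illustration */
  #else
          return 0;                       /* no CPUID on other architectures */
  #endif
  }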
2024-09-06  affs: Replace one-element array with flexible-array member  (Thorsten Blum)
Replace the deprecated one-element array with a modern flexible-array member in the struct affs_root_head. Add a comment that most struct members are not used, but kept as documentation. Link: https://github.com/KSPP/linux/issues/79 Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
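The pattern, sketched as a diff; the member names shown here are abridged/assumed for illustration, only the array declaration changes:

   struct affs_root_head {
           __be32 ptype;
           /* ... other on-disk fields, mostly unused but kept as documentation ... */
           __be32 checksum;
  -        __be32 hashtable[1];    /* deprecated one-element array */
  +        __be32 hashtable[];     /* flexible-array member */
   };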
2024-09-06  affs: Remove unused macros GET_END_PTR, AFFS_GET_HASHENTRY  (Thorsten Blum)
The macros GET_END_PTR() and AFFS_GET_HASHENTRY() are not used anymore and can be removed. Remove them. Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2024-09-06  mtd: powernv: Add check devm_kasprintf() returned value  (Charles Han)
devm_kasprintf() can return a NULL pointer on failure but this returned value is not checked. Fixes: acfe63ec1c59 ("mtd: Convert to using %pOFn instead of device_node.name") Signed-off-by: Charles Han <hanchunchao@inspur.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240828092427.128177-1-hanchunchao@inspur.com
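The shape of the fix, sketched (the surrounding powernv flash probe code is paraphrased, not quoted):

          name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node);
          if (!name)
                  return -ENOMEM; /* devm_kasprintf() returns NULL on allocation failure */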
2024-09-06  mtd: rawnand: mtk: Factorize out the logic cleaning mtk chips  (Miquel Raynal)
There are some un-freed resources in one of the error paths which would benefit from a helper going through all the registered mtk chips one by one and performing all the necessary cleanup. This is precisely what the remove path does, so let's extract the logic into a helper. There is no functional change. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Reviewed-by: Matthias Brugger <matthias.bgg@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826153019.67106-1-miquel.raynal@bootlin.com
2024-09-06  mtd: rawnand: atmel: Add message on DMA usage  (Alexander Dahl)
Like for other atmel drivers (serial, crypto, mmc, …), too. Signed-off-by: Alexander Dahl <ada@thorsis.com> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240828063707.73869-1-ada@thorsis.com
2024-09-06  mtd: rawnand: meson: Fix typo in function name  (Miquel Raynal)
There is a reason why we sometimes write "NAND chip" with an 's'. It usually means several chips can be managed by the same controller. So when initializing a single chip at a time, the wording "chip" must be used; when talking about all the chips managed by the controller, we want to use "chips". Fix the function name to clarify the meson_nfc_nand_chip*s*_cleanup() helper's intent. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826153158.67334-1-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: macronix: Continuous read support  (Miquel Raynal)
Enabling continuous read support implies several changes which must be done atomically in order to keep the code base consistent and bisectable.
1/ Retrieving bitflips differently
Improve the helper retrieving the number of bitflips to support the case where many pages have been read instead of just one. In this case, if there is one page with bitflips, we cannot know the detail and just get the information of the maximum number of bitflips corrected in the most corrupted chunk. Compatible Macronix flashes return: - the ECC status for the last page read (bits 0-3), - the amount of bitflips for the whole read operation (bits 4-7). Hence, when reading two consecutive pages, if there were at most 2 bits corrected in one chunk, we (arbitrarily) return this amount times the number of read pages. It is probably a very pessimistic calculation in most cases, but still less pessimistic than if we multiplied this amount by the number of chunks. Anyway, this is just for statistics; the important data is the maximum amount of bitflips, which leads to wear leveling.
2/ Configuring, enabling and disabling the feature
Create an init function for allocating a vendor structure. Use this vendor structure to cache the internal continuous read state. The state is used to discriminate between the two bitflip retrieval methods. Finally, helpers for enabling and disabling sequential reads are also created.
3/ Fill the chips table
Flag all the chips supporting the feature with the ->set_cont_read() helper.
In order to validate the changes, I modified the mtd-utils test suite with extended versions of nandbiterrs, nanddump and flash_speed in order to support, test and benchmark continuous reads. I also ran all the UBI tests successfully. The nandbiterrs tool allows tracking the ECC efficiency and feedback.
Here is its default output (stripped): Successfully corrected 0 bit errors per subpage Read reported 1 corrected bit errors Successfully corrected 1 bit errors per subpage Read reported 2 corrected bit errors Successfully corrected 2 bit errors per subpage Read reported 3 corrected bit errors Successfully corrected 3 bit errors per subpage Read reported 4 corrected bit errors Successfully corrected 4 bit errors per subpage Read reported 5 corrected bit errors Successfully corrected 5 bit errors per subpage Read reported 6 corrected bit errors Successfully corrected 6 bit errors per subpage Read reported 7 corrected bit errors Successfully corrected 7 bit errors per subpage Read reported 8 corrected bit errors Successfully corrected 8 bit errors per subpage Failed to recover 1 bitflips Read error after 9 bit errors per page The output using the continuous option over two pages (the second page is kept intact): Successfully corrected 0 bit errors per subpage Read reported 2 corrected bit errors Successfully corrected 1 bit errors per subpage Read reported 4 corrected bit errors Successfully corrected 2 bit errors per subpage Read reported 6 corrected bit errors Successfully corrected 3 bit errors per subpage Read reported 8 corrected bit errors Successfully corrected 4 bit errors per subpage Read reported 10 corrected bit errors Successfully corrected 5 bit errors per subpage Read reported 12 corrected bit errors Successfully corrected 6 bit errors per subpage Read reported 14 corrected bit errors Successfully corrected 7 bit errors per subpage Read reported 16 corrected bit errors Successfully corrected 8 bit errors per subpage Failed to recover 1 bitflips Read error after 9 bit errors per page Regarding the throughput improvements, tests have been conducted in 1-1-1 and 1-1-4 modes, reading a full block X pages at a time, X ranging from 1 to 64 (size of a block with the tested device). The percent value on the right is the comparison of the same test conducted without the continuous read feature, ie. reading X pages in one single user request, which, with the continuous read optimization disabled, got naturally split by the core into single-page reads.
* 1-1-1 result: 1 page read speed is 2634 KiB/s 2 page read speed is 2704 KiB/s (+3%) 3 page read speed is 2747 KiB/s (+5%) 4 page read speed is 2804 KiB/s (+7%) 5 page read speed is 2782 KiB/s 6 page read speed is 2826 KiB/s 7 page read speed is 2834 KiB/s 8 page read speed is 2821 KiB/s 9 page read speed is 2846 KiB/s 10 page read speed is 2819 KiB/s 11 page read speed is 2871 KiB/s (+10%) 12 page read speed is 2823 KiB/s 13 page read speed is 2880 KiB/s 14 page read speed is 2842 KiB/s 15 page read speed is 2862 KiB/s 16 page read speed is 2837 KiB/s 32 page read speed is 2879 KiB/s 64 page read speed is 2842 KiB/s * 1-1-4 result: 1 page read speed is 7562 KiB/s 2 page read speed is 8904 KiB/s (+15%) 3 page read speed is 9655 KiB/s (+25%) 4 page read speed is 10118 KiB/s (+30%) 5 page read speed is 10084 KiB/s 6 page read speed is 10300 KiB/s 7 page read speed is 10434 KiB/s (+35%) 8 page read speed is 10406 KiB/s 9 page read speed is 10769 KiB/s (+40%) 10 page read speed is 10666 KiB/s 11 page read speed is 10757 KiB/s 12 page read speed is 10835 KiB/s 13 page read speed is 10976 KiB/s 14 page read speed is 11200 KiB/s 15 page read speed is 11009 KiB/s 16 page read speed is 11082 KiB/s 32 page read speed is 11352 KiB/s (+45%) 64 page read speed is 11403 KiB/s This work has received support and could be achieved thanks to Alvin Zhou <alvinzhou@mxic.com.tw>. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-10-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: macronix: Add a possible bitflip status flag  (Miquel Raynal)
Macronix SPI-NANDs encode the ECC status into two bits. There are three standard situations (no bitflip, bitflips, error), and an additional possible situation which is only triggered when configuring the 0x10 configuration register, which allows knowing, if there have been bitflips, whether the maximum amount of bitflips was above a configurable threshold or not. In all cases, for now, as this configuration register is unset, it means the same as "there are bitflips". This value is maybe standard, maybe not. For now, let's define it in the Macronix driver; we can safely move it to a shared place later if that is relevant. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-9-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: macronix: Extract the bitflip retrieval logic  (Miquel Raynal)
With GET_STATUS commands, SPI-NAND devices can tell the status of the last read operation, in particular if there were: - no bitflips - corrected bitflips - uncorrectable bitflips. The next step is then to read an ECC status register and retrieve the amount of bitflips, when relevant, if possible. The logic used here works well for now, but will no longer apply to continuous reads. In order to prepare the introduction of continuous reads, let's factorize out the code that is specific to single-page reads. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-8-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: macronix: Fix helper name  (Miquel Raynal)
Use "macronix_" instead of "mx35lf1ge4ab_" as common prefix for the ->get_status() callback name. This callback is used by many different families, there is no variation in the implementation so far. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-7-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: Expose spinand_write_reg_op()  (Miquel Raynal)
This helper function will soon be used from a vendor driver, so let's expose it through the spinand.h header. No need for any symbol export, as there is currently no reason for any module to need it. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-6-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: Add continuous read support  (Miquel Raynal)
A regular page read consists of: - Asking one page of content from the NAND array to be loaded in the chip's SRAM, - Waiting for the operation to be done, - Retrieving the data (I/O phase) from the chip's SRAM. When reading several sequential pages, the above operation is repeated over and over. There is however a way to optimize these accesses, by enabling continuous reads. The feature requires the NAND chip to have a second internal SRAM area plus a bit of additional internal logic to trigger another internal transfer between the NAND array and the second SRAM area while the I/O phase is ongoing. Once the first I/O phase is done, the host can continue reading more data, continuously, as the chip will automatically switch to the second SRAM content (which has already been loaded) and in turn trigger the next load into the first SRAM area again. From an instruction perspective, the command op-codes are different, but the same cycles are required. The only difference is that after a continuous read (which is stopped by a CS deassert), the host must observe a delay of tRST. However, because there is no guarantee in Linux regarding the actual state of the CS pin after a transfer (in order to speed-up the next transfer if targeting the same device), it was necessary to manually end the continuous read with a configuration register write operation. Continuous reads have two main drawbacks: * They only work on full pages (column address ignored) * Only the main data area is pulled, out-of-band bytes are not accessible. In other words, the feature can only be useful with on-die ECC engines. Performance-wise, measurements have been performed on a Zynq platform using the Macronix SPI-NAND controller with a Macronix chip (based on the flash_speed tool modified for testing sequential reads): - 1-1-1 mode: performance improved from +3% (2 pages) up to +10% after a dozen pages. - 1-1-4 mode: performance improved from +15% (2 pages) up to +40% after a dozen pages. This series is based on a previous work from Macronix engineer Jaime Liao. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-5-miquel.raynal@bootlin.com
2024-09-06  mtd: spi-nand: Isolate the MTD read logic in a helper  (Miquel Raynal)
There is currently only a single path for performing page reads as requested by the MTD layer. Soon there will be two: - a "regular" page read - a continuous page read. Let's extract the page read logic into a dedicated helper, so the introduction of continuous page reads will be as easy as checking whether continuous reads shall/can be used and calling one helper or the other. There is no behavioral change intended. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-4-miquel.raynal@bootlin.com
2024-09-06  mtd: nand: Introduce a block iterator  (Miquel Raynal)
In order to be able to iterate easily across eraseblocks rather than pages, let's introduce a block iterator inspired by the page iterator. The main usage of this iterator will be for continuous/sequential reads, where it is interesting to use a single request rather than splitting the requests into smaller chunks (ie. pages) that can hardly be optimized. So a "continuous" boolean gets added for this purpose. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-3-miquel.raynal@bootlin.com
2024-09-06  mtd: nand: Rename the NAND IO iteration helper  (Miquel Raynal)
Soon a helper for iterating over blocks will be needed (for continuous read purposes). In order to clarify the intent of this helper, let's rename it with the "page" wording inside. While at it, improve the doc and fix a typo. Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Link: https://lore.kernel.org/linux-mtd/20240826101412.20644-2-miquel.raynal@bootlin.com
2024-09-06  mtd: rawnand: sunxi: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-11-ruanjinjie@huawei.com
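The before/after pattern shared by this and the following for_each_child_of_node_scoped() conversions, sketched with a hypothetical per-child init helper:

          /* Before: every early exit has to drop the child reference */
          for_each_child_of_node(np, child) {
                  ret = init_one_chip(pdev, child);       /* hypothetical helper */
                  if (ret) {
                          of_node_put(child);
                          return ret;
                  }
          }

          /* After: the scoped variant declares 'child' itself and puts the node
           * automatically on every exit path, so early returns need no manual cleanup.
           */
          for_each_child_of_node_scoped(np, child) {
                  ret = init_one_chip(pdev, child);
                  if (ret)
                          return ret;
          }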
2024-09-06  mtd: rawnand: stm32_fmc2: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-10-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: renesas: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-9-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: mtk: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-8-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: meson: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-7-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: rockchip: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-6-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: marvell: drm/rockchip: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-5-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: pl353: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop by using for_each_child_of_node_scoped(). Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-4-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: cadence: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop by using for_each_child_of_node_scoped(). Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-3-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: arasan: Use for_each_child_of_node_scoped()  (Jinjie Ruan)
Avoids the need for manual cleanup of_node_put() in early exits from the loop. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826094328.2991664-2-ruanjinjie@huawei.com
2024-09-06  mtd: rawnand: denali: Use the devm_clk_get_enabled() helper function  (Jinjie Ruan)
The devm_clk_get_enabled() helper: - calls devm_clk_get() - calls clk_prepare_enable() and registers what is needed in order to call clk_disable_unprepare() when needed, as a managed resource. This simplifies the code. Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826080408.2522978-1-ruanjinjie@huawei.com
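What these devm_clk_get_enabled() conversions boil down to, sketched:

          /* Before: get, enable, and remember to clk_disable_unprepare() in every
           * error/remove path.
           */
          clk = devm_clk_get(dev, NULL);
          if (IS_ERR(clk))
                  return PTR_ERR(clk);
          ret = clk_prepare_enable(clk);
          if (ret)
                  return ret;

          /* After: one call; disable/unprepare is registered as a managed resource */
          clk = devm_clk_get_enabled(dev, NULL);
          if (IS_ERR(clk))
                  return PTR_ERR(clk);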
2024-09-06  mtd: rawnand: denali: Fix missing pci_release_regions in probe and remove  (Chen Ridong)
The pci_release_regions() call was missing in the error case; add it. Signed-off-by: Chen Ridong <chenridong@huawei.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20240826024339.476921-1-chenridong@huawei.com
2024-09-06  ASoC: codecs: Use devm_clk_get_enabled() helpers  (ying zuxin)
The devm_clk_get_enabled() helpers: - call devm_clk_get() - call clk_prepare_enable() and register what is needed in order to call clk_disable_unprepare() when needed, as a managed resource. This simplifies the code and avoids the calls to clk_disable_unprepare(). Signed-off-by: ying zuxin <11154159@vivo.com> Link: https://patch.msgid.link/20240906084841.19248-1-yingzuxin@vivo.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-09-06  zram: Shrink zram_table_entry::flags.  (Sebastian Andrzej Siewior)
The zram_table_entry::flags member is of type long and uses 8 bytes on a 64bit architecture. With a PAGE_SIZE of 256KiB we have PAGE_SHIFT of 18 which in turn leads to __NR_ZRAM_PAGEFLAGS = 27. This still fits in an ordinary integer. By reducing the size of `flags' to four bytes, the size of the struct goes back to 16 bytes. The padding between the lock and ac_time (if enabled) is also gone. Make zram_table_entry::flags an unsigned int and update the build test to reflect the change. Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20240906141520.730009-4-bigeasy@linutronix.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-09-06  zram: Remove ZRAM_LOCK  (Sebastian Andrzej Siewior)
The ZRAM_LOCK bit was used for locking; since the addition of the spinlock_t, the bit is still set and cleared but there is no reader of it. Remove the ZRAM_LOCK bit. Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20240906141520.730009-3-bigeasy@linutronix.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-09-06  zram: Replace bit spinlocks with a spinlock_t.  (Mike Galbraith)
The bit spinlock disables preemption. The spinlock_t lock becomes a sleeping lock on PREEMPT_RT and it cannot be acquired in this context. In this locked section, zs_free() acquires a zs_pool::lock, and there is access to zram::wb_limit_lock. Add a spinlock_t for locking. Keep setting/clearing the ZRAM_LOCK bit after the lock has been acquired/dropped. The size of struct zram_table_entry increases by 4 bytes due to the lock, plus an additional 4 bytes of padding with CONFIG_ZRAM_TRACK_ENTRY_ACTIME enabled. Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20240906141520.730009-2-bigeasy@linutronix.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
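Roughly what the change looks like, as a sketch (the real struct and slot-lock helpers live in drivers/block/zram/; member layout trimmed for illustration):

  struct zram_table_entry {
          unsigned long handle;
          unsigned long flags;
          spinlock_t lock;                /* replaces the bit spinlock taken on 'flags' */
  #ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME
          ktime_t ac_time;
  #endif
  };

  static void zram_slot_lock(struct zram *zram, u32 index)
  {
          spin_lock(&zram->table[index].lock);
          /* this patch still sets ZRAM_LOCK here; the bit is removed later in the series */
  }

  static void zram_slot_unlock(struct zram *zram, u32 index)
  {
          /* ...and clears ZRAM_LOCK before releasing the lock */
          spin_unlock(&zram->table[index].lock);
  }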
2024-09-06  nbd: correct the maximum value for discard sectors  (Wouter Verhelst)
The version of the NBD protocol implemented by the kernel driver currently has a 32 bit field for length values. As the NBD protocol uses bytes as a unit of length, length values larger than 2^32 bytes cannot be expressed. Update the max_hw_discard_sectors field to match that. Signed-off-by: Wouter Verhelst <w@uter.be> Fixes: 268283244c0f ("nbd: use the atomic queue limits API in nbd_set_size") Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Cc: Eric Blake <eblake@redhat.Com> Link: https://lore.kernel.org/r/20240812133032.115134-8-w@uter.be Signed-off-by: Jens Axboe <axboe@kernel.dk>
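The arithmetic behind the new limit, sketched: the NBD length field holds 32 bits of bytes, so the largest expressible discard in 512-byte sectors is roughly:

          lim.max_hw_discard_sectors = UINT_MAX >> SECTOR_SHIFT;  /* bytes expressible in 32 bits, in sectors */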
2024-09-06  nbd: nbd_bg_flags_show: add NBD_FLAG_ROTATIONAL  (Wouter Verhelst)
Also handle NBD_FLAG_ROTATIONAL in our debug helper function. Signed-off-by: Wouter Verhelst <w@uter.be> Cc: Eric Blake <eblake@redhat.Com> Link: https://lore.kernel.org/r/20240812133032.115134-6-w@uter.be Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-09-06  nbd: implement the WRITE_ZEROES command  (Wouter Verhelst)
The NBD protocol defines a message for zeroing out a region of an export. Add support to the kernel driver for that message. Signed-off-by: Wouter Verhelst <w@uter.be> Cc: Eric Blake <eblake@redhat.com> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Link: https://lore.kernel.org/r/20240812133032.115134-3-w@uter.be Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-09-06  genirq/cpuhotplug: Use cpumask_intersects()  (Costa Shulyupin)
Replace `cpumask_any_and(a, b) >= nr_cpu_ids` with the more readable `!cpumask_intersects(a, b)`. [ tglx: Massaged change log ] Signed-off-by: Costa Shulyupin <costa.shul@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/all/20240904134823.777623-2-costa.shul@redhat.com
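The substitution itself, sketched (the mask names are illustrative; the genirq code operates on the interrupt's affinity mask):

          /* Before: relies on knowing that nr_cpu_ids means "no CPU found" */
          if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids)
                  return false;

          /* After: states the intent directly */
          if (!cpumask_intersects(affinity, cpu_online_mask))
                  return false;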
2024-09-06  orangefs: Constify struct kobj_type  (Huang Xiaojia)
'struct kobj_type' is not modified. It is only used in kobject_init() which takes a 'const struct kobj_type *ktype' parameter. Constifying this structure moves some data to a read-only section, which increases overall security. On x86_64, compiled with defconfig:
Before:
======
   text    data     bss     dec     hex  filename
   7036    2136      56    9228    240c  fs/orangefs/orangefs-sysfs.o
After:
======
   text    data     bss     dec     hex  filename
   7484    1880      56    9420    24cc  fs/orangefs/orangefs-sysfs.o
Signed-off-by: Huang Xiaojia <huangxiaojia2@huawei.com> Signed-off-by: Mike Marshall <hubcap@omnibond.com>
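The constification pattern, sketched; the ops, group, and object names below are placeholders, not the actual orangefs-sysfs identifiers:

  static const struct kobj_type orangefs_ktype = {        /* 'const' lets the data go read-only */
          .sysfs_ops      = &orangefs_sysfs_ops,          /* placeholder name */
          .default_groups = orangefs_default_groups,      /* placeholder name */
  };

  /* kobject_init() already takes a 'const struct kobj_type *', so callers need no change */
  kobject_init(&orangefs_obj->kobj, &orangefs_ktype);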
2024-09-06  ASoC: makes rtd->initialized bit field  (Kuninori Morimoto)
rtd->initialized is used to know whether soc_init_pcm_runtime() finished correctly, and is used to decide whether snd_soc_link_exit() should be called. We don't need to have it as a bool; let's make it a bit-field, same as the other flags. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Cc: Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com> Cc: Cezary Rojewski <cezary.rojewski@intel.com> Link: https://patch.msgid.link/87o752k7gq.wl-kuninori.morimoto.gx@renesas.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-09-06  MAINTAINERS: Move the BFQ io scheduler to Odd Fixes state  (Yu Kuai)
BFQ has been lacking active maintenance for approximately two years, and it was recently transitioned to the Orphan state. However, there are still many users, so I have decided to step forward and assume the role of maintainer to ensure continued support and development. While I may not be the one with the most extensive knowledge of BFQ's internals, I have been actively involved in its development since 2021. Moreover, our team continues to rigorously test BFQ in downstream kernels, ensuring its stability and performance. Despite my confidence in maintaining BFQ, I believe it is prudent to classify its state as "Odd Fixes" to accurately reflect my relatively new position as the maintainer. By assuming this responsibility, I am committed to providing the necessary support and addressing any issues that may arise with BFQ. As time progresses, we will reassess the situation and determine the appropriate state. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Link: https://lore.kernel.org/r/20240906102153.612997-1-yukuai1@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-09-06  iommu/arm-smmu-v3: Use the new rb tree helpers  (Jason Gunthorpe)
Since v5.12 the rbtree has gained some simplifying helpers aimed at making rb tree users write less convoluted boilerplate code. Instead the caller provides a single comparison function and the helpers generate the prior open-coded stuff. Update smmu->streams to use rb_find_add() and rb_find(). Tested-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/1-v3-9fef8cdc2ff6+150d1-smmuv3_tidy_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
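The shape of an rb_find_add()/rb_find() conversion, sketched for a tree keyed by a 32-bit stream ID; the struct, field, and variable names here are illustrative, not the smmu-v3 ones:

  struct stream {
          u32 id;
          struct rb_node node;
  };

  static int stream_cmp(struct rb_node *lhs, const struct rb_node *rhs)
  {
          struct stream *l = rb_entry(lhs, struct stream, node);
          struct stream *r = rb_entry(rhs, struct stream, node);

          return l->id < r->id ? -1 : (l->id > r->id ? 1 : 0);
  }

  static int stream_key_cmp(const void *key, const struct rb_node *n)
  {
          u32 id = *(const u32 *)key;
          struct stream *s = rb_entry(n, struct stream, node);

          return id < s->id ? -1 : (id > s->id ? 1 : 0);
  }

  /* Insert: returns the colliding node if the ID already exists, NULL otherwise
   * ('new' and 'root' are assumed to exist in the surrounding code).
   */
  struct rb_node *exist = rb_find_add(&new->node, &root, stream_cmp);

  /* Lookup by key, no open-coded descend loop */
  struct rb_node *found = rb_find(&wanted_id, &root, stream_key_cmp);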
2024-09-06  platform/x86:intel/pmc: Fix comment for the pmc_core_acpi_pm_timer_suspend_resume function  (Marek Maslanka)
Change incorrect kernel-doc comment to regular comment for the pmc_core_acpi_pm_timer_suspend_resume function. Signed-off-by: Marek Maslanka <mmaslanka@google.com> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202409031410.a9beukFc-lkp@intel.com/ Link: https://lore.kernel.org/r/20240904094753.1615549-1-mmaslanka@google.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-09-06  clocksource/drivers/jcore: Use request_percpu_irq()  (Uros Bizjak)
Use request_percpu_irq() instead of request_irq() to solve the following sparse warning: jcore-pit.c:173:40: warning: incorrect type in argument 5 (different address spaces) jcore-pit.c:173:40: expected void *dev jcore-pit.c:173:40: got struct jcore_pit [noderef] __percpu *static [assigned] [toplevel] jcore_pit_percpu Compile tested only. Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Rich Felker <dalias@libc.org> Link: https://lore.kernel.org/r/20240902104810.21080-1-ubizjak@gmail.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
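The gist of the conversion, sketched; the identifier names approximate the jcore-pit driver (the cookie name is taken from the sparse warning above) and the flags on the old call are illustrative:

          /* Before: a void __percpu * cookie passed where request_irq() expects plain void *dev */
          err = request_irq(pit_irq, jcore_timer_interrupt, IRQF_TIMER | IRQF_PERCPU,
                            "jcore_pit", jcore_pit_percpu);

          /* After: request_percpu_irq() takes the per-CPU cookie with the right annotation */
          err = request_percpu_irq(pit_irq, jcore_timer_interrupt,
                                   "jcore_pit", jcore_pit_percpu);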
2024-09-06  clocksource/drivers/cadence-ttc: Add missing clk_disable_unprepare in ttc_setup_clockevent  (Gaosheng Cui)
Add the missing clk_disable_unprepare() before return in ttc_setup_clockevent(). Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Reviewed-by: Michal Simek <michal.simek@amd.com> Link: https://lore.kernel.org/r/20240803064253.331946-3-cuigaosheng1@huawei.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-09-06  clocksource/drivers/asm9260: Add missing clk_disable_unprepare in asm9260_timer_init  (Gaosheng Cui)
Add the missing clk_disable_unprepare() before return in asm9260_timer_init(). Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Link: https://lore.kernel.org/r/20240803064253.331946-2-cuigaosheng1@huawei.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
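An illustrative shape of this kind of fix (both this and the cadence-ttc change above add the disable on an error exit); the surrounding code is a sketch, not either driver's exact hunk:

          clk = of_clk_get(np, 0);
          if (IS_ERR(clk))
                  return PTR_ERR(clk);

          ret = clk_prepare_enable(clk);
          if (ret)
                  return ret;

          rate = clk_get_rate(clk);
          if (!rate) {
                  clk_disable_unprepare(clk);     /* the previously missing cleanup */
                  return -EINVAL;
          }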
2024-09-06  clocksource/drivers/qcom: Add missing iounmap() on errors in msm_dt_timer_init()  (Ankit Agrawal)
Add the missing iounmap() when the clock frequency fails to be read by the of_property_read_u32() call, or if the call to msm_timer_init() fails. Fixes: 6e3321631ac2 ("ARM: msm: Add DT support to msm_timer") Signed-off-by: Ankit Agrawal <agrawal.ag.ankit@gmail.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org> Link: https://lore.kernel.org/r/20240713095713.GA430091@bnew-VirtualBox Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-09-06  clocksource/drivers/ingenic: Use devm_clk_get_enabled() helpers  (Huan Yang)
The devm_clk_get_enabled() helpers: - call devm_clk_get() - call clk_prepare_enable() and register what is needed in order to call clk_disable_unprepare() when needed, as a managed resource. This simplifies the code and avoids the calls to clk_disable_unprepare(). Signed-off-by: Huan Yang <link@vivo.com> Link: https://lore.kernel.org/r/20240820094603.103598-1-link@vivo.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-09-06  platform/x86:intel/pmc: Enable the ACPI PM Timer to be turned off when suspended  (Marek Maslanka)
Allow the ACPI PM timer to be disabled on suspend and enabled on resume. A disabled timer helps optimise power consumption when the system is suspended. On resume the timer is only reactivated if it was activated prior to suspend, so unless the ACPI PM timer is enabled in the BIOS, this won't change anything. The ACPI PM timer is used by Intel's iTCO/wdat_wdt watchdog to drive the watchdog, so it doesn't need to run during suspend. Signed-off-by: Marek Maslanka <mmaslanka@google.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20240812184208.1080710-1-mmaslanka@google.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-09-06  clocksource: acpi_pm: Add external callback for suspend/resume  (Marek Maslanka)
Provides the capability to register an external callback for the ACPI PM timer, which is called during the suspend and resume processes. Signed-off-by: Marek Maslanka <mmaslanka@google.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20240812184150.1079924-1-mmaslanka@google.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-09-06  clocksource/drivers/arm_arch_timer: Using for_each_available_child_of_node_scoped()  (Zhang Zekun)
for_each_available_child_of_node_scoped() can put the device_node automatically, so use it to make the code logic simpler and to remove the device_node cleanup code. Signed-off-by: Zhang Zekun <zhangzekun11@huawei.com> Link: https://lore.kernel.org/r/20240807074655.52157-1-zhangzekun11@huawei.com Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>