path: root/include/linux
Age  Commit message  Author
2024-07-10  f2fs: clean up addrs_per_{inode,block}()  (Chao Yu)
Introduce a new helper, addrs_per_page(), to wrap the code common to addrs_per_inode() and addrs_per_block(), as a cleanup. Signed-off-by: Chao Yu <chao@kernel.org> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
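A minimal sketch of what such a wrapper could look like; only the addrs_per_page() name comes from the commit, the body and the f2fs macros/inlines referenced here are taken on assumption from existing f2fs code:

	static inline unsigned int addrs_per_page(struct inode *inode,
						  bool is_inode)
	{
		/* Common logic: inodes reserve room for inline xattrs,
		 * plain blocks use the full per-block address count. */
		unsigned int addrs = is_inode ?
			CUR_ADDRS_PER_INODE(inode) - get_inline_xattr_addrs(inode) :
			DEF_ADDRS_PER_BLOCK;

		/* Compressed files round down to a cluster-size multiple. */
		if (f2fs_compressed_file(inode))
			return ALIGN_DOWN(addrs, F2FS_I(inode)->i_cluster_size);
		return addrs;
	}

	static inline unsigned int addrs_per_inode(struct inode *inode)
	{
		return addrs_per_page(inode, true);
	}

	static inline unsigned int addrs_per_block(struct inode *inode)
	{
		return addrs_per_page(inode, false);
	}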
2024-07-10  regmap: Implement regmap_multi_reg_read()  (Mark Brown)
Merge series from Guenter Roeck <linux@roeck-us.net>: regmap_multi_reg_read() is similar to regmap_bulk_read() but reads from an array of non-sequential registers. It is helpful when multiple non-sequential registers need to be read in a single operation that would otherwise have to be mutex-protected. The name of the new function was chosen to match the existing function regmap_multi_reg_write().
2024-07-10  Merge tag 'mm-hotfixes-stable-2024-07-10-13-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds)
Pull misc fixes from Andrew Morton:
 "21 hotfixes, 15 of which are cc:stable. No identifiable theme here - all are singleton patches, 19 are for MM"

* tag 'mm-hotfixes-stable-2024-07-10-13-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (21 commits)
  mm/hugetlb: fix kernel NULL pointer dereference when migrating hugetlb folio
  mm/hugetlb: fix potential race in __update_and_free_hugetlb_folio()
  filemap: replace pte_offset_map() with pte_offset_map_nolock()
  arch/xtensa: always_inline get_current() and current_thread_info()
  sched.h: always_inline alloc_tag_{save|restore} to fix modpost warnings
  MAINTAINERS: mailmap: update Lorenzo Stoakes's email address
  mm: fix crashes from deferred split racing folio migration
  lib/build_OID_registry: avoid non-destructive substitution for Perl < 5.13.2 compat
  mm: gup: stop abusing try_grab_folio
  nilfs2: fix kernel bug on rename operation of broken directory
  mm/hugetlb_vmemmap: fix race with speculative PFN walkers
  cachestat: do not flush stats in recency check
  mm/shmem: disable PMD-sized page cache if needed
  mm/filemap: skip to create PMD-sized page cache if needed
  mm/readahead: limit page cache size in page_cache_ra_order()
  mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray
  mm/damon/core: merge regions aggressively when max_nr_regions is unmet
  Fix userfaultfd_api to return EINVAL as expected
  mm: vmalloc: check if a hash-index is in cpu_possible_mask
  mm: prevent derefencing NULL ptr in pfn_section_valid()
  ...
2024-07-10  riscv: Optimize crc32 with Zbc extension  (Xiao Wang)
As suggested by the B-ext spec, the Zbc (carry-less multiplication) instructions can be used to accelerate CRC calculations. Currently, crc32 is the most widely used CRC function inside the kernel, so this patch focuses on optimizing just the crc32 APIs. Compared with the current table-lookup based optimization, the Zbc based optimization can also achieve a large stride in the CRC calculation loop; meanwhile, it avoids the memory-access latency of the table-lookup based implementation and reduces the memory footprint. If the Zbc feature is not supported in a runtime environment, the table-lookup based implementation serves as a fallback via the alternatives mechanism. By inspecting the vmlinux built by gcc v12.2.0 with the default optimization level (-O2), we can see the following instruction-count change for each 8-byte stride in the CRC32 loop: rv64: crc32_be (54->31), crc32_le (54->13), __crc32c_le (54->13); rv32: crc32_be (50->32), crc32_le (50->16), __crc32c_le (50->16). The compile-target CPU is little-endian, so extra byte-swapping effort is needed for the crc32_be API; thus, its instruction-count change is not as significant as in the *_le cases. This patch was tested on a QEMU VM with the kernel CRC32 selftest for both rv64 and rv32. Running the CRC32 selftest on real hardware (SpacemiT K1) with the Zbc extension shows 65% and 125% performance improvements on crc32_test() and crc32c_test(), respectively. Signed-off-by: Xiao Wang <xiao.w.wang@intel.com> Reviewed-by: Charlie Jenkins <charlie@rivosinc.com> Link: https://lore.kernel.org/r/20240621054707.1847548-1-xiao.w.wang@intel.com Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-07-10  mm: zswap: fix zswap_never_enabled() for CONFIG_ZSWAP==N  (Barry Song)
If CONFIG_ZSWAP is set to N, zswap cannot be enabled, and zswap_never_enabled() should return true. The only effect of this issue is that with Barry's latest large folio swapin patches for zram ("mm: support mTHP swap-in for zRAM-like swapfile"), we will always fall back to order-0 swapin, even mistakenly when !CONFIG_ZSWAP. Basically this bug makes Barry's in-progress patches not work at all. The API was created to inform the mm core that zswap has never been enabled, allowing the mm core to perform mTHP swap-in. This is a transitional solution until zswap supports mTHP. If zswap has been enabled, performing mTHP swap-in will result in corrupted data. You may find the answer in the mTHP swap-in series: https://lore.kernel.org/linux-mm/CAJD7tkZ4FQr6HZpduOdvmqgg_-whuZYE-Bz5O2t6yzw6Yg+v1A@mail.gmail.com/ Link: https://lkml.kernel.org/r/20240629232231.42394-1-21cnbao@gmail.com Fixes: 0300e17d67c3 ("mm: zswap: add zswap_never_enabled()") Signed-off-by: Barry Song <v-songbaohua@oppo.com> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev> Acked-by: Yosry Ahmed <yosryahmed@google.com> Acked-by: Chris Li <chrisl@kernel.org> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Nhat Pham <nphamcs@gmail.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
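The fix boils down to the !CONFIG_ZSWAP stub returning true instead of false; a sketch of the corrected stub:

	#else /* !CONFIG_ZSWAP */

	static inline bool zswap_never_enabled(void)
	{
		/* zswap is compiled out, so it can never have been enabled */
		return true;
	}

	#endif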
2024-07-10  mm: remove CONFIG_MEMCG_KMEM  (Johannes Weiner)
CONFIG_MEMCG_KMEM used to be a user-visible option for whether slab tracking is enabled. It has been default-enabled and equivalent to CONFIG_MEMCG for almost a decade. We've only grown more kernel memory accounting sites since, and there is no imaginable cgroup usecase going forward that wants to track user pages but not the multitude of user-drivable kernel allocations. Link: https://lkml.kernel.org/r/20240701153148.452230-1-hannes@cmpxchg.org Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Roman Gushchin <roman.gushchin@linux.dev> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: David Hildenbrand <david@redhat.com> Cc: Muchun Song <muchun.song@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-10  mm: memcg: add cache line padding to mem_cgroup_per_node  (Roman Gushchin)
The memcg v1-specific fields serve as a buffer between the read-mostly and update-often parts of the mem_cgroup_per_node structure. If CONFIG_MEMCG_V1 is not set and these fields are not present, explicit cacheline padding is needed. Link: https://lkml.kernel.org/r/20240701185932.704807-2-roman.gushchin@linux.dev Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> Suggested-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
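A sketch of the resulting layout, assuming the kernel's CACHELINE_PADDING() helper is used and with field names abbreviated:

	struct mem_cgroup_per_node {
		/* ... read-mostly fields ... */
	#ifdef CONFIG_MEMCG_V1
		/* v1-only fields; they double as a buffer between the halves */
	#else
		CACHELINE_PADDING(_pad1_);	/* keep the halves on separate cache lines */
	#endif
		/* ... update-often fields (counters, stats) ... */
	};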
2024-07-10  mm: memcg: drop obsolete cache line padding in struct mem_cgroup  (Roman Gushchin)
After the grouping of the cgroup v1-related fields and the corresponding reorganization of struct mem_cgroup, the existing cache line padding doesn't make much sense anymore. Drop it for now and put padding back in new places, if necessary. Link: https://lkml.kernel.org/r/20240701185932.704807-1-roman.gushchin@linux.dev Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev> Suggested-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Shakeel Butt <shakeel.butt@linux.dev> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Muchun Song <muchun.song@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-10  Merge tag 'bcachefs-2024-07-10' of https://evilpiepirate.org/git/bcachefs  (Linus Torvalds)
Pull bcachefs fixes from Kent Overstreet:

 - Switch some asserts to WARN()
 - Fix a few "transaction not locked" asserts in the data read retry paths and backpointers gc
 - Fix a race that would cause the journal to get stuck on a flush commit
 - Add missing fsck checks for the fragmentation LRU
 - The usual assorted syzbot fixes

* tag 'bcachefs-2024-07-10' of https://evilpiepirate.org/git/bcachefs: (22 commits)
  bcachefs: Add missing bch2_trans_begin()
  bcachefs: Fix missing error check in journal_entry_btree_keys_validate()
  bcachefs: Warn on attempting a move with no replicas
  bcachefs: bch2_data_update_to_text()
  bcachefs: Log mount failure error code
  bcachefs: Fix undefined behaviour in eytzinger1_first()
  bcachefs: Mark bch_inode_info as SLAB_ACCOUNT
  bcachefs: Fix bch2_inode_insert() race path for tmpfiles
  closures: fix closure_sync + closure debugging
  bcachefs: Fix journal getting stuck on a flush commit
  bcachefs: io clock: run timer fns under clock lock
  bcachefs: Repair fragmentation_lru in alloc_write_key()
  bcachefs: add check for missing fragmentation in check_alloc_to_lru_ref()
  bcachefs: bch2_btree_write_buffer_maybe_flush()
  bcachefs: Add missing printbuf_tabstops_reset() calls
  bcachefs: Fix loop restart in bch2_btree_transactions_read()
  bcachefs: Fix bch2_read_retry_nodecode()
  bcachefs: Don't use the new_fs() bucket alloc path on an initialized fs
  bcachefs: Fix shift greater than integer size
  bcachefs: Change bch2_fs_journal_stop() BUG_ON() to warning
  ...
2024-07-10  regmap: Implement regmap_multi_reg_read()  (Guenter Roeck)
regmap_multi_reg_read() is similar to regmap_bulk_read() but reads from an array of non-sequential registers. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20240710015622.1960522-2-linux@roeck-us.net Signed-off-by: Mark Brown <broonie@kernel.org>
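A usage sketch, assuming the new function's signature parallels regmap_bulk_read() but takes a register array; the register addresses below are hypothetical:

	static int read_scattered_status(struct regmap *map)
	{
		unsigned int regs[] = { 0x03, 0x10, 0x2a };	/* non-sequential */
		u8 vals[ARRAY_SIZE(regs)];			/* assuming 8-bit registers */
		int ret;

		/* One call, one lock acquisition, three register reads. */
		ret = regmap_multi_reg_read(map, regs, vals, ARRAY_SIZE(regs));
		if (ret)
			return ret;

		pr_info("status: %#x %#x %#x\n", vals[0], vals[1], vals[2]);
		return 0;
	}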
2024-07-10  firmware: cs_dsp: Rename fw_ver to wmfw_ver  (Richard Fitzgerald)
Rename the confusingly named struct member fw_ver to wmfw_ver. It contains the wmfw format version of the loaded wmfw file. This commit also contains an update to wm_adsp for the new name. Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com> Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com> Link: https://lore.kernel.org/r/20240710103640.78197-5-rf@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-07-10  firmware: cs_dsp: Make wmfw and bin filename arguments const char *  (Richard Fitzgerald)
The wmfw_filename and bin_filename strings passed into cs_dsp_power_up() and cs_dsp_adsp1_power_up() should be const char *. Signed-off-by: Richard Fitzgerald <rf@opensource.cirrus.com> Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com> Link: https://lore.kernel.org/r/20240710103640.78197-3-rf@opensource.cirrus.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-07-10  cache: add __cacheline_group_{begin, end}_aligned() (+ couple more)  (Alexander Lobakin)
__cacheline_group_begin(), unfortunately, doesn't align the group in any way. If alignment is wanted, you need to write something like __cacheline_group_begin(grp) __aligned(ALIGN), which is neither convenient nor compact. Add the _aligned() counterparts to align the groups automatically to either the specified alignment (optional) or SMP_CACHE_BYTES. Note that the actual struct layout will then be (on x86-64 with a 64-byte cacheline):

	struct x {
		u32 y;				// offset 0, size 4, padding 56
		__cacheline_group_begin__grp;	// offset 64, size 0
		u32 z;				// offset 64, size 4, padding 4
		__cacheline_group_end__grp;	// offset 72, size 0
		__cacheline_group_pad__grp;	// offset 72, size 0, padding 56
		u32 w;				// offset 128
	};

The end marker is aligned to long, so that you can assert the struct size more strictly, but the offset of the next field in the structure will be aligned to the group alignment, so that the next field won't fall into a group it's not intended to be in. Add the __LARGEST_ALIGN definition and the LARGEST_ALIGN() macro. __LARGEST_ALIGN is the value to which the compilers align fields when __aligned_largest is specified; sometimes this value is needed outside of variable definitions. LARGEST_ALIGN() simply aligns a value up to __LARGEST_ALIGN. Also add SMP_CACHE_ALIGN(), similar to L1_CACHE_ALIGN() but using SMP_CACHE_BYTES instead of L1_CACHE_BYTES, as the former also accounts for L2, which is needed in some cases. Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
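A usage sketch with a made-up structure (macro names per the commit; the optional alignment argument is omitted, so SMP_CACHE_BYTES applies):

	struct ring_priv {
		u32 flags;

		/* Hot fields land in their own SMP_CACHE_BYTES-aligned group. */
		__cacheline_group_begin_aligned(hot);
		u32 head;
		u32 tail;
		__cacheline_group_end_aligned(hot);

		/* 'stats' starts on the next cacheline, outside the group. */
		u32 stats;
	};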
2024-07-10  Merge tag 'ib-mfd-counter-v5.11' of https://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd into HEAD  (Uwe Kleine-König)
Immutable branch between MFD and Counter due for the v5.11 merge window
2024-07-10  pwm: Drop pwm_apply_state()  (Uwe Kleine-König)
This function is not supposed to be used any more since commit c748a6d77c06 ("pwm: Rename pwm_apply_state() to pwm_apply_might_sleep()") that is included in v6.8-rc1. Two kernel releases should be enough for everyone to adapt, so drop the old function that was introduced as a compatibility stub for the transition. Signed-off-by: Uwe Kleine-König <ukleinek@kernel.org>
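Migrating a remaining caller is a rename with unchanged semantics for sleepable context; a sketch:

	static int demo_pwm_apply(struct pwm_device *pwm)
	{
		struct pwm_state state;

		pwm_init_state(pwm, &state);
		state.duty_cycle = state.period / 2;
		state.enabled = true;

		/* before: return pwm_apply_state(pwm, &state); */
		return pwm_apply_might_sleep(pwm, &state);
	}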
2024-07-10  pwm: Remove wrong implementation details from pwm_ops's documentation  (Uwe Kleine-König)
When .get_state() is called is an implementation detail that implementors and users shouldn't care about or rely on. Additionally the statement is wrong, because with PWM_DEBUG enabled .get_state() is called more often. Just drop the wrong statement. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Link: https://lore.kernel.org/r/611ba758d7e9fb2695e96b23cb7ceeefb6ba8513.1717756902.git.u.kleine-koenig@baylibre.com Signed-off-by: Uwe Kleine-König <ukleinek@kernel.org>
2024-07-10  pwm: Make pwm_request_from_chip() private to the core  (Uwe Kleine-König)
The last user of this function outside of core.c is gone, so it can be made static. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Reviewed-by: Tzung-Bi Shih <tzungbi@kernel.org> Link: https://lore.kernel.org/r/20240607084416.897777-8-u.kleine-koenig@baylibre.com Signed-off-by: Uwe Kleine-König <ukleinek@kernel.org>
2024-07-10  pwm: Make use of a symbol namespace for the core  (Uwe Kleine-König)
Define all pwm core's symbols in the namespace "PWM". The necessary module import statement is just added to the main header, this way every file that knows about the public functions automatically has this namespace available. Thanks to Biju Das for pointing out a cut'n'paste failure in my initial patch. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Link: https://lore.kernel.org/r/20240607160012.1206874-2-u.kleine-koenig@baylibre.com Signed-off-by: Uwe Kleine-König <ukleinek@kernel.org>
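The pattern looks roughly like the sketch below; which symbols are exported follows the patch itself, and the one shown is just an example:

	/* drivers/pwm/core.c */
	EXPORT_SYMBOL_NS_GPL(pwm_apply_might_sleep, PWM);

	/* include/linux/pwm.h -- every file using the header imports the namespace */
	MODULE_IMPORT_NS(PWM);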
2024-07-10  closures: fix closure_sync + closure debugging  (Kent Overstreet)
Originally, stack closures were only used synchronously, and with the original implementation of closure_sync() the ref never hit 0; thus, closure_put_after_sub() assumes that, in debug mode, if the ref hits 0 the closure is on the debug list. That's no longer true with the current implementation of closure_sync(), so we need a new magic value so closure_debug_destroy() doesn't trip an assert. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10  dio: Have dio_bus_match() callback take a const *  (Geert Uytterhoeven)
drivers/dio/dio-driver.c:128:11: error: initialization of ‘int (*)(struct device *, const struct device_driver *)’ from incompatible pointer type ‘int (*)(struct device *, struct device_driver *)’ [-Werror=incompatible-pointer-types]
  128 |         .match          = dio_bus_match,
      |                           ^~~~~~~~~~~~~
drivers/dio/dio-driver.c:128:11: note: (near initialization for ‘dio_bus_type.match’)

Reported-by: noreply@ellerman.id.au Fixes: d69d804845985c29 ("driver core: have match() callback in struct bus_type take a const *") Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/r/20240710074452.2841173-1-geert@linux-m68k.org [ added dio.h change - gregkh ] Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-07-10  USB: core: add 'shutdown' callback to usb_driver  (Kerem Karabay)
Currently there is no standardized method for USB drivers to handle shutdown events. This patch simplifies running code on shutdown for USB devices by adding a shutdown callback to usb_driver. Signed-off-by: Kerem Karabay <kekrby@gmail.com> Signed-off-by: Aditya Garg <gargaditya08@live.com> Link: https://lore.kernel.org/r/7AAC1BF4-8B60-448D-A3C1-B7E80330BE42@live.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
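A driver opting in might look like the sketch below, assuming the new callback takes the interface as disconnect() does; the driver name and stub bodies are hypothetical:

	static int demo_usb_probe(struct usb_interface *intf,
				  const struct usb_device_id *id)
	{
		return 0;
	}

	static void demo_usb_disconnect(struct usb_interface *intf)
	{
	}

	static void demo_usb_shutdown(struct usb_interface *intf)
	{
		/* quiesce the device so it reboots from a known state */
	}

	static struct usb_driver demo_usb_driver = {
		.name		= "demo-usb",
		.probe		= demo_usb_probe,
		.disconnect	= demo_usb_disconnect,
		.shutdown	= demo_usb_shutdown,	/* new callback */
	};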
2024-07-10  usb: gadget: Use u16 types for 16-bit fields  (Kees Cook)
Since the beginning of time, struct usb_ep::maxpacket was a bitfield, and when new 16-bit members were added, the convention was followed:

	1da177e4c3f41 (Linus Torvalds   2005-04-16 236)	unsigned	maxpacket:16;
	e117e742d3106 (Robert Baldyga   2013-12-13 237)	unsigned	maxpacket_limit:16;
	a59d6b91cbca5 (Tatyana Brokhman 2011-06-28 238)	unsigned	max_streams:16;

However, there is no need for this as a simple u16 can be used instead, simplifying the struct and the resulting compiler binary output. Switch to u16 for all three, and rearrange the struct slightly to minimize holes. No change in the final size of the struct results; the 2-byte gap is just moved to the end, as seen with pahole:

	-	/* XXX 2 bytes hole, try to pack */
		...
		/* size: 72, cachelines: 2, members: 15 */
		...
	+	/* padding: 2 */

Changing this simplifies future introspection[1] of maxpacket's type during allocations:

	drivers/usb/gadget/function/f_tcm.c:330:24: error: 'typeof' applied to a bit-field
	  330 |         fu->cmd.buf = kmalloc(fu->ep_out->maxpacket, GFP_KERNEL);

Link: https://lore.kernel.org/all/202407090928.6UaOAZAJ-lkp@intel.com [1] Signed-off-by: Kees Cook <kees@kernel.org> Link: https://lore.kernel.org/r/20240709154953.work.953-kees@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-07-10  dm: factor out helper function from dm_get_device  (Benjamin Marzinski)
Factor out a helper function, dm_devt_from_path(), from dm_get_device() for use in dm targets. Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
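A sketch of how a target constructor could use the helper; the signature is an assumption, taken to resolve a path to a dev_t much like the lookup inside dm_get_device():

	static int demo_tgt_parse_dev(struct dm_target *ti, const char *path)
	{
		dev_t dev;
		int r;

		r = dm_devt_from_path(path, &dev);	/* signature assumed */
		if (r) {
			ti->error = "Device lookup failed";
			return r;
		}

		/* ... use dev without taking a full dm_dev reference ... */
		return 0;
	}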
2024-07-10  dm: Remove max_secure_erase_granularity  (Damien Le Moal)
The max_secure_erase_granularity boolean of struct dm_target is used in __process_abnormal_io() but never set by any target. Remove this field and the dead code using it. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2024-07-10  dm: Remove max_write_zeroes_granularity  (Damien Le Moal)
The max_write_zeroes_granularity boolean of struct dm_target is used in __process_abnormal_io() but never set by any target. Remove this field and the dead code using it. Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
2024-07-10  Merge back cpufreq material for 6.11.  (Rafael J. Wysocki)
2024-07-10  zorro: make match function take a const pointer  (Greg Kroah-Hartman)
In commit d69d80484598 ("driver core: have match() callback in struct bus_type take a const *"), the match callback for busses was changed to take a const pointer to struct device_driver. Unfortunately I missed fixing up the zorro code, which was only noticed after the fact by the kernel test robot. Resolve this issue by properly changing the zorro_bus_match() function. Cc: Geert Uytterhoeven <geert@linux-m68k.org> Fixes: d69d80484598 ("driver core: have match() callback in struct bus_type take a const *") Reported-by: kernel test robot <lkp@intel.com> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/r/20240710073413.495541-2-gregkh@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-07-10  driver core: make driver_find_device() take a const *  (Greg Kroah-Hartman)
The function driver_find_device() does not modify the struct device_driver it is given, so the pointer can safely be marked const. As that is fixed up, also change the function signatures of the inline functions that call it:

	driver_find_device_by_name()
	driver_find_device_by_of_node()
	driver_find_device_by_devt()
	driver_find_next_device()
	driver_find_device_by_acpi_dev()

Cc: "Rafael J. Wysocki" <rafael@kernel.org> Link: https://lore.kernel.org/r/2024070849-broken-front-9eb5@gregkh Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
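The change is confined to the prototype; a before/after sketch of the main one, with the surrounding parameters assumed unchanged:

	/* before */
	struct device *driver_find_device(struct device_driver *drv,
					  struct device *start, const void *data,
					  int (*match)(struct device *dev, const void *data));

	/* after */
	struct device *driver_find_device(const struct device_driver *drv,
					  struct device *start, const void *data,
					  int (*match)(struct device *dev, const void *data));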
2024-07-10  driver core: make driver_[create|remove]_file take a const *  (Greg Kroah-Hartman)
The functions driver_create_file() and driver_remove_file() do not modify the struct device_driver structure directly, so they are safe to be marked as a constant pointer type. Cc: "Rafael J. Wysocki" <rafael@kernel.org> Link: https://lore.kernel.org/r/2024070844-volley-hatchling-c812@gregkh Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2024-07-10  swiotlb: reduce swiotlb pool lookups  (Michael Kelley)
With CONFIG_SWIOTLB_DYNAMIC enabled, each round-trip map/unmap pair in the swiotlb results in 6 calls to swiotlb_find_pool(). In multiple places, the pool is found and used in one function, and then must be found again in the next function that is called, because only the tlb_addr is passed as an argument. These are the six call sites:

dma_direct_map_page:
 1. swiotlb_map -> swiotlb_tbl_map_single -> swiotlb_bounce

dma_direct_unmap_page:
 2. dma_direct_sync_single_for_cpu -> is_swiotlb_buffer
 3. dma_direct_sync_single_for_cpu -> swiotlb_sync_single_for_cpu -> swiotlb_bounce
 4. is_swiotlb_buffer
 5. swiotlb_tbl_unmap_single -> swiotlb_del_transient
 6. swiotlb_tbl_unmap_single -> swiotlb_release_slots

Reduce the number of calls by finding the pool at a higher level and passing it as an argument instead of searching again. A key change is for is_swiotlb_buffer() to return a pool pointer instead of a boolean, and then pass this pool pointer to subsequent swiotlb functions. There are 9 occurrences of is_swiotlb_buffer() used to test if a buffer is a swiotlb buffer before calling a swiotlb function. To reduce code duplication in getting the pool pointer and passing it as an argument, introduce inline wrappers for this pattern. The generated code is essentially unchanged. Since is_swiotlb_buffer() no longer returns a boolean, rename some functions to reflect the change:

 * swiotlb_find_pool() becomes __swiotlb_find_pool()
 * is_swiotlb_buffer() becomes swiotlb_find_pool()
 * is_xen_swiotlb_buffer() becomes xen_swiotlb_find_pool()

With these changes, a round-trip map/unmap pair requires only 2 pool lookups (listed using the new names and wrappers):

dma_direct_unmap_page:
 1. dma_direct_sync_single_for_cpu -> swiotlb_find_pool
 2. swiotlb_tbl_unmap_single -> swiotlb_find_pool

These changes come from noticing the inefficiencies in a code review, not from performance measurements. With CONFIG_SWIOTLB_DYNAMIC, __swiotlb_find_pool() is not trivial, and it uses an RCU read lock, so avoiding the redundant calls helps performance in a hot path. When CONFIG_SWIOTLB_DYNAMIC is *not* set, the code size reduction is minimal and the perf benefits are likely negligible, but no harm is done. No functional change is intended. Signed-off-by: Michael Kelley <mhklinux@outlook.com> Reviewed-by: Petr Tesarik <petr@tesarici.cz> Signed-off-by: Christoph Hellwig <hch@lst.de>
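The caller-side pattern, sketched; the renames are per the commit, while the name of the pool-taking callee is an assumption:

	static void demo_sync_for_cpu(struct device *dev, phys_addr_t paddr,
				      size_t size, enum dma_data_direction dir)
	{
		struct io_tlb_pool *pool;

		/* One lookup: NULL means "not a swiotlb buffer". */
		pool = swiotlb_find_pool(dev, paddr);
		if (pool)
			/* Pass the pool down instead of re-deriving it
			 * from paddr inside the callee. */
			__swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);
	}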
2024-07-10  PCI: Move struct pci_devres.pinned bit to struct pci_dev  (Philipp Stanner)
The bit describing whether the PCI device is currently pinned is stored in struct pci_devres. To clean up and simplify the PCI devres API, it's better if this information is stored in struct pci_dev. This will later permit simplifying pcim_enable_device(). Move the 'pinned' boolean bit to struct pci_dev. Restructure bits in struct pci_dev so the pm / pme fields are next to each other. Link: https://lore.kernel.org/r/20240613115032.29098-9-pstanner@redhat.com Signed-off-by: Philipp Stanner <pstanner@redhat.com> Signed-off-by: Krzysztof Wilczyński <kwilczynski@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
2024-07-09  sched.h: always_inline alloc_tag_{save|restore} to fix modpost warnings  (Suren Baghdasaryan)
Mark alloc_tag_{save|restore} as always_inline to fix the following modpost warnings:

	WARNING: modpost: vmlinux: section mismatch in reference: alloc_tag_save+0x1c (section: .text.unlikely) -> initcall_level_names (section: .init.data)
	WARNING: modpost: vmlinux: section mismatch in reference: alloc_tag_restore+0x3c (section: .text.unlikely) -> initcall_level_names (section: .init.data)

The warnings happen when these functions are called from an __init function and they don't get inlined (remain in the .text section) while the value returned by get_current() points into the .init.data section. Assuming get_current() always returns a valid address, this situation can happen only during the init stage and accessing .init.data from .text during that stage should pose no issues. Link: https://lkml.kernel.org/r/20240704132506.1011978-1-surenb@google.com Fixes: 22d407b164ff ("lib: add allocation tagging support for memory allocation profiling") Signed-off-by: Suren Baghdasaryan <surenb@google.com> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202407032306.gi9nZsBi-lkp@intel.com/ Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Chris Zankel <chris@zankel.net> Cc: Ingo Molnar <mingo@redhat.com> Cc: Juri Lelli <juri.lelli@redhat.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-07-09  Merge branch 'iommufd_pri' into iommufd for-next  (Jason Gunthorpe)
Lu Baolu says:
====================
This series implements the functionality of delivering IO page faults to user space through the IOMMUFD framework. One feasible use case is nested translation. Nested translation is a hardware feature that supports two-stage translation tables for IOMMU. The second-stage translation table is managed by the host VMM, while the first-stage translation table is owned by user space. This allows user space to control the IOMMU mappings for its devices. When an IO page fault occurs on the first-stage translation table, the IOMMU hardware can deliver the page fault to user space through the IOMMUFD framework. User space can then handle the page fault and respond to the device top-down through the IOMMUFD. This allows user space to implement its own IO page fault handling policies. A user space application that is capable of handling IO page faults should allocate a fault object and bind the fault object to any domain for which it is willing to handle the faults generated. On a successful return of fault object allocation, the user can retrieve and respond to page faults by reading from or writing to the file descriptor (FD) returned. The iommu selftest framework has been updated to test the IO page fault delivery and response functionality.
====================

* iommufd_pri:
  iommufd/selftest: Add coverage for IOPF test
  iommufd/selftest: Add IOPF support for mock device
  iommufd: Associate fault object with iommufd_hw_pgtable
  iommufd: Fault-capable hwpt attach/detach/replace
  iommufd: Add iommufd fault object
  iommufd: Add fault and response message definitions
  iommu: Extend domain attach group with handle support
  iommu: Add attach handle to struct iopf_group
  iommu: Remove sva handle list
  iommu: Introduce domain attachment handle

Link: https://lore.kernel.org/all/20240702063444.105814-1-baolu.lu@linux.intel.com Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-07-09  iommufd: Add iommufd fault object  (Lu Baolu)
An iommufd fault object provides an interface for delivering I/O page faults to user space. These objects are created and destroyed by user space, and they can be associated with or dissociated from hardware page table objects during page table allocation or destruction. User space interacts with the fault object through a file interface. This interface offers a straightforward and efficient way for user space to handle page faults. It allows user space to read fault messages sequentially and respond to them by writing to the same file. The file interface supports reading messages in poll mode, so it's recommended that user space applications use io_uring to enhance read and write efficiency. A fault object can be associated with any iopf-capable iommufd_hw_pgtable during the pgtable's allocation. All I/O page faults triggered by devices when accessing the I/O addresses of an iommufd_hw_pgtable are routed through the fault object to user space. Similarly, user space's responses to these page faults are routed back to the iommu device driver through the same fault object. Link: https://lore.kernel.org/r/20240702063444.105814-7-baolu.lu@linux.intel.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2024-07-09  spi: add defer_optimize_message controller flag  (David Lechner)
Adding spi_optimize_message() broke the spi-mux driver because it calls spi_async() from its transfer_one_message() callback. This resulted in passing an incorrectly optimized message to the controller. For example, if the underlying controller has an optimize_message() callback, this would not have been called and can cause a crash when the underlying controller driver tries to transfer the message. Also, since the spi-mux driver swaps out the controller pointer by replacing msg->spi, __spi_unoptimize_message() was being called with a different controller than the one used in __spi_optimize_message(). This could cause a crash when attempting to free the message resources when __spi_unoptimize_message() is called in spi_finalize_current_message(), since it is being called with a controller that did not allocate the resources. This is fixed by adding a defer_optimize_message flag for controllers. This flag causes all of the spi_[maybe_][un]optimize_message() calls to be a no-op (other than attaching a pointer to the spi device to the message). This allows the spi-mux driver to pass an unmodified message to spi_async() in spi_mux_transfer_one_message() after the spi device has been swapped out. This causes __spi_optimize_message() and __spi_unoptimize_message() to be called only once per message and with the correct/same controller in each case. Reported-by: Oleksij Rempel <o.rempel@pengutronix.de> Closes: https://lore.kernel.org/linux-spi/Zn6HMrYG2b7epUxT@pengutronix.de/ Reported-by: Marc Kleine-Budde <mkl@pengutronix.de> Closes: https://lore.kernel.org/linux-spi/20240628-awesome-discerning-bear-1621f9-mkl@pengutronix.de/ Fixes: 7b1d87af14d9 ("spi: add spi_optimize_message() APIs") Signed-off-by: David Lechner <dlechner@baylibre.com> Link: https://patch.msgid.link/20240708-spi-mux-fix-v1-2-6c8845193128@baylibre.com Signed-off-by: Mark Brown <broonie@kernel.org>
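A sketch of the gate this adds in the core; the flag name is per the patch, but the placement and surrounding details are assumptions:

	static int __spi_optimize_message(struct spi_device *spi,
					  struct spi_message *msg)
	{
		struct spi_controller *ctlr = spi->controller;

		msg->spi = spi;		/* always record the device */

		/* Proxy controllers such as spi-mux set this flag: the
		 * message is re-submitted to the real controller, which
		 * then optimizes it exactly once. */
		if (ctlr->defer_optimize_message)
			return 0;

		/* ... validate and call ctlr->optimize_message(msg) ... */
		return 0;
	}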
2024-07-09  thermal: trip: Add conversion macros for thermal trip priv field  (Rafael J. Wysocki)
Some drivers will need to store integers in the priv field of struct thermal_trip, so add conversion macros for doing this in a consistent way and switch over the int340x_thermal driver that already does it and uses custom conversion functions to using the new macros. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://patch.msgid.link/3297884.aeNJFYEL58@rjwysocki.net
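The macros amount to int<->pointer casts plus accessors; a sketch consistent with the description (macro names assumed from the patch, accessor functions hypothetical):

	#define THERMAL_INT_TO_TRIP_PRIV(_val_)	((void *)(unsigned long)(_val_))
	#define THERMAL_TRIP_PRIV_TO_INT(_val_)	((int)(unsigned long)(_val_))

	/* usage sketch */
	static void demo_stash_index(struct thermal_trip *trip, int index)
	{
		trip->priv = THERMAL_INT_TO_TRIP_PRIV(index);
	}

	static int demo_fetch_index(const struct thermal_trip *trip)
	{
		return THERMAL_TRIP_PRIV_TO_INT(trip->priv);
	}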
2024-07-09  thermal: helpers: Introduce thermal_trip_is_bound_to_cdev()  (Rafael J. Wysocki)
Introduce a new helper function thermal_trip_is_bound_to_cdev() for checking whether or not a given trip point has been bound to a given cooling device. The primary user of it will be the Tegra thermal driver. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Link: https://patch.msgid.link/13545762.uLZWGnKmhe@rjwysocki.net
2024-07-09  thermal: core: Change passive_delay and polling_delay data type  (Rafael J. Wysocki)
It is better to use unsigned int as the data type for the passive_delay and polling_delay arguments of thermal_zone_device_register_with_trips() because they are implicitly cast to unsigned int anyway in thermal_set_delay_jiffies() and if they happen to be negative at that point, the resulting behavior may not be as desired. Update the thermal_zone_device_register_with_trips() definition accordingly. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://patch.msgid.link/5803791.DvuYhMxLoT@rjwysocki.net Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-07-09  Merge tag 'opp-updates-6.11' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/vireshk/pm into pm-opp  (Rafael J. Wysocki)
Merge OPP Updates for 6.11 from Viresh Kumar:

 "- Introduce an OF helper function to inform if required-opps is used (Ulf Hansson).
  - Generic cleanups (Ulf Hansson and Viresh Kumar)."

* tag 'opp-updates-6.11' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/vireshk/pm:
  OPP: Introduce an OF helper function to inform if required-opps is used
  OPP: Drop a redundant in-parameter to _set_opp_level()
  OPP: Fix missing cleanup on error in _opp_attach_genpd()
2024-07-09  Merge tag 'cpufreq-arm-updates-6.11' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/vireshk/pm  (Rafael J. Wysocki)
Merge ARM cpufreq updates for 6.11 from Viresh Kumar:

 "- cpufreq: Add Loongson-3 CPUFreq driver support (Huacai Chen).
  - Make exit() callback return void (Lizhe and Viresh Kumar).
  - Minor cleanups and fixes in several drivers (Bryan Brattlof, Javier Carrasco, Jagadeesh Kona, Jeff Johnson, Nícolas F. R. A. Prado, Primoz Fiser, Raphael Gallais-Pou, and Riwen Lu)."

* tag 'cpufreq-arm-updates-6.11' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/vireshk/pm: (21 commits)
  cpufreq: sti: fix build warning
  cpufreq: mediatek: Use dev_err_probe in every error path in probe
  cpufreq: Add Loongson-3 CPUFreq driver support
  cpufreq: Make cpufreq_driver->exit() return void
  cpufreq: pcc: Remove empty exit() callback
  cpufreq: loongson2: Remove empty exit() callback
  cpufreq: nforce2: Remove empty exit() callback
  cpufreq: sti: add missing MODULE_DEVICE_TABLE entry for stih418
  cpufreq: ti: update OPP table for AM62Px SoCs
  cpufreq: ti: update OPP table for AM62Ax SoCs
  cpufreq: sun50i: add Allwinner H700 speed bin
  cpufreq/cppc: Don't compare desired_perf in target()
  OPP: ti: Fix ti_opp_supply_probe wrong return values
  cpufreq: ti-cpufreq: Handle deferred probe with dev_err_probe()
  cpufreq: dt-platdev: add missing MODULE_DESCRIPTION() macro
  cpufreq: longhaul: Fix kernel-doc param for longhaul_setstate
  cpufreq: qcom-nvmem: eliminate uses of of_node_put()
  cpufreq: qcom-nvmem: fix memory leaks in probe error paths
  cpufreq: scmi: Avoid overflow of target_freq in fast switch
  cpufreq: sun50i: replace of_node_put() with automatic cleanup handler
  ...
2024-07-09  Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Paolo Abeni)
Daniel Borkmann says:
====================
pull-request: bpf-next 2024-07-08

The following pull-request contains BPF updates for your *net-next* tree. We've added 102 non-merge commits during the last 28 day(s) which contain a total of 127 files changed, 4606 insertions(+), 980 deletions(-).

The main changes are:

1) Support resilient split BTF which cuts down on duplication and makes BTF as compact as possible wrt BTF from modules, from Alan Maguire & Eduard Zingerman.
2) Add support for dumping kfunc prototypes from BTF which enables both detecting as well as dumping compilable prototypes for kfuncs, from Daniel Xu.
3) Batch of s390x BPF JIT improvements to add support for BPF arena and to implement support for BPF exceptions, from Ilya Leoshkevich.
4) Batch of riscv64 BPF JIT improvements in particular to add 12-argument support for BPF trampolines and to utilize bpf_prog_pack for the latter, from Pu Lehui.
5) Extend BPF test infrastructure to add a CHECKSUM_COMPLETE validation option for skbs and add coverage along with it, from Vadim Fedorenko.
6) Inline bpf_get_current_task/_btf() helpers in the arm64 BPF JIT which gives a small 1% performance improvement in micro-benchmarks, from Puranjay Mohan.
7) Extend the BPF verifier to track the delta between linked registers in order to better deal with recent LLVM code optimizations, from Alexei Starovoitov.
8) Fix bpf_wq_set_callback_impl() kfunc signature where the third argument should have been a pointer to the map value, from Benjamin Tissoires.
9) Extend BPF selftests to add regular expression support for test output matching and adjust some of the selftests when compiled under gcc, from Cupertino Miranda.
10) Simplify task_file_seq_get_next() and remove an unnecessary loop which always iterates exactly once anyway, from Dan Carpenter.
11) Add the capability to offload the netfilter flowtable in XDP layer through kfuncs, from Florian Westphal & Lorenzo Bianconi.
12) Various cleanups in networking helpers in BPF selftests to shave off a few lines of open-coded functions on client/server handling, from Geliang Tang.
13) Properly propagate prog->aux->tail_call_reachable out of BPF verifier, so that x86 JIT does not need to implement detection, from Leon Hwang.
14) Fix BPF verifier to add a missing check_func_arg_reg_off() to prevent an out-of-bounds memory access for dynpointers, from Matt Bobrowski.
15) Fix bpf_session_cookie() kfunc to return __u64 instead of long pointer as it might lead to problems on 32-bit archs, from Jiri Olsa.
16) Enhance traffic validation and dynamic batch size support in xsk selftests, from Tushar Vyavahare.
bpf-next-for-netdev

* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (102 commits)
  selftests/bpf: DENYLIST.aarch64: Remove fexit_sleep
  selftests/bpf: amend for wrong bpf_wq_set_callback_impl signature
  bpf: helpers: fix bpf_wq_set_callback_impl signature
  libbpf: Add NULL checks to bpf_object__{prev_map,next_map}
  selftests/bpf: Remove exceptions tests from DENYLIST.s390x
  s390/bpf: Implement exceptions
  s390/bpf: Change seen_reg to a mask
  bpf: Remove unnecessary loop in task_file_seq_get_next()
  riscv, bpf: Optimize stack usage of trampoline
  bpf, devmap: Add .map_alloc_check
  selftests/bpf: Remove arena tests from DENYLIST.s390x
  selftests/bpf: Add UAF tests for arena atomics
  selftests/bpf: Introduce __arena_global
  s390/bpf: Support arena atomics
  s390/bpf: Enable arena
  s390/bpf: Support address space cast instruction
  s390/bpf: Support BPF_PROBE_MEM32
  s390/bpf: Land on the next JITed instruction after exception
  s390/bpf: Introduce pre- and post- probe functions
  s390/bpf: Get rid of get_probe_mem_regno()
  ...
====================

Link: https://patch.msgid.link/20240708221438.10974-1-daniel@iogearbox.net Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-07-09  Merge tag 'zynqmp-soc2-for-6.11' of https://github.com/Xilinx/linux-xlnx into soc/drivers  (Arnd Bergmann)
arm64: Xilinx SoC changes for 6.11

Timer:
 - Fix u32 overflow issue in 32-bit width PWM mode.

Event manager:
 - rename cpu_number1 to dummy_cpu_number

Power:
 - Add cb event for subsystem restart
 - check return status of get_api_version()

Firmware:
 - Move FIRMWARE_VERSION_MASK to xlnx-zynqmp.h

* tag 'zynqmp-soc2-for-6.11' of https://github.com/Xilinx/linux-xlnx:
  drivers: soc: xilinx: check return status of get_api_version()
  firmware: xilinx: Move FIRMWARE_VERSION_MASK to xlnx-zynqmp.h
  soc: xilinx: Add cb event for subsystem restart
  soc: xilinx: rename cpu_number1 to dummy_cpu_number
  pwm: xilinx: Fix u32 overflow issue in 32-bit width PWM mode.

Link: https://lore.kernel.org/r/CAHTX3dKMtqgNpkEvrw0p2w+SPN83Ai1_kzhefUGOO5rMkPaH_w@mail.gmail.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-07-09  ARM: spitz: Use software nodes to describe MMC GPIOs  (Dmitry Torokhov)
Convert Spitz to use software nodes for specifying GPIOs for the MMC. Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20240628180852.1738922-9-dmitry.torokhov@gmail.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-07-09  vdpa/mlx5: Add support for modifying the VQ features field  (Dragos Tatulea)
This is done in preparation for the pre-creation of hardware virtqueues at device add time. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Message-Id: <20240626-stage-vdpa-vq-precreate-v2-11-560c491078df@nvidia.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-07-09  vdpa/mlx5: Add support for modifying the virtio_version VQ field  (Dragos Tatulea)
This is done in preparation for the pre-creation of hardware virtqueues at device add time. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com> Message-Id: <20240626-stage-vdpa-vq-precreate-v2-10-560c491078df@nvidia.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2024-07-09  perf: Split __perf_pending_irq() out of perf_pending_irq()  (Sebastian Andrzej Siewior)
perf_pending_irq() invokes perf_event_wakeup() and __perf_pending_irq(). The former is in charge of waking any tasks which wait to be woken up, while the latter disables perf events. Although perf_pending_irq() is an irq_work, its callback is invoked in thread context on PREEMPT_RT. This is needed because all the waking functions (wake_up_all(), kill_fasync()) acquire sleeping locks which must not be used with disabled interrupts. Disabling events, as done by __perf_pending_irq(), expects a hardirq context and disabled interrupts. This requirement is not fulfilled on PREEMPT_RT. Split the functionality based on perf_event::pending_disable into an irq_work named `pending_disable_irq' and invoke it in hardirq context on PREEMPT_RT. Rename the split-out callback to perf_pending_disable(). Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Marco Elver <elver@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20240704170424.1466941-8-bigeasy@linutronix.de
2024-07-09  perf: Move swevent_htable::recursion into task_struct.  (Sebastian Andrzej Siewior)
The swevent_htable::recursion counter is used to avoid recursion by not creating a swevent while another event is being processed. The counter is per-CPU, and preemption must be disabled to have a stable counter. perf_pending_task() disables preemption to access the counter and then to send the signal. This is problematic on PREEMPT_RT because sending a signal uses a spinlock_t, which must not be acquired in atomic context on PREEMPT_RT because there it becomes a sleeping lock. The atomic context can be avoided by moving the counter into task_struct. There is a 4-byte hole between futex_state (usually always on) and the following perf pointer (perf_event_ctxp). After the recursion counter lost some weight, it fits perfectly. Move swevent_htable::recursion into task_struct. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Marco Elver <elver@google.com> Link: https://lore.kernel.org/r/20240704170424.1466941-6-bigeasy@linutronix.de
2024-07-09  perf: Enqueue SIGTRAP always via task_work.  (Sebastian Andrzej Siewior)
A signal is delivered by raising irq_work(), which works from any context including NMI. irq_work() can be delayed if the architecture does not provide an interrupt vector. In order not to lose a signal, the signal is injected via task_work during event_sched_out(). Instead of going via irq_work, the signal can be added directly via task_work. The signal is sent to current and can be enqueued on its return path to userland. Queue the signal via task_work and consider possible NMI context. Remove perf_event::pending_sigtrap and use perf_event::pending_work instead. Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Marco Elver <elver@google.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Link: https://lore.kernel.org/r/20240704170424.1466941-4-bigeasy@linutronix.de
2024-07-09  task_work: Add TWA_NMI_CURRENT as an additional notify mode.  (Sebastian Andrzej Siewior)
Adding task_work from NMI context requires the following:
 - kasan_record_aux_stack() is not NMI safe and must be avoided.
 - Using TWA_RESUME is NMI safe. If the NMI occurs while the CPU is in userland, then it will continue in userland and not invoke the `work' callback.

Add TWA_NMI_CURRENT as an additional notify mode. In this mode, skip kasan and use irq_work in hardirq mode to raise the needed interrupt. Set TIF_NOTIFY_RESUME within the irq_work callback due to k[ac]san instrumentation in test_and_set_bit(), which does not look NMI safe in case of a report. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20240704170424.1466941-3-bigeasy@linutronix.de
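A usage sketch from an NMI handler; the callback body and storage here are illustrative:

	static struct callback_head demo_nmi_work;

	static void demo_nmi_work_fn(struct callback_head *head)
	{
		/* runs in task context on return to userland; may sleep */
	}

	static void demo_nmi_handler_tail(void)
	{
		init_task_work(&demo_nmi_work, demo_nmi_work_fn);

		/* TWA_NMI_CURRENT is only valid for the current task and
		 * skips the non-NMI-safe kasan hook internally. */
		if (task_work_add(current, &demo_nmi_work, TWA_NMI_CURRENT))
			return;	/* task is exiting: nothing to queue */
	}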
2024-07-09  perf: Fix event leak upon exec and file release  (Frederic Weisbecker)
The perf pending task work is never waited upon when the matching event is released. In the case of a child event, released via free_event() directly, this can potentially result in a leaked event, such as in the following scenario that doesn't even require a weak IRQ work implementation to trigger:

	schedule()
	   prepare_task_switch()
	=======> <NMI>
	      perf_event_overflow()
	         event->pending_sigtrap = ...
	         irq_work_queue(&event->pending_irq)
	<======= </NMI>
	      perf_event_task_sched_out()
	          event_sched_out()
	              event->pending_sigtrap = 0;
	              atomic_long_inc_not_zero(&event->refcount)
	              task_work_add(&event->pending_task)
	   finish_lock_switch()
	=======> <IRQ>
	   perf_pending_irq()
	      // do nothing, rely on pending task work
	<======= </IRQ>

	begin_new_exec()
	   perf_event_exit_task()
	      perf_event_exit_event()
	         // If is child event
	         free_event()
	            WARN(atomic_long_cmpxchg(&event->refcount, 1, 0) != 1)
	            // event is leaked

Similar scenarios can also happen with perf_event_remove_on_exec() or simply against concurrent perf_event_release(). Fix this by synchronizing against the possibly remaining pending task work while freeing the event, just like is done with remaining pending IRQ work. This means that the pending task callback neither needs nor should hold a reference to the event, preventing it from ever being freed. Fixes: 517e6a301f34 ("perf: Fix perf_pending_task() UaF") Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20240621091601.18227-5-frederic@kernel.org