2022-09-26btrfs: scrub: introduce scrub_block::pages for more efficient memory usage for subpageQu Wenruo
[BACKGROUND]
Currently for scrub, we allocate one page for one sector, this is fine for PAGE_SIZE == sectorsize support, but can waste extra memory for subpage support.

[CODE CHANGE]
Make scrub_block contain all the pages, so if we're scrubbing an extent sized 64K, and our page size is also 64K, we only need to allocate one page.

[LIFESPAN CHANGE]
Since now scrub_sector no longer holds a page, but is using scrub_block::pages[] instead, we have to ensure scrub_block has a longer lifespan for the write bio. The lifespan for the read bio is already large enough.

Now scrub_block will only be released after the write bio has finished.

[COMING NEXT]
Currently we only added scrub_block::pages[] for this purpose, but scrub_sector is still utilizing the old scrub_sector::page. The switch will happen in the next patch.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: scrub: factor out allocation and initialization of scrub_sector into helperQu Wenruo
The allocation and initialization is shared by 3 call sites, and we're going to change the initialization of some members in the upcoming patches.

So factor out the allocation and initialization of scrub_sector into a helper, alloc_scrub_sector(), which will do the following work:

- Allocate the memory for scrub_sector

- Allocate a page for scrub_sector::page

- Initialize scrub_sector::refs to 1

- Attach the allocated scrub_sector to scrub_block
  The attachment is bidirectional, which means scrub_block::sectors[] will be updated and scrub_sector::sblock will also be updated.

- Update scrub_block::sector_count and do an extra sanity check on it

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
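For illustration, a minimal sketch of what such a helper can look like, built only from the bullet list above; the member names (scrub_sector::page/refs/sblock, scrub_block::sectors[]/sector_count) and the SCRUB_MAX_SECTORS_PER_BLOCK limit are taken or assumed from this description, and the in-tree helper may differ in GFP flags and error handling:

  /* Sketch only, not the exact in-tree implementation. */
  static struct scrub_sector *alloc_scrub_sector(struct scrub_block *sblock,
                                                 gfp_t gfp)
  {
      struct scrub_sector *ssector;

      ssector = kzalloc(sizeof(*ssector), gfp);
      if (!ssector)
          return NULL;

      ssector->page = alloc_page(gfp);
      if (!ssector->page) {
          kfree(ssector);
          return NULL;
      }

      atomic_set(&ssector->refs, 1);

      /* Bidirectional attachment between the sector and its block. */
      ssector->sblock = sblock;
      ASSERT(sblock->sector_count < SCRUB_MAX_SECTORS_PER_BLOCK);
      sblock->sectors[sblock->sector_count] = ssector;
      sblock->sector_count++;

      return ssector;
  }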
2022-09-26btrfs: scrub: factor out initialization of scrub_block into helperQu Wenruo
Although there are only two callers, we are going to add some members to scrub_block in the incoming patches. Factoring out the initialization code will make later expansion easier.

One thing to note is, even though scrub_handle_errored_block() doesn't utilize scrub_block::refs, we still use alloc_scrub_block() to initialize sblock->refs, allowing us to use scrub_block_put() to do the cleanup.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: scrub: use pointer array to replace sblocks_for_recheckQu Wenruo
In function scrub_handle_errored_block(), we use the @sblocks_for_recheck pointer to hold one scrub_block for each mirror, and use kcalloc() to allocate the array. But accessing the array through a single pointer, with member offsets computed by pointer addition instead of [], hurts readability.

Change this pointer to struct scrub_block *[BTRFS_MAX_MIRRORS]; this will slightly increase the stack memory usage. Since scrub_handle_errored_block() won't be called recursively, this extra cost is completely acceptable.

And since we're here, also set sblock->refs and use scrub_block_put() to clean them up, as later we will add extra members in scrub_block which need scrub_block_put() to clean them up.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: send: add support for fs-verityBoris Burkov
Preserve the fs-verity status of a btrfs file across send/recv. There is no facility for installing the Merkle tree contents directly on the receiving filesystem, so we package up the parameters used to enable verity found in the verity descriptor. This gives the receive side enough information to properly enable verity again. Note that this means that receive will have to re-compute the whole Merkle tree, similar to how compression worked before encoded_write. Since the file becomes read-only after verity is enabled, it is important that verity is added to the send stream after any file writes. Therefore, when we process a verity item, merely note that it happened, then actually create the command in the send stream during 'finish_inode_if_needed'. This also creates V3 of the send stream format, without any format changes besides adding the new commands and attributes. Signed-off-by: Boris Burkov <boris@bur.io> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: use atomic_try_cmpxchg in free_extent_bufferUros Bizjak
Use `atomic_try_cmpxchg(ptr, &old, new)` instead of `atomic_cmpxchg(ptr, old, new) == old` in free_extent_buffer. This has two benefits:

- The x86 cmpxchg instruction returns success in the ZF flag, so this change saves a compare after cmpxchg, as well as a related move instruction in front of cmpxchg.

- atomic_try_cmpxchg implicitly assigns the *ptr value to &old when cmpxchg fails, enabling further code simplifications.

This patch has no functional change.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
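As a generic illustration of the idiom (not the exact btrfs code), a reference-drop retry loop before and after the conversion might look like this:

  #include <linux/atomic.h>

  /* Drop one reference unless it is the last one; generic sketch only. */
  static bool put_ref_unless_last(atomic_t *refs)
  {
      int old = atomic_read(refs);

      do {
          if (old <= 1)
              return false; /* caller handles the final reference */
          /*
           * Old pattern:  if (atomic_cmpxchg(refs, old, old - 1) == old)
           * New pattern:  atomic_try_cmpxchg() returns success directly and,
           * on failure, refreshes 'old' with the current value, so no extra
           * atomic_read() is needed in the retry path.
           */
      } while (!atomic_try_cmpxchg(refs, &old, old - 1));

      return true;
  }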
2022-09-26btrfs: scrub: remove impossible sanity checksQu Wenruo
There are several sanity checks which are no longer possible to trigger inside btrfs_scrub_dev(), since we have a mount time check against super block nodesize/sectorsize, and our fixed macro is hardcoded to handle even the worst combination. Thus those sanity checks are no longer needed and can be easily removed.

But this patch still keeps some ASSERT()s as a safety net, just in case we change some features in the future and end up triggering those impossible combinations.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: delete btrfs_wait_space_cache_v1_finishedJosef Bacik
We used to use this in a few spots, but now we only use it directly inside of block-group.c, so remove the helper and just open code where we were using it. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: remove lock protection for BLOCK_GROUP_FLAG_RELOCATING_REPAIRJosef Bacik
Before when this was modifying the bit field we had to protect it with the bg->lock, however now we're using bit helpers so we can stop using the bg->lock. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: remove BLOCK_GROUP_FLAG_HAS_CACHING_CTLJosef Bacik
This is used mostly to determine if we need to look at the caching ctl list and clean up any references to this block group. However we never clear this flag, specifically because we need to know if we still have to remove a caching ctl for this block group. This is in the remove block group path, which isn't a fast path, so the optimization doesn't really matter; simplify this logic and remove the flag.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: simplify block group traversal in btrfs_put_block_group_cacheJosef Bacik
We're breaking out and re-searching for the next block group while evicting any of the block group cache inodes. This is not needed, the block groups aren't disappearing here, we can simply loop through the block groups like normal and iput any inode that we find. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: remove lock protection for BLOCK_GROUP_FLAG_TO_COPYJosef Bacik
We use this during device replace for zoned devices, we were simply taking the lock because it was in a bit field and we needed the lock to be safe with other modifications in the bitfield. With the bit helpers we no longer require that locking. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: convert block group bit field to use bit helpersJosef Bacik
We use a bit field in the btrfs_block_group for different flags, however this is awkward because we have to hold the block_group->lock for any modification of any of these fields, and it makes the code clunky for a few of these flags. Convert these to a proper flags setup so we can utilize the bit helpers.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
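A generic before/after sketch of this kind of conversion (the struct and field names below are toy stand-ins; the flag name mirrors BLOCK_GROUP_FLAG_TO_COPY mentioned elsewhere in this series, but the exact in-tree field names may differ):

  #include <linux/bitops.h>
  #include <linux/spinlock.h>

  /* Toy stand-in for the block group, for illustration only. */
  enum { BLOCK_GROUP_FLAG_TO_COPY };

  struct bg_demo {
      spinlock_t lock;
      unsigned long runtime_flags; /* replaces the old bitfield members */
  };

  static void mark_to_copy(struct bg_demo *bg)
  {
      /*
       * Old style (bitfield member):
       *     spin_lock(&bg->lock);
       *     bg->to_copy = 1;
       *     spin_unlock(&bg->lock);
       *
       * New style: atomic bit helpers, no bg->lock needed for the flag.
       */
      set_bit(BLOCK_GROUP_FLAG_TO_COPY, &bg->runtime_flags);
  }

  static bool needs_copy(struct bg_demo *bg)
  {
      return test_bit(BLOCK_GROUP_FLAG_TO_COPY, &bg->runtime_flags);
  }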
2022-09-26btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_infoJosef Bacik
We previously had the pattern of

  btrfs_update_space_info(all, the, bg, fields, &space_info);
  link_block_group(bg);
  bg->space_info = space_info;

Now that we're passing the bg into btrfs_add_bg_to_space_info we can do the linking in that function, transforming this to simply

  btrfs_add_bg_to_space_info(fs_info, bg);

and put the link_block_group() and bg->space_info assignment directly in btrfs_add_bg_to_space_info.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: simplify arguments of btrfs_update_space_info and renameJosef Bacik
This function has grown a bunch of new arguments, and it just boils down to passing in all the block group fields as arguments. Simplify this by passing in the block group itself and updating the space_info fields based on the block group fields directly. Signed-off-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: use btrfs_fs_closing for background bg workJosef Bacik
For both unused bg deletion and async balance work we'll happily run if the fs is closing. However I want to move these to their own worker thread, and they can be long running jobs, so add a check to see if we're closing and simply bail. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: rename btrfs_insert_file_extent() to btrfs_insert_hole_extent()Omar Sandoval
btrfs_insert_file_extent() is only ever used to insert holes, so rename it and remove the redundant parameters. Reviewed-by: Qu Wenruo <wqu@suse.com> Signed-off-by: Omar Sandoval <osandov@osandov.com> Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: sysfs: use sysfs_streq for string matchingDavid Sterba
We have our own string matching helper that duplicates what sysfs_streq does, with the slight difference that it skips initial whitespace. So far this is used for the drive allocation policy. Initial whitespace in written sysfs values should rather be discouraged, and we should use the standard helper.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
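For illustration, a store handler using sysfs_streq() might look like the sketch below; the attribute, policy enum, and variable names are hypothetical, not the actual btrfs sysfs code:

  #include <linux/kobject.h>
  #include <linux/string.h>
  #include <linux/sysfs.h>

  enum { READ_POLICY_PID };          /* hypothetical policy enum */
  static int read_policy;            /* hypothetical current policy */

  static ssize_t read_policy_store(struct kobject *kobj,
                                   struct kobj_attribute *a,
                                   const char *buf, size_t len)
  {
      /*
       * sysfs_streq() treats "pid" and "pid\n" as equal, so the handler
       * needs no manual newline/whitespace stripping of its own.
       */
      if (sysfs_streq(buf, "pid"))
          read_policy = READ_POLICY_PID;
      else
          return -EINVAL;

      return len;
  }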
2022-09-26btrfs: scrub: try to fix super block errorsQu Wenruo
[BUG]
The following script shows that, although scrub can detect super block errors, it never tries to fix them:

  mkfs.btrfs -f -d raid1 -m raid1 $dev1 $dev2
  xfs_io -c "pwrite 67108864 4k" $dev2
  mount $dev1 $mnt
  btrfs scrub start -B $dev2
  btrfs scrub start -Br $dev2
  umount $mnt

The first scrub reports the super error correctly:

  scrub done for f3289218-abd3-41ac-a630-202f766c0859
  Scrub started:    Tue Aug  2 14:44:11 2022
  Status:           finished
  Duration:         0:00:00
  Total to scrub:   1.26GiB
  Rate:             0.00B/s
  Error summary:    super=1
    Corrected:      0
    Uncorrectable:  0
    Unverified:     0

But the second read-only scrub still reports the same super error:

  Scrub started:    Tue Aug  2 14:44:11 2022
  Status:           finished
  Duration:         0:00:00
  Total to scrub:   1.26GiB
  Rate:             0.00B/s
  Error summary:    super=1
    Corrected:      0
    Uncorrectable:  0
    Unverified:     0

[CAUSE]
The comment already shows that super block errors are assumed to be easily fixed by committing a transaction:

  /*
   * If we find an error in a super block, we just report it.
   * They will get written with the next transaction commit
   * anyway
   */

But the truth is, that assumption is not always true, and since scrub should try to repair every error it finds (except for read-only scrub), we should really actively commit a transaction to fix this.

[FIX]
Just commit a transaction if we found any super block errors, after everything else is done.

We cannot do this right after scrub_supers(), as btrfs_commit_transaction() will try to pause and wait for the running scrub, thus we cannot call it with scrub_lock held.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
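A minimal sketch of the fix described above, assuming the super block error count is tracked in the scrub progress counters; the exact condition and its placement inside btrfs_scrub_dev() may differ:

  /* Fragment: after scrub has finished and scrub_lock has been dropped. */
  if (!ret && !sctx->readonly && sctx->stat.super_errors) {
      struct btrfs_trans_handle *trans;

      /* Rewrite the super blocks via a regular transaction commit. */
      trans = btrfs_start_transaction(fs_info->tree_root, 0);
      if (!IS_ERR(trans))
          ret = btrfs_commit_transaction(trans);
  }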
2022-09-26btrfs: scrub: properly report super block errors in system logQu Wenruo
[PROBLEM]
Unlike data/metadata corruption, if scrub detects an error in the super block, the only error message is from the updated device status:

  BTRFS info (device dm-1): scrub: started on devid 2
  BTRFS error (device dm-1): bdev /dev/mapper/test-scratch2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
  BTRFS info (device dm-1): scrub: finished on devid 2 with status: 0

This is not helpful at all.

[CAUSE]
Unlike data/metadata error reporting, there is no visible report in the kernel dmesg for super block errors.

In fact, the return value of scrub_checksum_super() is intentionally skipped, thus scrub_handle_errored_block() will never be called for super blocks.

[FIX]
Make super block errors output an error message; now the full dmesg would look like this:

  BTRFS info (device dm-1): scrub: started on devid 2
  BTRFS warning (device dm-1): super block error on device /dev/mapper/test-scratch2, physical 67108864
  BTRFS error (device dm-1): bdev /dev/mapper/test-scratch2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
  BTRFS info (device dm-1): scrub: finished on devid 2 with status: 0
  BTRFS info (device dm-1): scrub: started on devid 2

This fix involves:

- Moving the super_errors reporting to scrub_handle_errored_block()
  This allows the device status message to show up after the super block error message. But now we no longer distinguish super block corruption from generation mismatch; all are counted as corruption.

- Properly checking the return value of scrub_checksum_super()

- Adding extra super block error reporting to scrub_print_warning()

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: fix alignment of VMA for memory mapped files on THPAlexander Zhu
With CONFIG_READ_ONLY_THP_FOR_FS, the Linux kernel supports using THPs for read-only mmapped files, such as shared libraries. However, the kernel makes no attempt to actually align those mappings on 2MB boundaries, which makes it impossible to use those THPs most of the time. This issue applies to general file mapping THP as well as existing setups using CONFIG_READ_ONLY_THP_FOR_FS.

This is easily fixed by using thp_get_unmapped_area for the unmapped_area function in btrfs, which is what ext2, ext4, fuse, and xfs all use.

Initially btrfs had been left out in commit 8c07fc452ac0 ("btrfs: fix alignment of VMA for memory mapped files on THP") as btrfs does not support DAX. However, commit 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX") removed the DAX requirement. We should now be able to call thp_get_unmapped_area() for btrfs.

The problem can be seen in /proc/PID/smaps where THPeligible is set to 0 on mappings to eligible shared object files, as shown below.

Before this patch:

  7fc6a7e18000-7fc6a80cc000 r-xp 00000000 00:1e 199856  /usr/lib64/libcrypto.so.1.1.1k
  Size:               2768 kB
  THPeligible:    0
  VmFlags: rd ex mr mw me

With this patch the library is mapped at a 2MB aligned address:

  7fbdfe200000-7fbdfe4b4000 r-xp 00000000 00:1e 199856  /usr/lib64/libcrypto.so.1.1.1k
  Size:               2768 kB
  THPeligible:    1
  VmFlags: rd ex mr mw me

This fixes the alignment of VMAs for any mmap of a file that has the rd and ex permissions and size >= 2MB. The VMA alignment and THPeligible field for anonymous memory is handled separately and is thus not affected by this change.

CC: stable@vger.kernel.org # 5.18+
Signed-off-by: Alexander Zhu <alexlzhu@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
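The change itself boils down to wiring the helper into the file operations, roughly as sketched below (other members elided; this is an illustration, not the full btrfs_file_operations definition):

  const struct file_operations btrfs_file_operations = {
      /* ... */
      .mmap              = btrfs_file_mmap,
      /* Let mmap() pick 2MB-aligned addresses for THP-eligible mappings. */
      .get_unmapped_area = thp_get_unmapped_area,
      /* ... */
  };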
2022-09-26btrfs: add lockdep annotations for the ordered extents wait eventIoannis Angelakopoulos
This wait event is very similar to the pending ordered wait event in the sense that it occurs in a different context than the condition signaling for the event. The signaling occurs in btrfs_remove_ordered_extent() while the wait event is implemented in btrfs_start_ordered_extent() in fs/btrfs/ordered-data.c However, in this case a thread must not acquire the lockdep map for the ordered extents wait event when the ordered extent is related to a free space inode. That is because lockdep creates dependencies between locks acquired both in execution paths related to normal inodes and paths related to free space inodes, thus leading to false positives. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: change the lockdep class of free space inode's invalidate_lockIoannis Angelakopoulos
Reinitialize the class of the lockdep map for struct inode's mapping->invalidate_lock in load_free_space_cache() function in fs/btrfs/free-space-cache.c. This will prevent lockdep from producing false positives related to execution paths that make use of free space inodes and paths that make use of normal inodes. Specifically, with this change lockdep will create separate lock dependencies that include the invalidate_lock, in the case that free space inodes are used and in the case that normal inodes are used. The lockdep class for this lock was first initialized in inode_init_always() in fs/inode.c. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com> Signed-off-by: David Sterba <dsterba@suse.com>
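A minimal sketch of such a reinitialization, assuming a dedicated lock class key for free space inodes (the key name here is illustrative):

  static struct lock_class_key btrfs_free_space_inode_key;

  /* In load_free_space_cache(), once the free space inode is obtained: */
  lockdep_set_class(&inode->i_mapping->invalidate_lock,
                    &btrfs_free_space_inode_key);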
2022-09-26btrfs: add lockdep annotations for pending_ordered wait eventIoannis Angelakopoulos
In contrast to the num_writers and num_extwriters wait events, the condition for the pending ordered wait event is signaled in a different context from the wait event itself. The condition signaling occurs in btrfs_remove_ordered_extent() in fs/btrfs/ordered-data.c while the wait event is implemented in btrfs_commit_transaction() in fs/btrfs/transaction.c Thus the thread signaling the condition has to acquire the lockdep map as a reader at the start of btrfs_remove_ordered_extent() and release it after it has signaled the condition. In this case some dependencies might be left out due to the placement of the annotation, but it is better than no annotation at all. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: add lockdep annotations for transaction states wait eventsIoannis Angelakopoulos
Add lockdep annotations for the transaction states that have wait events:

1) TRANS_STATE_COMMIT_START
2) TRANS_STATE_UNBLOCKED
3) TRANS_STATE_SUPER_COMMITTED
4) TRANS_STATE_COMPLETED

The new macros introduced here to annotate the transaction states wait events have the same effect as the generic lockdep annotation macros.

With the exception of the lockdep annotation for TRANS_STATE_COMMIT_START, the transaction thread has to acquire the lockdep maps for the transaction states as reader after the lockdep map for num_writers is released, so that lockdep does not complain.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: add lockdep annotations for num_extwriters wait eventIoannis Angelakopoulos
Similarly to the num_writers wait event in fs/btrfs/transaction.c add a lockdep annotation for the num_extwriters wait event. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: add lockdep annotations for num_writers wait eventIoannis Angelakopoulos
Annotate the num_writers wait event in fs/btrfs/transaction.c with lockdep in order to catch deadlocks involving this wait event. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-09-26btrfs: add macros for annotating wait events with lockdepIoannis Angelakopoulos
Introduce four macros that are used to annotate wait events in btrfs code with lockdep:

1) btrfs_lockdep_init_map
2) btrfs_lockdep_acquire
3) btrfs_lockdep_release
4) btrfs_might_wait_for_event

The btrfs_lockdep_init_map macro is used to initialize a lockdep map.

The btrfs_lockdep_<acquire,release> macros are used by threads to take the lockdep map as readers (shared lock) and release it, respectively.

The btrfs_might_wait_for_event macro is used by threads to take the lockdep map as writers (exclusive lock) and release it.

In general, the lockdep annotation for wait events works as follows:

The condition for a wait event can be modified and signaled at the same time by multiple threads. These threads hold the lockdep map as readers when they enter a context in which blocking would prevent signaling the condition. Frequently, this occurs when a thread violates a condition (lockdep map acquire), before restoring it and signaling it at a later point (lockdep map release).

The threads that block on the wait event take the lockdep map as writers (exclusive lock). These threads have to block until all the threads that hold the lockdep map as readers signal the condition for the wait event and release the lockdep map.

The lockdep annotation is used to warn about potential deadlock scenarios that involve the threads that modify and signal the wait event condition and threads that block on the wait event. A simple example is illustrated below.

Without lockdep:

  TA                                TB
  cond = false
                                    lock(A)
                                    wait_event(w, cond)
                                    unlock(A)
  lock(A)
  cond = true
  signal(w)
  unlock(A)

With lockdep:

  TA                                TB
  rwsem_acquire_read(lockdep_map)
  cond = false
                                    lock(A)
                                    rwsem_acquire(lockdep_map)
                                    rwsem_release(lockdep_map)
                                    wait_event(w, cond)
                                    unlock(A)
  lock(A)
  cond = true
  signal(w)
  unlock(A)
  rwsem_release(lockdep_map)

In the second case, with the lockdep annotation, lockdep would warn about an ABBA deadlock, while the first case would just deadlock at some point.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
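A sketch of how such macros can be built on top of lockdep's rwsem map primitives is shown below; the map field naming (lock##_map on the owning structure) is an assumption for illustration, and the in-tree definitions may differ in detail:

  #define btrfs_lockdep_init_map(owner, lock)                          \
      do {                                                             \
          static struct lock_class_key lock##_key;                     \
          lockdep_init_map(&(owner)->lock##_map, #lock, &lock##_key, 0); \
      } while (0)

  /* Condition modifiers/signalers hold the map as readers. */
  #define btrfs_lockdep_acquire(owner, lock)                           \
      rwsem_acquire_read(&(owner)->lock##_map, 0, 0, _THIS_IP_)

  #define btrfs_lockdep_release(owner, lock)                           \
      rwsem_release(&(owner)->lock##_map, _THIS_IP_)

  /* Waiters briefly take the map as writers before blocking. */
  #define btrfs_might_wait_for_event(owner, lock)                      \
      do {                                                             \
          rwsem_acquire(&(owner)->lock##_map, 0, 0, _THIS_IP_);        \
          rwsem_release(&(owner)->lock##_map, _THIS_IP_);              \
      } while (0)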
2022-09-26btrfs: dump extra info if one free space cache has more bitmaps than it shouldQu Wenruo
There is an internal report on hitting the following ASSERT() in recalculate_thresholds():

  ASSERT(ctl->total_bitmaps <= max_bitmaps);

Above @max_bitmaps is calculated using the following variables:

- bytes_per_bg
  8 * 4096 * 4096 (128M) for x86_64/x86.

- block_group->length
  The length of the block group.

@max_bitmaps is the rounded up value of block_group->length / 128M.

Normally one free space cache should not have more bitmaps than the above value, but when it happens the ASSERT() can be triggered if CONFIG_BTRFS_ASSERT is also enabled.

But the ASSERT() itself won't provide enough info to know what is going wrong. Is the bg too small, thus it only allows one bitmap? Or is there something else wrong?

So although I haven't found extra reports or a crash dump to do further investigation, add the extra info to make it more helpful to debug.

Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
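For reference, the calculation described above corresponds roughly to the following fragment; the identifier names are taken from this description and the in-tree code may structure it differently:

  /* Each bitmap page covers PAGE_SIZE * 8 bits, one unit (sectorsize) per bit. */
  u64 bytes_per_bg = (u64)BITS_PER_BYTE * PAGE_SIZE * ctl->unit; /* 8 * 4096 * 4096 = 128M */
  u32 max_bitmaps = div64_u64(block_group->length + bytes_per_bg - 1,
                              bytes_per_bg);

  ASSERT(ctl->total_bitmaps <= max_bitmaps);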
2022-09-26drm/ttm: add dma_resv_assert_held() calls to vmap/vunmapChristian König
Let's make sure nobody is calling those functions without holding the appropriate locks. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220715111533.467012-2-christian.koenig@amd.com
2022-09-26wifi: ath11k: Fix deadlock during WoWLAN suspendBaochen Qiang
We are seeing system hangs during WoWLAN suspend, and get the below two stacks:

Stack1:
  [ffffb02cc1557b20] __schedule at ffffffff8bb10860
  [ffffb02cc1557ba8] schedule at ffffffff8bb10f24
  [ffffb02cc1557bb8] schedule_timeout at ffffffff8bb16d88
  [ffffb02cc1557c30] wait_for_completion at ffffffff8bb11778
  [ffffb02cc1557c78] __flush_work at ffffffff8b0b30cd
  [ffffb02cc1557cf0] __cancel_work_timer at ffffffff8b0b33ad
  [ffffb02cc1557d60] ath11k_mac_drain_tx at ffffffffc0c1f0ca [ath11k]
  [ffffb02cc1557d70] ath11k_wow_op_suspend at ffffffffc0c5201e [ath11k]
  [ffffb02cc1557da8] __ieee80211_suspend at ffffffffc11e2bd3 [mac80211]
  [ffffb02cc1557dd8] wiphy_suspend at ffffffffc0f901ac [cfg80211]
  [ffffb02cc1557e08] dpm_run_callback at ffffffff8b75118a
  [ffffb02cc1557e38] __device_suspend at ffffffff8b751630
  [ffffb02cc1557e70] async_suspend at ffffffff8b7519ea
  [ffffb02cc1557e88] async_run_entry_fn at ffffffff8b0bf4ce
  [ffffb02cc1557ea8] process_one_work at ffffffff8b0b1a24
  [ffffb02cc1557ee0] worker_thread at ffffffff8b0b1c4a
  [ffffb02cc1557f18] kthread at ffffffff8b0b9cb8
  [ffffb02cc1557f50] ret_from_fork at ffffffff8b001d32

Stack2:
  [ffffb02cc00b7d18] __schedule at ffffffff8bb10860
  [ffffb02cc00b7da0] schedule at ffffffff8bb10f24
  [ffffb02cc00b7db0] schedule_preempt_disabled at ffffffff8bb112b4
  [ffffb02cc00b7db8] __mutex_lock at ffffffff8bb127ea
  [ffffb02cc00b7e38] ath11k_mgmt_over_wmi_tx_work at ffffffffc0c1aa44 [ath11k]
  [ffffb02cc00b7ea8] process_one_work at ffffffff8b0b1a24
  [ffffb02cc00b7ee0] worker_thread at ffffffff8b0b1c4a
  [ffffb02cc00b7f18] kthread at ffffffff8b0b9cb8
  [ffffb02cc00b7f50] ret_from_fork at ffffffff8b001d32

From the first stack, ath11k_mac_drain_tx calls cancel_work_sync(&ar->wmi_mgmt_tx_work) and waits for all packets to be sent out or dropped. However, we find from Stack2 that this work item is blocked because ar->conf_mutex is already held by ath11k_wow_op_suspend.

Fix this issue by moving ath11k_mac_wait_tx_complete to the start of ath11k_wow_op_suspend, where ar->conf_mutex has not been acquired. This change also makes the logic in ath11k_wow_op_suspend match the logic in ath11k_mac_op_start and ath11k_mac_op_stop.

Tested-on: WCN6855 hw2.0 PCI WLAN.HSP.1.1-03125-QCAHSPSWPL_V1_V2_SILICONZ_LITE-3

Signed-off-by: Baochen Qiang <quic_bqiang@quicinc.com>
Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com>
Link: https://lore.kernel.org/r/20220919021435.2459-1-quic_bqiang@quicinc.com
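The reordering described above looks roughly like the sketch below; the body of ath11k_wow_op_suspend() is heavily elided, and error handling plus the rest of the WoWLAN setup are omitted:

  static int ath11k_wow_op_suspend(struct ieee80211_hw *hw,
                                   struct cfg80211_wowlan *wowlan)
  {
      struct ath11k *ar = hw->priv;
      int ret;

      /*
       * Drain/complete TX before taking conf_mutex, so that
       * ath11k_mgmt_over_wmi_tx_work() cannot deadlock on it.
       */
      ret = ath11k_mac_wait_tx_complete(ar);
      if (ret)
          return ret;

      mutex_lock(&ar->conf_mutex);
      /* ... WoWLAN wakeup configuration, hif suspend, etc. ... */
      mutex_unlock(&ar->conf_mutex);

      return ret;
  }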
2022-09-26wifi: ath11k: Remove redundant ath11k_mac_drain_txBaochen Qiang
ath11k_mac_drain_tx is already called in ath11k_mac_wait_tx_complete, no need to call it again. So remove it. This is found in code review. Tested-on: WCN6855 hw2.0 PCI WLAN.HSP.1.1-03125-QCAHSPSWPL_V1_V2_SILICONZ_LITE-3 Signed-off-by: Baochen Qiang <quic_bqiang@quicinc.com> Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com> Link: https://lore.kernel.org/r/20220919020259.1746-1-quic_bqiang@quicinc.com
2022-09-26wifi: ath11k: Add spectral scan support for 160 MHzTamizh Chelvam Raja
There are two types of 160 MHz spectral scan support, mentioned below:

1. Fragmented approach
2. Single event approach

In the fragmented approach, a single 160 MHz scan will be split into two 80 MHz buffers. The first fft sample buffer will contain the spectral scan result of the primary 80 MHz and the second fft sample buffer will contain the secondary 80 MHz, and here cfreq1 and cfreq2 will be mentioned. In case of 160 MHz on the 36th channel, it will contain cfreq1 as 5210 and cfreq2 as 5290. Chipsets which support this approach are IPQ8074/IPQ6018. Replace freq1 with freq2 in every secondary spectral scan event to distinguish between the two different 80 MHz spectral event data.

In the second approach, each fft sample buffer will contain the spectral scan result for the whole 160 MHz, mentioning cfreq1 as 5250 which is the center frequency of the whole 160 MHz. The chipset which supports this approach is QCN9074. The host will receive a spectral event from the target for every 5 fft samples.

Tested-on: IPQ8074 hw2.0 AHB WLAN.HK.2.5.0.1-01120-QCAHKSWPL-1
Tested-on: QCN9074 hw1.0 PCI WLAN.HK.2.5.0.1-01120-QCAHKSWP

Signed-off-by: Tamizh Chelvam Raja <quic_tamizhr@quicinc.com>
Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com>
Link: https://lore.kernel.org/r/20220725055001.15194-1-quic_tamizhr@quicinc.com
2022-09-26wifi: ath11k: Add support to get power save duration for each clientVenkateswara Naralasetty
Add support to get the following power save information through the debugfs interface:

* Current ps state of the peer
* Time duration since the peer is in power save
* Total duration of the peer spent in power save

The above information is helpful in debugging issues with power save clients. This patch also adds trace log support for the PS timekeeper to track the PS state changes of the peers along with the peer MAC address and timestamp.

Use the below commands to get the above power save information.

To know the time_since_station_in_power_save:
  cat /sys/kernel/debug/ieee80211/phyX/netdev:wlanX/stations/XX:XX:XX:XX:XX:XX/current_ps_duration

To know power_save_duration:
  cat /sys/kernel/debug/ieee80211/phyX/netdev:wlanX/stations/XX:XX:XX:XX:XX:XX/total_ps_duration

To reset the power_save_duration of all stations connected to the AP:
  echo 1 > /sys/kernel/debug/ieee80211/phyX/ath11k/reset_ps_duration

To enable/disable the ps_timekeeper:
  echo Y > /sys/kernel/debug/ieee80211/phyX/ath11k/ps_timekeeper_enable
  (Y = 1 to enable and Y = 0 to disable)

To record PS timekeeper logs after enabling ps_timekeeper:
  trace-cmd record -e ath11k_ps_timekeeper

Tested-on: IPQ8074 WLAN.HK.2.5.0.1-00991-QCAHKSWPL_SILICONZ-1

Signed-off-by: Venkateswara Naralasetty <quic_vnaralas@quicinc.com>
Signed-off-by: Tamizh Chelvam Raja <quic_tamizhr@quicinc.com>
Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com>
Link: https://lore.kernel.org/r/20220725054601.14719-1-quic_tamizhr@quicinc.com
2022-09-26fanotify: Remove obsoleted fanotify_event_has_path()Gaosheng Cui
All uses of fanotify_event_has_path() have been removed since commit 9c61f3b560f5 ("fanotify: break up fanotify_alloc_event()"), now it is useless, so remove it. Link: https://lore.kernel.org/r/20220926023018.1505270-1-cuigaosheng1@huawei.com Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Signed-off-by: Jan Kara <jack@suse.cz>
2022-09-26cpufreq: qcom-cpufreq-hw: Add cpufreq qos for LMhXuewen Yan
Before the thermal pressure is updated, the max cpufreq should be limited. Add freq QoS control for the LMh throttled cpufreq.

Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
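A hedged sketch of what such a freq QoS cap can look like in an LMh throttle path; the struct, function, and parameter names below are illustrative, not necessarily the driver's actual identifiers:

  #include <linux/cpufreq.h>
  #include <linux/pm_qos.h>

  /* Illustrative per-policy state; the real driver stores this differently. */
  struct lmh_throttle_state {
      struct freq_qos_request req;
      bool req_added;
  };

  static void lmh_limit_max_freq(struct lmh_throttle_state *st,
                                 struct cpufreq_policy *policy,
                                 s32 throttled_freq_khz)
  {
      if (!st->req_added) {
          /* Register a MAX frequency constraint for this policy. */
          freq_qos_add_request(&policy->constraints, &st->req,
                               FREQ_QOS_MAX, throttled_freq_khz);
          st->req_added = true;
      } else {
          /* Tighten/relax the cap as the LMh throttle level changes. */
          freq_qos_update_request(&st->req, throttled_freq_khz);
      }
  }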
2022-09-26gpio: mvebu: Fix check for pwm support on non-A8K platformsPali Rohár
pwm support incompatible with Armada 80x0/70x0 API is not only in Armada 370, but also in Armada XP, 38x and 39x. So basically every non-A8K platform. Fix check for pwm support appropriately. Fixes: 85b7d8abfec7 ("gpio: mvebu: add pwm support for Armada 8K/7K") Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
2022-09-26drm/ast: make ast_modeset staticruanjinjie
The symbol is not used outside of the file, so mark it static. Fixes the following warning: drivers/gpu/drm/ast/ast_drv.c:42:5: warning: symbol 'ast_modeset' was not declared. Should it be static? Signed-off-by: ruanjinjie <ruanjinjie@huawei.com> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Link: https://patchwork.freedesktop.org/patch/msgid/20220926023253.739699-1-ruanjinjie@huawei.com
2022-09-26ALSA: memalloc: use __GFP_RETRY_MAYFAIL for DMA mem allocsKai Vehmanen
Use __GFP_RETRY_MAYFAIL instead of __GFP_NORETRY in snd_dma_dev_alloc(), snd_dma_wc_alloc() and friends, to allocate pages for device memory. The MAYFAIL flag retains the semantics of not triggering the OOM killer, but lowers the risk of alloc failure. The MAYFAIL flag was added in commit dcda9b04713c3 ("mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic").

This change addresses recurring failures with the SOF audio driver in test cases where a system suspend-resume stress test is run, combined with an active high memory-load use-case. The failure typically shows up as:

  [  379.480229] sof-audio-pci-intel-tgl 0000:00:1f.3: booting DSP firmware
  [  379.484803] sof-audio-pci-intel-tgl 0000:00:1f.3: error: memory alloc failed: -12
  [  379.484810] sof-audio-pci-intel-tgl 0000:00:1f.3: error: dma prepare for ICCMAX stream failed

Multiple fixes to reduce the memory usage of DSP boot have been identified in the SOF driver, but even with those fixes, debug on affected systems has shown that even a single page alloc may fail with __GFP_NORETRY. When this occurs, the system is under significant load on physical memory, but a lot of reclaimable pages are available, so the system has not run out of memory. With __GFP_RETRY_MAYFAIL, the errors are not hit in these stress tests.

The alloc failure is severe, as audio capability is completely lost if the alloc failure is hit at system resume.

An alternative solution was considered where the resources for DSP boot would be kept allocated until the driver is unbound. This would avoid the allocation failure, but consume memory that is only needed temporarily at probe and resume time. It seems better not to hang on to the memory, but rather work a bit harder for allocating the pages at resume.

BugLink: https://github.com/thesofproject/linux/issues/3844
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Link: https://lore.kernel.org/r/20220923153501.3326041-1-kai.vehmanen@linux.intel.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
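As a minimal illustration of the flag change (the function name and exact flag combination are a sketch, not the actual ALSA memalloc code):

  #include <linux/gfp.h>

  static struct page *snd_dma_alloc_pages_sketch(size_t size)
  {
      /* Before: GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN */
      gfp_t gfp = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;

      /* Still avoids the OOM killer, but retries reclaim harder. */
      return alloc_pages(gfp, get_order(size));
  }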
2022-09-26ALSA: hda/hdmi: Limit the maximal count of PCM devices to 8Jaroslav Kysela
The current hardware has up to 4 converters, so save a little space. The limit of 8 is enough even for more capable future hardware.

Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Link: https://lore.kernel.org/r/20220923082236.61024-1-perex@perex.cz
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2022-09-26cpufreq: Add __init annotation to module init funcsXiu Jianfeng
Add missing __init annotation to module init funcs. Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2022-09-26cpufreq: tegra194: change tegra239_cpufreq_soc to staticYang Yingliang
tegra239_cpufreq_soc is only used in tegra194-cpufreq.c now, change it to static. Fixes: 676886010707 ("cpufreq: tegra194: Add support for Tegra239") Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2022-09-26drm/exynos: Fix return type for mixer_mode_valid and hdmi_mode_validNathan Huckleberry
The field mode_valid in exynos_drm_crtc_ops is expected to be of type

  enum drm_mode_status (*mode_valid)(struct exynos_drm_crtc *crtc,
                                     const struct drm_display_mode *mode);

Likewise for mode_valid in drm_connector_helper_funcs.

The mismatched return type breaks forward edge kCFI since the underlying function definition does not match the function hook definition. The return type of mixer_mode_valid and hdmi_mode_valid should be changed from int to enum drm_mode_status.

Reported-by: Dan Carpenter <error27@gmail.com>
Link: https://github.com/ClangBuiltLinux/linux/issues/1703
Cc: llvm@lists.linux.dev
Signed-off-by: Nathan Huckleberry <nhuck@google.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
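A sketch of the resulting connector hook definition (the function body is elided; MODE_OK stands in for the existing checks):

  static enum drm_mode_status hdmi_mode_valid(struct drm_connector *connector,
                                              struct drm_display_mode *mode)
  {
      /*
       * ... existing clock/resolution checks, now returning
       * enum drm_mode_status values instead of int ...
       */
      return MODE_OK;
  }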
2022-09-26drm/exynos: replace drm_detect_hdmi_monitor() with drm_display_info.is_hdmihongao
Once EDID is parsed, the monitor HDMI support information is available through drm_display_info.is_hdmi. This driver calls drm_detect_hdmi_monitor() to receive the same information, which is less efficient. Avoid calling drm_detect_hdmi_monitor() and use drm_display_info.is_hdmi instead. Signed-off-by: hongao <hongao@uniontech.com> Signed-off-by: Inki Dae <inki.dae@samsung.com>
2022-09-25hwmon: (ina3221) Use DEFINE_RUNTIME_DEV_PM_OPS() and pm_ptr()Jonathan Cameron
These new macros allow the compiler to see all the functions even if !CONFIG_PM* and remove the structures and functions if unused. This removes the need for __maybe_unused markings. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Ninad Malwade <nmalwade@nvidia.com> Link: https://lore.kernel.org/r/20220925172759.3573439-19-jic23@kernel.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
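A generic usage sketch of this macro pair; the driver and callback names are made up for illustration:

  #include <linux/platform_device.h>
  #include <linux/pm_runtime.h>

  static int foo_runtime_suspend(struct device *dev) { return 0; }
  static int foo_runtime_resume(struct device *dev)  { return 0; }

  static DEFINE_RUNTIME_DEV_PM_OPS(foo_pm_ops, foo_runtime_suspend,
                                   foo_runtime_resume, NULL);

  static struct platform_driver foo_driver = {
      .driver = {
          .name = "foo",
          /* pm_ptr() lets the ops be dropped entirely when !CONFIG_PM. */
          .pm = pm_ptr(&foo_pm_ops),
      },
  };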
2022-09-25hwmon: (w83627ehf) Switch to DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr()Jonathan Cameron
These newer PM macros allow the compiler to see what code it can remove if !CONFIG_PM_SLEEP. This allows the removal of __maybe_unused markings whilst achieving the same result. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20220925172759.3573439-18-jic23@kernel.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2022-09-25hwmon: (tmp108) Switch to DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr()Jonathan Cameron
These newer PM macros allow the compiler to see what code it can remove if !CONFIG_PM_SLEEP. This allows the removal of __maybe_unused markings whilst achieving the same result. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20220925172759.3573439-17-jic23@kernel.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2022-09-25hwmon: (tmp103) Switch to DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr()Jonathan Cameron
These newer PM macros allow the compiler to see what code it can remove if !CONFIG_PM_SLEEP. This allows the removal of __maybe_unused markings whilst achieving the same result. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20220925172759.3573439-16-jic23@kernel.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2022-09-25hwmon: (tmp102) Switch to DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr()Jonathan Cameron
These newer PM macros allow the compiler to see what code it can remove if !CONFIG_PM_SLEEP. This allows the removal of #ifdef guards whilst achieving the same result. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20220925172759.3573439-15-jic23@kernel.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
2022-09-25hwmon: (pwm-fan) Switch to DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr()Jonathan Cameron
These newer PM macros allow the compiler to see what code it can remove if !CONFIG_PM_SLEEP. This allows the removal of #ifdef guards whilst achieving the same result. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20220925172759.3573439-14-jic23@kernel.org Signed-off-by: Guenter Roeck <linux@roeck-us.net>
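Correspondingly, for the system-sleep-only conversions in this series the pattern looks roughly like this (again with made-up driver and callback names):

  #include <linux/i2c.h>
  #include <linux/pm.h>

  static int bar_suspend(struct device *dev) { return 0; }
  static int bar_resume(struct device *dev)  { return 0; }

  static DEFINE_SIMPLE_DEV_PM_OPS(bar_pm_ops, bar_suspend, bar_resume);

  static struct i2c_driver bar_driver = {
      .driver = {
          .name = "bar",
          /* pm_sleep_ptr() drops the ops when !CONFIG_PM_SLEEP. */
          .pm = pm_sleep_ptr(&bar_pm_ops),
      },
  };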