author     Qu Wenruo <wqu@suse.com>           2025-07-15 13:18:39 +0930
committer  David Sterba <dsterba@suse.com>    2025-07-22 01:13:03 +0200
commit     4e346baee95f4688b36c9bd95664774c3d105c3d
tree       354c4b724781ca46589bf53275bbab6830592828
parent     009b2056cb259c90426b3c57e5b145d1cd9fa9e2
btrfs: reloc: unconditionally invalidate the page cache for each cluster
Commit 9d9ea1e68a05 ("btrfs: subpage: fix relocation potentially
overwriting last page data") fixed a bug when relocating data block
groups for subpage cases.
However, with the incoming large folios for the data reloc inode, we
can hit the same situation where the block size equals the page size,
but the folio we get is still larger than a block.
In that case, the old subpage specific check is no longer reliable.
Here we have to enhance the handling by:
- Unconditionally invalidate the page cache for the current cluster

  Set @flush to true so that any dirty folios are properly written
  back first.

  This time, instead of dropping the whole page cache, only drop the
  range covered by the current cluster (see the sketch after this
  list).

  This causes a minor performance drop: for a large folio, the leading
  half will be read twice (read by the previous cluster, invalidated,
  then read again by the current cluster).

  However that is required to support large folios, and it gets rid of
  the rather tricky manual uptodate flag clearing for each block.
- Remove the special handling of writing back the whole page cache

  filemap_invalidate_inode() already handles the writeback, and since
  we're invalidating all pages in the range, we no longer need to
  manually clear the uptodate flags of the involved blocks.

  Thus there is no need to manually write back the whole page cache.
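
For illustration only, a minimal sketch of the invalidation step,
assuming the upstream filemap_invalidate_inode(inode, flush, start,
end) helper; the wrapper name and the cluster offset parameters are
hypothetical and not part of this patch:

    /*
     * Sketch, not the actual btrfs code: flush and drop the page cache
     * range covered by one cluster before its blocks are read back in.
     */
    #include <linux/fs.h>
    #include <linux/pagemap.h>

    static int drop_cluster_page_cache(struct inode *inode,
                                       u64 cluster_start, u64 cluster_end)
    {
            /*
             * flush == true: dirty folios in the range are written back
             * first, then every folio in the range is invalidated, so
             * the next read of the cluster pulls fresh data and no
             * per-block uptodate flag clearing is needed.
             */
            return filemap_invalidate_inode(inode, true,
                                            cluster_start, cluster_end);
    }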
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>