author     | Qu Wenruo <wqu@suse.com>        | 2024-09-10 16:12:57 +0930
committer  | David Sterba <dsterba@suse.com> | 2024-11-11 14:34:12 +0100
commit     | a4ef54dbb576032ba31a646a5ffc8a26a83cb92c (patch)
tree       | 609ff8ee2586d94d29eadf8a19e99cddfb314f7f /fs/btrfs/inode.c
parent     | a8706d0271a8ef7d8d0916d78ab4821e9ccb7464 (diff)
btrfs: make extent_range_clear_dirty_for_io() handle sector size < page size cases
For btrfs with sector size < page size (e.g. 4K sector size, 64K page
size) and the sector perfect compression support enabled, the following
dirty range can lead to problems:
0          32K        64K        96K        128K
|          |//////////|//////////|        |/|
                                          124K
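
To make the geometry concrete, here is a minimal standalone sketch (plain
userspace C, not btrfs code; it only assumes the 64K page / 4K sector layout
of the example above) that maps the two dirty ranges onto page and sector
indexes:

  #include <stdio.h>
  #include <stdint.h>

  #define EX_PAGE_SIZE   (64 * 1024)  /* 64K pages, as in the example */
  #define EX_SECTOR_SIZE (4 * 1024)   /* 4K sectors, as in the example */

  static void show_range(const char *name, uint64_t start, uint64_t end)
  {
          /* end is exclusive, matching the [start, end) notation above */
          printf("%-13s pages %llu-%llu, sectors %llu-%llu\n", name,
                 (unsigned long long)(start / EX_PAGE_SIZE),
                 (unsigned long long)((end - 1) / EX_PAGE_SIZE),
                 (unsigned long long)(start / EX_SECTOR_SIZE),
                 (unsigned long long)((end - 1) / EX_SECTOR_SIZE));
  }

  int main(void)
  {
          show_range("[32K, 96K)",   32 * 1024, 96 * 1024);   /* spans pages 0 and 1 */
          show_range("[124K, 128K)", 124 * 1024, 128 * 1024); /* entirely inside page 1 */
          return 0;
  }

The output shows that [32K, 96K) straddles pages 0 and 1, while [124K, 128K)
lives entirely inside page 1 (the page starting at 64K), so the bookkeeping
for that last range is carried only by page 1.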
With the layout above, if we start writeback for that inode, the last dirty
range [124K, 128K) will not be submitted, and its reserved space will be
leaked:
- Start writeback for page 0
  We find that the range [32K, 96K) is suitable for compression, and queue
  it into a workqueue for delayed compression and submission.
- Compression happens for range [32K, 96K)
  extent_range_clear_dirty_for_io() is called, but it only does full page
  handling and does not consult the extra subpage dirty bitmaps.
  It therefore clears the page dirty flag for both page 0 and page 64K
  (see the sketch after this list).
- Writeback for the inode is done
  Because page 64K has its dirty flag cleared, it is not considered as a
  writeback target.
  This means the range [124K, 128K) is never submitted, and its reserved
  space is leaked.
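
The following standalone sketch (plain userspace C; the bitmap and helpers
are made up for illustration and are not the btrfs subpage implementation)
models page 64K's per-sector dirty bitmap and contrasts a whole-page dirty
clear with a clear clamped to the compressed range:

  #include <stdio.h>
  #include <stdint.h>

  #define EX_PAGE_SIZE   (64 * 1024)
  #define EX_SECTOR_SIZE (4 * 1024)

  /* Hypothetical model of the subpage dirty bitmap for the page at [64K, 128K). */
  static uint32_t page1_dirty;

  static void mark_dirty(uint64_t start, uint64_t end, uint64_t page_start)
  {
          for (uint64_t off = start; off < end; off += EX_SECTOR_SIZE)
                  if (off >= page_start && off < page_start + EX_PAGE_SIZE)
                          page1_dirty |= 1u << ((off - page_start) / EX_SECTOR_SIZE);
  }

  /* Buggy behaviour: drop the whole page's dirty state, like a full page clear. */
  static void clear_whole_page(void)
  {
          page1_dirty = 0;
  }

  /* Fixed behaviour: clear only sectors inside [start, end), clamped to the page. */
  static void clear_clamped(uint64_t start, uint64_t end, uint64_t page_start)
  {
          for (uint64_t off = start; off < end; off += EX_SECTOR_SIZE)
                  if (off >= page_start && off < page_start + EX_PAGE_SIZE)
                          page1_dirty &= ~(1u << ((off - page_start) / EX_SECTOR_SIZE));
  }

  static void setup(void)
  {
          /* Dirty ranges from the example: [32K, 96K) and [124K, 128K). */
          mark_dirty(32 * 1024, 96 * 1024, 64 * 1024);
          mark_dirty(124 * 1024, 128 * 1024, 64 * 1024);
  }

  int main(void)
  {
          setup();
          clear_whole_page();
          printf("whole-page clear: dirty bits 0x%04x ([124K, 128K) lost)\n",
                 (unsigned int)page1_dirty);

          setup();
          clear_clamped(32 * 1024, 96 * 1024, 64 * 1024);
          printf("clamped clear:    dirty bits 0x%04x ([124K, 128K) still dirty)\n",
                 (unsigned int)page1_dirty);
          return 0;
  }

With the whole-page clear, the bit for [124K, 128K) is lost and nothing ever
writes that range back; with the clamped clear, the bit survives and the
range is still picked up by writeback.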
Fix this problem by using the subpage helper to clear the dirty flag.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'fs/btrfs/inode.c')
-rw-r--r-- | fs/btrfs/inode.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 1e4ca1e7d2e5..686d39309410 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -902,7 +902,8 @@ static int extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 e
 			ret = PTR_ERR(folio);
 			continue;
 		}
-		folio_clear_dirty_for_io(folio);
+		btrfs_folio_clamp_clear_dirty(inode_to_fs_info(inode), folio, start,
+					      end + 1 - start);
 		folio_put(folio);
 	}
 	return ret;
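
For reference, a hedged sketch of the clamping idea behind the new call: the
full byte length end + 1 - start is passed for every folio, and a "clamp"
style helper is assumed to trim that range to the part covered by the folio
at hand. This is an illustrative model only, not the btrfs helper itself:

  #include <stdio.h>
  #include <stdint.h>

  /*
   * Illustrative model (not the btrfs helper): restrict a [start, start + len)
   * byte range to the part covered by one folio.
   */
  static void clamp_to_folio(uint64_t folio_start, uint32_t folio_size,
                             uint64_t *start, uint32_t *len)
  {
          uint64_t range_end = *start + *len;
          uint64_t folio_end = folio_start + folio_size;

          if (*start < folio_start)
                  *start = folio_start;
          if (range_end > folio_end)
                  range_end = folio_end;
          *len = range_end > *start ? (uint32_t)(range_end - *start) : 0;
  }

  int main(void)
  {
          /* The patched call passes the whole range, e.g. start = 32K, len = end + 1 - start = 64K. */
          uint64_t start = 32 * 1024;
          uint32_t len = 64 * 1024;

          /* Clamped against the folio covering [64K, 128K), only [64K, 96K) remains. */
          clamp_to_folio(64 * 1024, 64 * 1024, &start, &len);
          printf("clamped: start = %lluK, len = %uK\n",
                 (unsigned long long)(start / 1024), len / 1024);
          return 0;
  }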