author | Baokun Li <libaokun1@huawei.com> | 2025-07-14 21:03:21 +0800
committer | Theodore Ts'o <tytso@mit.edu> | 2025-07-25 09:14:17 -0400
commit | 7d345aa1fac4c2ec9584fbd6f389f2c2368671d5 (patch)
tree | d046c1bb8a6e1397316e06115f2d798fdd4ec2bc /rust/helpers/blk.c
parent | 1c320d8e92925bb7615f83a7b6e3f402a5c2ca63 (diff)
ext4: fix largest free orders lists corruption on mb_optimize_scan switch
grp->bb_largest_free_order is updated regardless of whether
mb_optimize_scan is enabled. This can leave grp->bb_largest_free_order
inconsistent with the s_mb_largest_free_orders list the group actually
sits on when mb_optimize_scan is repeatedly enabled and disabled via
remount.
For example, suppose mb_optimize_scan is initially enabled, the group's
largest free order is 3, and the group sits on s_mb_largest_free_orders[3].
Then mb_optimize_scan is disabled via remount and block allocations update
the largest free order to 2; because the order lists are not maintained
while the option is off, the group stays on s_mb_largest_free_orders[3].
Finally, mb_optimize_scan is re-enabled via remount and further block
allocations update the largest free order to 1. At this point, the group
is removed from s_mb_largest_free_orders[3] under the protection of
s_mb_largest_free_orders_locks[2], the lock chosen from the now-stale
order. This lock mismatch can lead to list corruption.
To fix this, whenever grp->bb_largest_free_order changes, we now always
attempt to remove the group from its old order list, and we insert the
group into the new order list only if mb_optimize_scan is enabled. This
keeps the lock used for removal consistent with the list the group is
actually on and ensures the data in the order lists remains reliable.
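A minimal sketch of this approach, assuming a simplified version of
mb_set_largest_free_order() and only the relevant mballoc fields; it
illustrates the scheme, not the literal patch:

```c
/*
 * Minimal sketch of the fixed update logic, assuming simplified
 * mballoc structures; not the literal upstream change.
 */
static void mb_set_largest_free_order(struct super_block *sb,
				      struct ext4_group_info *grp)
{
	struct ext4_sb_info *sbi = EXT4_SB(sb);
	int i, new_order = -1;

	/* Recompute the largest order with at least one free extent. */
	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) {
		if (grp->bb_counters[i] > 0) {
			new_order = i;
			break;
		}
	}
	if (new_order == grp->bb_largest_free_order)
		return;

	/*
	 * Always unlink from the old order list, regardless of
	 * mb_optimize_scan. Because this runs on every order change,
	 * bb_largest_free_order always matches the list the group is
	 * on, so the lock below protects the right list.
	 * list_del_init() on an already-unlinked node is a no-op.
	 */
	if (grp->bb_largest_free_order >= 0) {
		int old = grp->bb_largest_free_order;

		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
		list_del_init(&grp->bb_largest_free_order_node);
		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
	}

	grp->bb_largest_free_order = new_order;

	/*
	 * Insert into the new order list only when the optimized scan
	 * lists are in use.
	 */
	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && new_order >= 0) {
		write_lock(&sbi->s_mb_largest_free_orders_locks[new_order]);
		list_add_tail(&grp->bb_largest_free_order_node,
			      &sbi->s_mb_largest_free_orders[new_order]);
		write_unlock(&sbi->s_mb_largest_free_orders_locks[new_order]);
	}
}
```

With insertion conditional on mb_optimize_scan, a group is either on the
list matching bb_largest_free_order or on no list at all, so the lock
taken for removal always matches the list being modified.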
Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@vger.kernel.org
Suggested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Link: https://patch.msgid.link/20250714130327.1830534-12-libaokun1@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>