path: root/fs/gfs2
2021-01-19gfs2: Rename gfs2_{write => flush}_revokesAndreas Gruenbacher
Function gfs2_write_revokes doesn't actually write any revokes; instead, it adds revokes to the system transaction during a flush. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-19gfs2: Minor debugging improvementAndreas Gruenbacher
Split the assert in gfs2_trans_end into two parts. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-19gfs2: Some documentation updatesAndreas Gruenbacher
The calc_reserved description claims that buf_limit is 502 (on 4k filesystems), but it is actually 503. Fix / clarify the entire description. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
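As a quick sanity check of the 503 figure: with 4k blocks, one log descriptor header per page and one __be64 pointer per journaled block, the limit works out as below. The 72-byte descriptor size is an assumption used only for this illustration, not taken from the commit.

#include <stdio.h>

int main(void)
{
	unsigned int bsize = 4096;          /* filesystem block size */
	unsigned int descriptor_hdr = 72;   /* assumed size of the log descriptor header */
	unsigned int ptr_size = 8;          /* one __be64 per journaled block */

	unsigned int buf_limit = (bsize - descriptor_hdr) / ptr_size;
	printf("buf_limit = %u\n", buf_limit);  /* prints 503, not 502 */
	return 0;
}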
2021-01-19gfs2: Minor gfs2_write_revokes cleanupsAndreas Gruenbacher
Clean up the computations in gfs2_write_revokes (no change in functionality). Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-19gfs2: Simplify the buf_limit and databuf_limit definitionsAndreas Gruenbacher
The BUF_OFFSET and DATABUF_OFFSET definitions are only used in buf_limit and databuf_limit, respectively, and the rounding done in those definitions is immediately wiped out by dividing by the element size. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
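A minimal demonstration of why that rounding is redundant: rounding the available space down to a multiple of the element size before dividing by it never changes the result. The names and sizes here are illustrative, not the kernel definitions.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned int elem = 8;   /* e.g. sizeof(__be64) */

	for (unsigned int space = 0; space < 10000; space++) {
		unsigned int rounded = space - (space % elem); /* round down */
		assert(rounded / elem == space / elem);
	}
	printf("rounding before dividing never changes the limit\n");
	return 0;
}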
2021-01-19gfs2: Un-obfuscate function jdesc_find_iAndreas Gruenbacher
Clean up this function to show that it is trivial. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-18gfs2: Set GBF_FULL flags when reading resource groupAndreas Gruenbacher
When reading a resource group from disk or when receiving the resource group statistics from a Lock Value Block (LVB), set/clear the GBF_FULL flags of all bitmaps in that resource group according to whether or not the resource group is full. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-18gfs2: Don't clear GBF_FULL flags in rs_deltreeAndreas Gruenbacher
Removing a reservation doesn't make any actual space available, so don't clear the GBF_FULL flags in that case. Otherwise, we'll only spend more time scanning the bitmaps unnecessarily. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-18Revert "gfs2: Don't reject a supposedly full bitmap if we have blocks reserved"Andreas Gruenbacher
This reverts commit e79e0e1428188b24c3b57309ffa54a33c4ae40c4. It turns out that we're only setting the GBF_FULL flag of a bitmap if we've been scanning from the beginning of the bitmap until the end and we haven't found a single free block, and we're not skipping reservations in that process, either. This means that in gfs2_rbm_find, we can always skip bitmaps with the GBF_FULL flag set. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
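A simplified userspace model of the invariant this revert relies on: GBF_FULL is only set after a complete scan found no free block, so a search can skip such bitmaps unconditionally. The types below are stand-ins, not the real gfs2 structures.

#include <stdbool.h>
#include <stdio.h>

struct bitmap {
	bool full;            /* models the GBF_FULL flag */
	unsigned int free;    /* free blocks in this bitmap */
};

/* find the first bitmap that may contain a free block */
static int find_bitmap(struct bitmap *bits, int nbits)
{
	for (int i = 0; i < nbits; i++) {
		if (bits[i].full)
			continue;   /* safe: "full" means no free block at all */
		if (bits[i].free)
			return i;
	}
	return -1;
}

int main(void)
{
	struct bitmap bits[] = { { true, 0 }, { false, 12 }, { false, 0 } };
	printf("first usable bitmap: %d\n", find_bitmap(bits, 3));
	return 0;
}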
2021-01-18gfs2: Minor gfs2_inplace_reserve cleanupAndreas Gruenbacher
Clean up the reservation size computation logic in gfs2_inplace_reserve a little. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-18gfs2: Get rid of unnecessary variable in gfs2_alloc_blocksAndreas Gruenbacher
Variable ndata is only used inside "if (!dinode)", so it can be replaced entirely with *nblocks. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-18gfs2: Only use struct gfs2_rbm for bitmap manipulationsAndreas Gruenbacher
GFS2 uses struct gfs2_rbm to represent a filesystem block number as a bit position within a resource group. This representation is used in the bitmap manipulation code to prevent excessive conversions between block numbers and bit positions, but also in struct gfs2_blkreserv which is part of struct gfs2_inode, to mark the start of a reservation. In the inode, the bit position representation makes less sense: first, the start position is used as a block number about as often as a bit position; second, the bit position representation makes the code unnecessarily complicated and difficult to read. Therefore, change struct gfs2_blkreserv to represent the start of a reservation as a block number instead of a bit position. (This requires keeping track of the resource group in gfs2_blkreserv separately.) With that change, various things can be slightly simplified, and struct gfs2_rbm can be moved to rgrp.c. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2021-01-18gfs2: Turn gfs2_rbm_incr into gfs2_rbm_addAndreas Gruenbacher
Change gfs2_rbm_incr to advance an rbm by a given number of blocks. Use that in gfs2_reservation_check_and_update to save a gfs2_rbm_to_block -> gfs2_rbm_from_block round trip. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
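A rough userspace model of the new "add" semantics: a position is a (bitmap index, offset) pair, and adding a number of blocks may carry over into later bitmaps. Field names are illustrative, not the real struct gfs2_rbm.

#include <stdbool.h>
#include <stdio.h>

struct pos {
	unsigned int bii;     /* bitmap index within the resource group */
	unsigned int offset;  /* block offset within that bitmap */
};

/* returns false if the position would run past the last bitmap */
static bool pos_add(struct pos *p, const unsigned int *bi_blocks,
		    unsigned int nbitmaps, unsigned int blocks)
{
	unsigned int off = p->offset + blocks;

	while (off >= bi_blocks[p->bii]) {
		off -= bi_blocks[p->bii];
		if (++p->bii == nbitmaps)
			return false;
	}
	p->offset = off;
	return true;
}

int main(void)
{
	unsigned int bi_blocks[] = { 100, 100, 50 };
	struct pos p = { .bii = 0, .offset = 90 };

	if (pos_add(&p, bi_blocks, 3, 25))
		printf("now at bitmap %u, offset %u\n", p.bii, p.offset);
	return 0;
}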
2021-01-13gfs2: don't worry about I_DIRTY_TIME in gfs2_fsync()Eric Biggers
The I_DIRTY_TIME flag is primarily used within the VFS, and there's no reason for ->fsync() implementations to do anything with it. This is because when !datasync, the VFS will expire dirty timestamps before calling ->fsync(). (See vfs_fsync_range().) This turns I_DIRTY_TIME into I_DIRTY_SYNC. Therefore, change gfs2_fsync() to not check for I_DIRTY_TIME. Link: https://lore.kernel.org/r/20210112190253.64307-11-ebiggers@kernel.org Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jan Kara <jack@suse.cz>
2021-01-13fs: don't call ->dirty_inode for lazytime timestamp updatesEric Biggers
There is no need to call ->dirty_inode for lazytime timestamp updates (i.e. for __mark_inode_dirty(I_DIRTY_TIME)), since by the definition of lazytime, filesystems must ignore these updates. Filesystems only need to care about the updated timestamps when they expire. Therefore, only call ->dirty_inode when I_DIRTY_INODE is set. Based on a patch from Christoph Hellwig: https://lore.kernel.org/r/20200325122825.1086872-4-hch@lst.de Link: https://lore.kernel.org/r/20210112190253.64307-6-ebiggers@kernel.org Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jan Kara <jack@suse.cz>
2020-12-31gfs2: make gfs2_log_write_page staticBob Peterson
Function gfs2_log_write_page is only used in lops.c, so make it static. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-12-23gfs2: move freeze glock outside the make_fs_rw and _ro functionsBob Peterson
Before this patch, sister functions gfs2_make_fs_rw and gfs2_make_fs_ro locked (held) the freeze glock by calling gfs2_freeze_lock and gfs2_freeze_unlock. The problem is, not all the callers of gfs2_make_fs_ro should be doing this. The three callers of gfs2_make_fs_ro are: remount (gfs2_reconfigure), signal_our_withdraw, and unmount (gfs2_put_super). But when unmounting the file system we can get into the following circular lock dependency:

deactivate_super
  down_write(&s->s_umount); <-------------------------------------- s_umount
  deactivate_locked_super
    gfs2_kill_sb
      kill_block_super
        generic_shutdown_super
          gfs2_put_super
            gfs2_make_fs_ro
              gfs2_glock_nq_init sd_freeze_gl
                freeze_go_sync
                  if (freeze glock in SH)
                    freeze_super (vfs)
                      down_write(&sb->s_umount); <------- s_umount

This patch moves the hold of the freeze glock outside the two sister rw/ro functions to their callers, but it doesn't request the glock from gfs2_put_super, thus eliminating the circular dependency. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-12-23gfs2: Add common helper for holding and releasing the freeze glockBob Peterson
Many places in the gfs2 code queued and dequeued the freeze glock. Almost all of them acquire it in SHARED mode, and need to specify the same LM_FLAG_NOEXP and GL_EXACT flags. This patch adds common helper functions gfs2_freeze_lock and gfs2_freeze_unlock to make the code more readable, and to prepare for the next patch. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
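A sketch of what such helpers could look like, based only on the mode and flags named above (SHARED, LM_FLAG_NOEXP, GL_EXACT); the exact signatures are assumptions for illustration, not the patch itself.

static int gfs2_freeze_lock(struct gfs2_sbd *sdp, struct gfs2_holder *freeze_gh,
			    int caller_flags)
{
	int flags = LM_FLAG_NOEXP | GL_EXACT | caller_flags;

	/* all former callers passed LM_ST_SHARED plus these same flags */
	return gfs2_glock_nq_init(sdp->sd_freeze_gl, LM_ST_SHARED, flags,
				  freeze_gh);
}

static void gfs2_freeze_unlock(struct gfs2_holder *freeze_gh)
{
	if (gfs2_holder_initialized(freeze_gh))
		gfs2_glock_dq_uninit(freeze_gh);
}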
2020-12-03gfs2: in signal_our_withdraw wait for unfreeze of _this_ fs onlyBob Peterson
Function signal_our_withdraw needs to work on file systems that have been partially frozen. To do this, it called flush_workqueue(gfs2_freeze_wq). This is wrong because it waits for *ALL* file systems to be unfrozen, not just the one we're withdrawing from. It should only wait for the targeted file system to be unfrozen; otherwise it would wait until ALL file systems are thawed before signaling the withdraw. This patch changes signal_our_withdraw so it calls flush_work() to wait for only the target file system's freeze work to be completed. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
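The distinction, in sketch form (the sd_freeze_work field name is assumed here for illustration):

/* before: waits for the freeze work of ALL mounted gfs2 file systems */
flush_workqueue(gfs2_freeze_wq);

/* after: waits only for this file system's freeze work to finish */
flush_work(&sdp->sd_freeze_work);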
2020-12-03gfs2: Remove sb_start_write from gfs2_statfs_syncBob Peterson
Before this patch, function gfs2_statfs_sync called sb_start_write and sb_end_write. This is completely unnecessary because, aside from grabbing glocks, gfs2_statfs_sync does all its updates to statfs with a transaction: gfs2_trans_begin and _end. And transactions always do sb_start_intwrite in gfs2_trans_begin and sb_end_intwrite in gfs2_trans_end. This patch simply removes the call to sb_start_write. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-12-01gfs2: remove trailing semicolons from macro definitionsTom Rix
The macro use will already have a semicolon. Signed-off-by: Tom Rix <trix@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
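A small standalone example of why a trailing semicolon in a macro definition is harmful: the use site supplies its own semicolon, and the resulting empty statement can detach an else from its if.

#include <stdio.h>

#define LOG_BAD(x)  printf("%d\n", (x));   /* trailing ';' -- don't do this */
#define LOG_GOOD(x) printf("%d\n", (x))    /* caller supplies the ';' */

int main(void)
{
	int v = 1;

	/* With LOG_BAD the branch below would expand to two statements,
	 * leaving the 'else' without a matching 'if' (compile error). */
	if (v)
		LOG_GOOD(v);
	else
		LOG_GOOD(-v);

	return 0;
}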
2020-12-01Revert "GFS2: Prevent delete work from occurring on glocks used for create"Andreas Gruenbacher
Since commit a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work"), we're cancelling any pending delete work of an iopen glock before attaching a new inode to that glock in gfs2_create_inode. This means that delete_work_func can no longer be queued or running when attaching the iopen glock to the new inode, and we can revert commit a4923865ea07 ("GFS2: Prevent delete work from occurring on glocks used for create"), which tried to achieve the same but in a racy way. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-12-01gfs2: Make inode operations staticAndreas Gruenbacher
The inode operations are not used outside inode.c. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-12-01gfs2: Fix deadlock between gfs2_{create_inode,inode_lookup} and delete_work_funcAndreas Gruenbacher
In gfs2_create_inode and gfs2_inode_lookup, make sure to cancel any pending delete work before taking the inode glock. Otherwise, gfs2_cancel_delete_work may block waiting for delete_work_func to complete, and delete_work_func may block trying to acquire the inode glock in gfs2_inode_lookup. Reported-by: Alexander Aring <aahringo@redhat.com> Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-26gfs2: Upgrade shared glocks for atime updatesAndreas Gruenbacher
Commit 20f829999c38 ("gfs2: Rework read and page fault locking") lifted the glock lock taking from the low-level ->readpage and ->readahead address space operations to the higher-level ->read_iter file and ->fault vm operations. The glocks are still taken in LM_ST_SHARED mode only. On filesystems mounted without the noatime option, ->read_iter sometimes needs to update the atime as well, though. Right now, this leads to a failed locking mode assertion in gfs2_dirty_inode. Fix that by introducing a new update_time inode operation. There, if the glock is held non-exclusively, upgrade it to an exclusive lock. Reported-by: Alexander Aring <aahringo@redhat.com> Fixes: 20f829999c38 ("gfs2: Rework read and page fault locking") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-25gfs2: Don't freeze the file system during unmountBob Peterson
GFS2's freeze/thaw mechanism uses a special freeze glock to control its operation. It does this with a sync glock operation (glops.c) called freeze_go_sync. When the freeze glock is demoted (glock's do_xmote) the glops function causes the file system to be frozen. This is intended. However, GFS2's mount and unmount processes also hold the freeze glock to prevent other processes, perhaps on different cluster nodes, from mounting the frozen file system in read-write mode. Before this patch, there was no check in freeze_go_sync for whether a freeze is intended or whether the glock demote was caused by a normal unmount. So it would try to freeze the file system it was trying to unmount, which ended in a deadlock. This patch adds an additional check to freeze_go_sync so that demotes of the freeze glock are ignored if they come from the unmount process. Fixes: 20b329129009 ("gfs2: Fix regression in freeze_go_sync") Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-25gfs2: check for empty rgrp tree in gfs2_ri_updateBob Peterson
If gfs2 tries to mount a (corrupt) file system that has no resource groups it still tries to set preferences on the first one, which causes a kernel null pointer dereference. This patch adds a check to function gfs2_ri_update so this condition is detected and reported back as an error. Reported-by: syzbot+e3f23ce40269a4c9053a@syzkaller.appspotmail.com Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-24gfs2: set lockdep subclass for iopen glocksAlexander Aring
This patch introduces a new glops attribute to define the subclass of the glock lockref spinlock. This avoids the following lockdep warning, which occurs when we lock an inode lock while an iopen lock is held:

============================================
WARNING: possible recursive locking detected
5.10.0-rc3+ #4990 Not tainted
--------------------------------------------
kworker/0:1/12 is trying to acquire lock:
ffff9067d45672d8 (&gl->gl_lockref.lock){+.+.}-{3:3}, at: lockref_get+0x9/0x20

but task is already holding lock:
ffff9067da308588 (&gl->gl_lockref.lock){+.+.}-{3:3}, at: delete_work_func+0x164/0x260

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&gl->gl_lockref.lock);
  lock(&gl->gl_lockref.lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by kworker/0:1/12:
 #0: ffff9067c1bfdd38 ((wq_completion)delete_workqueue){+.+.}-{0:0}, at: process_one_work+0x1b7/0x540
 #1: ffffac594006be70 ((work_completion)(&(&gl->gl_delete)->work)){+.+.}-{0:0}, at: process_one_work+0x1b7/0x540
 #2: ffff9067da308588 (&gl->gl_lockref.lock){+.+.}-{3:3}, at: delete_work_func+0x164/0x260

stack backtrace:
CPU: 0 PID: 12 Comm: kworker/0:1 Not tainted 5.10.0-rc3+ #4990
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
Workqueue: delete_workqueue delete_work_func
Call Trace:
 dump_stack+0x8b/0xb0
 __lock_acquire.cold+0x19e/0x2e3
 lock_acquire+0x150/0x410
 ? lockref_get+0x9/0x20
 _raw_spin_lock+0x27/0x40
 ? lockref_get+0x9/0x20
 lockref_get+0x9/0x20
 delete_work_func+0x188/0x260
 process_one_work+0x237/0x540
 worker_thread+0x4d/0x3b0
 ? process_one_work+0x540/0x540
 kthread+0x127/0x140
 ? __kthread_bind_mask+0x60/0x60
 ret_from_fork+0x22/0x30

Suggested-by: Andreas Gruenbacher <agruenba@redhat.com> Signed-off-by: Alexander Aring <aahringo@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
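Conceptually, the fix boils down to something like the sketch below: give iopen glocks a distinct lockdep subclass for their lockref spinlock so the nesting is annotated rather than reported as recursion. The go_subclass field name is an assumption for illustration, not the literal patch.

/* in the iopen glock operations */
const struct gfs2_glock_operations gfs2_iopen_glops = {
	/* ... existing callbacks ... */
	.go_subclass = 1,               /* nest under the default (0) class */
};

/* when a glock is created */
lockdep_set_subclass(&gl->gl_lockref.lock, glops->go_subclass);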
2020-11-24gfs2: Fix deadlock dumping resource group glocksAlexander Aring
Commit 0e539ca1bbbe ("gfs2: Fix NULL pointer dereference in gfs2_rgrp_dump") introduced additional locking in gfs2_rgrp_go_dump, which is also used for dumping resource group glocks via debugfs. However, on that code path, the glock spin lock is already taken in dump_glock, and taking it again in gfs2_glock2rgrp leads to deadlock. This can be reproduced with: $ mkfs.gfs2 -O -p lock_nolock /dev/FOO $ mount /dev/FOO /mnt/foo $ touch /mnt/foo/bar $ cat /sys/kernel/debug/gfs2/FOO/glocks Fix that by not taking the glock spin lock inside the go_dump callback. Fixes: 0e539ca1bbbe ("gfs2: Fix NULL pointer dereference in gfs2_rgrp_dump") Signed-off-by: Alexander Aring <aahringo@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-18gfs2: Fix regression in freeze_go_syncBob Peterson
Patch 541656d3a513 ("gfs2: freeze should work on read-only mounts") changed the check for glock state in function freeze_go_sync() from "gl->gl_state == LM_ST_SHARED" to "gl->gl_req == LM_ST_EXCLUSIVE". That's wrong and it regressed gfs2's freeze/thaw mechanism because it caused only the freezing node (which requests the glock in EX) to queue freeze work. All nodes go through this go_sync code path during the freeze to drop their SHared hold on the freeze glock, allowing the freezing node to acquire it in EXclusive mode. But all the nodes must freeze access to the file system locally, so they ALL must queue freeze work. The freeze_work calls freeze_func, which makes a request to reacquire the freeze glock in SH, effectively blocking until the thaw from the EX holder. Once thawed, the freezing node drops its EX hold on the freeze glock, then the (blocked) freeze_func reacquires the freeze glock in SH again (on all nodes, including the freezer) so all nodes go back to a thawed state. This patch changes the check back to gl_state == LM_ST_SHARED like it was prior to 541656d3a513. Fixes: 541656d3a513 ("gfs2: freeze should work on read-only mounts") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-12gfs2: Fix case in which ail writes are done to jdata holesBob Peterson
Patch b2a846dbef4e ("gfs2: Ignore journal log writes for jdata holes") tried (unsuccessfully) to fix a case in which writes were done to jdata blocks, the blocks were sent to the ail list, and then a punch_hole or truncate operation caused the blocks to be freed. In other words, the ail items are for jdata holes. Before b2a846dbef4e, the jdata hole caused function gfs2_block_map to return -EIO, which was eventually interpreted as an IO error to the journal, followed by a withdraw. This patch changes function gfs2_get_block_noalloc, which is only used for jdata writes, so it returns -ENODATA rather than -EIO, and when -ENODATA is returned to gfs2_ail1_start_one, the error is ignored. We can safely ignore it because gfs2_ail1_start_one is only called when the jdata pages have already been written and truncated, so the ail1 content no longer applies. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
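In sketch form, the error-handling split described above (an illustration, not the literal patch):

/* gfs2_get_block_noalloc(): only used for jdata writeback */
if (!buffer_mapped(bh))
	return -ENODATA;   /* a punched/truncated jdata hole, not an I/O error */

/* gfs2_ail1_start_one(), after attempting writeback of the page */
if (ret == -ENODATA)
	ret = 0;           /* pages were already written and truncated */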
2020-11-12Revert "gfs2: Ignore journal log writes for jdata holes"Bob Peterson
This reverts commit b2a846dbef4ef54ef032f0f5ee188c609a0278a7. That commit changed the behavior of function gfs2_block_map to return -ENODATA in cases where a hole (IOMAP_HOLE) is encountered and create is false. While that fixed the intended problem for jdata, it also broke other callers of gfs2_block_map such as some jdata block reads. Before the patch, an encountered hole would be skipped and the buffer seen as unmapped by the caller. The patch changed the behavior to return -ENODATA, which is interpreted as an error by the caller. The -ENODATA return code should be restricted to the specific case where jdata holes are encountered during ail1 writes. That will be done in a later patch. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-12gfs2: fix possible reference leak in gfs2_check_blk_typeZhang Qilong
In the failure path of gfs2_check_blk_type, forgetting to call gfs2_glock_dq_uninit results in an rgd_gh reference leak. Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-11-03gfs2: Wake up when sd_glock_disposal becomes zeroAlexander Aring
Commit fc0e38dae645 ("GFS2: Fix glock deallocation race") fixed a sd_glock_disposal accounting bug by adding a missing atomic_dec statement, but it failed to wake up sd_glock_wait when that decrement causes sd_glock_disposal to reach zero. As a consequence, gfs2_gl_hash_clear can now run into a 10-minute timeout instead of being woken up. Add the missing wakeup. Fixes: fc0e38dae645 ("GFS2: Fix glock deallocation race") Cc: stable@vger.kernel.org # v2.6.39+ Signed-off-by: Alexander Aring <aahringo@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
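This is the standard decrement-and-wake pattern; a sketch using the names from the commit message (shown for illustration only):

/* glock disposal side: combine the decrement with the zero test */
if (atomic_dec_and_test(&sdp->sd_glock_disposal))
	wake_up(&sdp->sd_glock_wait);

/* gfs2_gl_hash_clear() side: no longer has to ride out the 10-minute timeout */
wait_event_timeout(sdp->sd_glock_wait,
		   atomic_read(&sdp->sd_glock_disposal) == 0,
		   600 * HZ);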
2020-11-02gfs2: Don't call cancel_delayed_work_sync from within delete work functionAndreas Gruenbacher
Right now, we can end up calling cancel_delayed_work_sync from within delete_work_func via gfs2_lookup_by_inum -> gfs2_inode_lookup -> gfs2_cancel_delete_work. When that happens, it will result in a deadlock. Instead, gfs2_inode_lookup should skip the call to gfs2_cancel_delete_work when called from delete_work_func (blktype == GFS2_BLKST_UNLINKED). Reported-by: Alexander Ahring Oder Aring <aahringo@redhat.com> Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-29gfs2: check for live vs. read-only file system in gfs2_fitrimBob Peterson
Before this patch, gfs2_fitrim was not properly checking for a "live" file system. If the file system had something to trim and the file system was read-only (or spectator) it would start the trim, but when it starts the transaction, gfs2_trans_begin returns -EROFS (read-only file system) and it errors out. However, if the file system was already trimmed so there's no work to do, it never called gfs2_trans_begin. That code is bypassed so it never returns the error. Instead, it returns a good return code with 0 work. All this makes for inconsistent behavior: The same fstrim command can return -EROFS in one case and 0 in another. This tripped up xfstests generic/537 which reports the error as: +fstrim with unrecovered metadata just ate your filesystem This patch adds a check for a "live" (iow, active journal, iow, RW) file system, and if not, returns the error properly. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
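A sketch of the kind of early check described above, so that both the "work to do" and "nothing to trim" cases fail consistently on read-only or spectator mounts; the flag name is an assumption for illustration:

/* at the top of gfs2_fitrim(): require a live (read-write) journal */
if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
	return -EROFS;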
2020-10-29gfs2: don't initialize statfs_change inodes in spectator modeBob Peterson
Before commit 97fd734ba17e, the local statfs_changeX inode was never initialized for spectator mounts. However, it still checks for spectator mounts when unmounting everything. There's no good reason to lookup the statfs_changeX files because spectators cannot perform recovery. It still, however, needs the master statfs file for statfs calls. This patch adds the check for spectator mounts to init_statfs. Fixes: 97fd734ba17e ("gfs2: lookup local statfs inodes prior to journal recovery") Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-29gfs2: Split up gfs2_meta_sync into inode and rgrp versionsBob Peterson
Before this patch, function gfs2_meta_sync called filemap_fdatawrite to write the address space for the metadata being synced. That's great for inodes, but resource groups all point to the same superblock-address space, sdp->sd_aspace. Each rgrp has its own range of blocks on which it should operate. That meant every time an rgrp's metadata was synced, it would write all of them instead of just the range. This patch eliminates function gfs2_meta_sync and tailors specific metasync functions for inodes and rgrps. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-29gfs2: init_journal's undo directive should also undo the statfs inodesBob Peterson
Before this patch, function init_journal's "undo" directive jumped to label fail_jinode_gh. But now that it does statfs initialization, it needs to jump to fail_statfs instead. Failure to do so means that a mount failure after init_journal has succeeded will neglect to release the proper statfs information, stranding the statfs_changeX inodes. This makes it impossible to free their glocks, and results in:

gfs2: fsid=sda.s: G: s:EX n:2/805f f:Dqob t:EX d:UN/603701000 a:0 v:0 r:4 m:200 p:1
gfs2: fsid=sda.s: H: s:EX f:H e:0 p:1397947 [(ended)] init_journal+0x548/0x890 [gfs2]
gfs2: fsid=sda.s: I: n:6/32863 t:8 f:0x00 d:0x00000201 s:24 p:0
gfs2: fsid=sda.s: G: s:SH n:5/805f f:Dqob t:SH d:UN/603712000 a:0 v:0 r:3 m:200 p:0
gfs2: fsid=sda.s: H: s:SH f:EH e:0 p:1397947 [(ended)] gfs2_inode_lookup+0x1fb/0x410 [gfs2]
VFS: Busy inodes after unmount of sda. Self-destruct in 5 seconds. Have a nice day...

The next time the file system is mounted, it then reuses the same glocks, which ends in a kernel NULL pointer dereference when trying to dump the reused glock. This patch makes the "undo" function of init_journal jump to fail_statfs so the statfs files are properly deconstructed upon failure. Fixes: 97fd734ba17e ("gfs2: lookup local statfs inodes prior to journal recovery") Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-29gfs2: Add missing truncate_inode_pages_final for sd_aspaceBob Peterson
Gfs2 creates an address space for its rgrps called sd_aspace, but it never called truncate_inode_pages_final on it. This confused vfs greatly which tried to reference the address space after gfs2 had freed the superblock that contained it. This patch adds a call to truncate_inode_pages_final for sd_aspace, thus avoiding the use-after-free. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-29gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-freeBob Peterson
Function gfs2_clear_rgrpd calls kfree(rgd->rd_bits) before calling return_all_reservations, but return_all_reservations still dereferences rgd->rd_bits in __rs_deltree. Fix that by moving the call to kfree below the call to return_all_reservations. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-23gfs2: Recover statfs info in journal headAbhi Das
Apply the outstanding statfs changes in the journal head to the master statfs file. Zero out the local statfs file for good measure. Previously, statfs updates would be read in from the local statfs inode and synced to the master statfs inode during recovery. We now use the statfs updates in the journal head to update the master statfs inode instead of reading in from the local statfs inode. To preserve backward compatibility with kernels that can't do this, we still need to keep the local statfs inode up to date by writing changes to it. At some point in the future, we can do away with the local statfs inodes altogether and keep the statfs changes solely in the journal. Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-23gfs2: lookup local statfs inodes prior to journal recoveryAbhi Das
We need to lookup the master statfs inode and the local statfs inodes earlier in the mount process (in init_journal) so journal recovery can use them when it attempts to recover the statfs info. We lookup all the local statfs inodes and store them in a linked list to allow a node to recover statfs info for other nodes in the cluster. Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-20gfs2: Add fields for statfs info in struct gfs2_log_header_hostAbhi Das
And read these in __get_log_header() from the log header. Also make gfs2_statfs_change_out() non-static so it can be used outside of super.c Signed-off-by: Abhi Das <adas@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-20gfs2: Ignore subsequent errors after withdraw in rgrp_go_syncAndreas Gruenbacher
Once a withdraw has occurred, ignore errors that are the consequence of the withdraw. Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-20gfs2: Eliminate gl_vmBob Peterson
The gfs2_glock structure has a gl_vm member, introduced in commit 7005c3e4ae428 ("GFS2: Use range based functions for rgrp sync/invalidation"), which stores the location of resource groups within their address space. This structure is in a union with iopen glock specific fields. It was introduced because at unmount time, the resource group objects were destroyed before flushing out any pending resource group glock work, and flushing out such work could require flushing / truncating the address space. Since commit b3422cacdd7e6 ("gfs2: Rework how rgrp buffer_heads are managed"), any pending resource group glock work is flushed out before destroying the resource group objects. So the resource group objects will now always exist in rgrp_go_sync and rgrp_go_inval, and we now simply compute the gl_vm values where needed instead of caching them. This also eliminates the union. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-20gfs2: Only access gl_delete for iopen glocksBob Peterson
Only initialize gl_delete for iopen glocks, but more importantly, only access it for iopen glocks in flush_delete_work: flush_delete_work is called for different types of glocks including rgrp glocks, and those use gl_vm which is in a union with gl_delete. Without this fix, we'll end up clobbering gl_vm, which results in general memory corruption. Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-20gfs2: Fix comments to glock_hash_walkBob Peterson
The comments before function glock_hash_walk had the wrong name and an extra parameter. This simply fixes the comments. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
2020-10-15gfs2: eliminate GLF_QUEUED flag in favor of list_empty(gl_holders)Bob Peterson
Before this patch, glock.c maintained a flag, GLF_QUEUED, which indicated when a glock had a holder queued. It was only checked for inode glocks, although set and cleared by all glocks, and it was only used to determine whether the glock should be held for the minimum hold time before releasing. The problem is that the flag is not accurate at all. If a process holds the glock, the flag is set. When the glock is dequeued, the flag is only cleared in cases when the state actually changed. So if the state doesn't change, the flag may still be set even when nothing is queued. This happens to iopen glocks often: they get held in SH, then the file is closed, but the glock remains in SH mode. We don't need a special flag to indicate this: we can simply tell whether the glock has any items on its holders queue. It's a waste of cpu time to maintain the flag. This patch eliminates the flag in favor of simply checking list_empty on the glock holders. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
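In sketch form (use_minimum_hold_time is a hypothetical helper standing in for the real hold-time logic; this is an illustration, not the literal patch):

/* before: flag set on enqueue, only sometimes cleared on dequeue */
if (test_bit(GLF_QUEUED, &gl->gl_flags))
	use_minimum_hold_time(gl);

/* after: the holders list itself says whether anything is queued */
if (!list_empty(&gl->gl_holders))
	use_minimum_hold_time(gl);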
2020-10-15gfs2: Ignore journal log writes for jdata holesBob Peterson
When flushing out its ail1 list, gfs2_write_jdata_page calls function __block_write_full_page passing in function gfs2_get_block_noalloc. But there was a problem when a process wrote to a jdata file, then truncated it or punched a hole, leaving references to the blocks within the new hole in its ail list, which are to be written to the journal log. In writing them to the journal, after calling gfs2_block_map, function gfs2_get_block_noalloc determined that the (hole-punched) block was not mapped, so it returned -EIO to generic_writepages, which passed it back to gfs2_ail1_start_one. This, in turn, performed a withdraw, assuming there was a real IO error writing to the journal. This might be a valid error when writing metadata to the journal, but for journaled data writes, it does not warrant a withdraw. This patch adds a check to function gfs2_block_map that makes an exception for journaled data writes that correspond to jdata holes: If the iomap get function returns a block type of IOMAP_HOLE, it instead returns -ENODATA which does not cause the withdraw. Other errors are returned as before. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>