path: root/fs
2022-04-24  io_uring: minor refactoring for some tw handlers  (Pavel Begunkov)
Get rid of some useless local variables Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/7798327b684b7015f7e4300420142ddfcd317297.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: clean poll tw PF_EXITING handling  (Pavel Begunkov)
When we meet PF_EXITING in io_poll_check_events(), don't overcomplicate the code with io_poll_mark_cancelled() but just return -ECANCELED and the callers will deal with the rest. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f0cc981af82a5b193658f8f44397eeb3bf838b7b.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: optimise io_get_cqe()  (Pavel Begunkov)
io_get_cqe() is expensive because of a bunch of loads, masking, etc. However, most of the time we should have enough free entries in the CQ, so we can cache two pointers representing a range of contiguous CQE memory we can use. When the range is exhausted we'll go through a slower path to set up a new range. When there are no CQEs available, the pointers will naturally point to the same address. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/487eeef00f3146537b3d9c1a9cef2fc0b9a86f81.1649771823.git.asml.silence@gmail.com [axboe: santinel -> sentinel] Signed-off-by: Jens Axboe <axboe@kernel.dk>
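To illustrate the idea, here is a minimal user-space sketch of such a cached range of CQEs; the structure layout, names and sizes below are invented for illustration and are not the actual io_uring code:

struct cqe { unsigned long long user_data; int res; unsigned flags; };

#define CQ_ENTRIES 8u                     /* power of two, illustrative only */

struct ring {
    struct cqe cq[CQ_ENTRIES];
    unsigned head, tail;                  /* consumer / producer positions */
    struct cqe *cqe_cached;               /* next free slot in the cached range */
    struct cqe *cqe_sentinel;             /* end of the cached range */
};

/* Slow path: recompute a contiguous range of free CQE slots. */
struct cqe *get_cqe_slow(struct ring *r)
{
    unsigned free = CQ_ENTRIES - (r->tail - r->head);
    unsigned off = r->tail & (CQ_ENTRIES - 1);
    unsigned len = CQ_ENTRIES - off;      /* slots left before the array wraps */

    if (len > free)
        len = free;
    r->cqe_cached = &r->cq[off];
    r->cqe_sentinel = r->cqe_cached + len;
    if (!len)                             /* CQ full: cached == sentinel */
        return 0;
    r->tail++;
    return r->cqe_cached++;
}

/* Fast path: usually the cached range still has room, so getting a CQE is
 * just a pointer comparison, a tail bump and a pointer increment. */
struct cqe *ring_get_cqe(struct ring *r)
{
    if (r->cqe_cached < r->cqe_sentinel) {
        r->tail++;
        return r->cqe_cached++;
    }
    return get_cqe_slow(r);
}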
2022-04-24  io_uring: optimise submission left counting  (Pavel Begunkov)
Considering all the inlining, io_submit_sqe() is huge and usually ends up calling some other functions. We decrement @left in io_submit_sqes() just before calling io_submit_sqe() and use it later, after the call. Given how huge io_submit_sqe() is, there is not much hope that @left will be treated gracefully by compilers. Decrement it after the call instead: not only is it easier on register spilling and probably saves a stack write/read, but on x64 it also uses the CPU flags set by the dec instead of doing separate reads/writes and tests. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/807f9a276b54ee8ff4e42e2b78721484f1c71743.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: optimise submission loop invariant  (Pavel Begunkov)
Instead of keeping @submitted in io_submit_sqes(), which on each iteration requires a comparison with the initial number of SQEs, store the number of SQEs left to submit. We'll need nr only when we're done with SQE handling. Note: if we can't allocate a req for the first SQE we have always returned -EAGAIN to userspace; preserve this behaviour by looking into the cache in a slow path. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/c3b3df9aeae4c2f7a53fd8386385742e4e261e77.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: add helper to return req to cache list  (Pavel Begunkov)
Don't hand-code wq_stack_add_head() onto ->free_list, which is used for recycling io_kiocb; add a helper that does it for us. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f206f575486a8dd3d52f074ab37ed146b2d215b7.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
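A rough sketch of the pattern, with hypothetical names standing in for io_kiocb, ->free_list and the new helper:

struct wq_node { struct wq_node *next; };
struct wq_stack { struct wq_node *first; };

/* What wq_stack_add_head() boils down to: push onto an intrusive stack. */
void stack_add_head(struct wq_node *node, struct wq_stack *stack)
{
    node->next = stack->first;
    stack->first = node;
}

struct fake_req { struct wq_node cache_node; };
struct submit_state { struct wq_stack free_list; };

/* The helper added by the patch would be roughly this thin wrapper: one
 * obvious place that returns a request to the inline cache. */
void req_add_to_cache(struct fake_req *req, struct submit_state *state)
{
    stack_add_head(&req->cache_node, &state->free_list);
}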
2022-04-24  io_uring: helper for empty req cache checks  (Pavel Begunkov)
Add io_req_cache_empty(), which checks if there are requests in the inline req cache or not. It'll be needed in the future, but also nicely cleans up a few spots poking into ->free_list directly. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/b18662389f3fb483d0bd07906647f65f6037475a.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: inline io_flush_cached_reqs  (Pavel Begunkov)
io_flush_cached_reqs() isn't descriptive and has only one caller, inline it into __io_alloc_req_refill(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ec38abe65a883d9fe6b169793119ce86806655a4.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: shrink final link flush  (Pavel Begunkov)
Well-behaved users should not set IOSQE_IO_*LINK flags for the last request of a link. io_uring flushes collected links at the end of submission, but it's not the optimal way and so we don't care too much about it. Replace the io_queue_sqe() call with io_queue_sqe_fallback(), as the former is inlined and would generate a bunch of extra code. This will also help compilers with submission path inlining.

> size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  87265   13734       8  101007   18a8f ./fs/io_uring.o
> size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  87073   13734       8  100815   189cf ./fs/io_uring.o

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/01fb5e417ef49925d544a0b0bae30409845ed2b4.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: memcpy CQE from req  (Pavel Begunkov)
We can do CQE filling a bit more efficiently when req->cqe is fully filled by memcpy()'ing it to the userspace instead of doing it field by field. It's easier on register spilling, removes a couple of extra loads/stores and write combines two u32 memory writes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ee3f514ff28b1fe3347a8eca93a9d91647f2eaad.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
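A small sketch of the difference (illustrative types, not the kernel structures):

#include <string.h>

struct io_cqe { unsigned long long user_data; int res; unsigned flags; };

struct request {
    struct io_cqe cqe;            /* CQE data kept inside the request */
};

/* Field-by-field filling: three separate stores the compiler rarely merges. */
void fill_cqe_fields(struct io_cqe *dst, const struct request *req)
{
    dst->user_data = req->cqe.user_data;
    dst->res = req->cqe.res;
    dst->flags = req->cqe.flags;
}

/* Copying the whole struct lets the compiler emit wider, combined stores
 * (e.g. the two u32 fields written as one 64-bit store). */
void fill_cqe_memcpy(struct io_cqe *dst, const struct request *req)
{
    memcpy(dst, &req->cqe, sizeof(*dst));
}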
2022-04-24  io_uring: explicitly keep a CQE in io_kiocb  (Pavel Begunkov)
We already have req->{result,user_data,cflags}, which mimic struct io_uring_cqe and are intended to store CQE data. Combine them into a struct io_uring_cqe field. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/e1efe65d5005cd6a9ec3440767eb15a9fa9351cf.1649771823.git.asml.silence@gmail.com [axboe: add mirror cqe to cater to fd union] Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: rename io_sqe_file_register  (Pavel Begunkov)
Rename io_sqe_file_register(), so the name better reflects what the function is doing. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/d5091518883786969e244d2f0854a47bbdaa5061.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: deduplicate SCM accounting  (Pavel Begunkov)
Merge io_sqe_file_register() and io_sqe_file_register(). The only real difference left between them is from where we get an skb. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/dddda3039c71fcbec24b3465cbe8c7e7ae7bb0e8.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: don't pass around fixed index for scm  (Pavel Begunkov)
There is an old API nuisance where io_uring's SCM accounting functions traverse fixed file tables and so require them to be set up in advance, which leads to some implicit rules about how io_sqe_file_register() should be used. __io_sqe_files_scm() now works with only one file at a time, so pass a file directly and get rid of all the fixed table dereferencing inside. Clean up io_sqe_file_register() callers. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/fb32031d892e61a7748c70da7999725d5e798671.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: refactor __io_sqe_files_scm  (Pavel Begunkov)
__io_sqe_files_scm() is now called from only one place, passing a single file, so the nr argument can be killed and __io_sqe_files_scm() simplified. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/66b492bc66dc8356d45d64076bb31d677d11a7c9.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: uniform SCM accounting  (Pavel Begunkov)
Channel all SCM accounting through io_sqe_file_register(), so we do it uniformly for updates and initial registration and can kill duplicated code. Registration might be slightly slower in some cases, but since we now skip most of the SCM accounting it's not a problem. Moreover, it's nicer for empty set registration as we don't even try to allocate an skb for it anymore. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/6c9afbeb22812777d0c43e52353b63db5b87ed1e.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: don't scm-account for non af_unix sockets  (Pavel Begunkov)
io_uring deals with file reference loops by registering all fixed files in the SCM/GC infrastructure. However, only a small subset of all file types can keep long-term references to other files, and those that don't are not interesting for the garbage collector as they can't be in a reference loop. They can neither be directly recycled by GC nor affect loop searching. Let's skip io_uring SCM accounting for loop-less files, i.e. all but af_unix sockets, considerably improving fixed file update performance and greatly helping with memory footprint. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/9c44ecf6e89d69130a8c4360cce2183ffc5ddd6f.1649277098.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
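Conceptually the check boils down to the sketch below; the struct and helper names here are invented, while the real code simply tests whether the registered file is an af_unix socket:

#include <stdbool.h>

struct file_like {
    int type;                              /* stand-in for real file type info */
};
#define TYPE_AF_UNIX_SOCKET 1

/* Only files that can themselves hold references to other files (in practice,
 * af_unix sockets) can form a reference cycle, so only those need to be
 * registered with the SCM/GC machinery. */
bool file_needs_scm_accounting(const struct file_like *file)
{
    return file->type == TYPE_AF_UNIX_SOCKET;
}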
2022-04-24  io_uring: move finish_wait() outside of loop in cqring_wait()  (Jens Axboe)
We don't need to call this for every loop. This is particularly troublesome if we are task_work intensive, and get woken more often than we desire due to that. Just do it at the end, that's always safe as we initialize the waitqueue list head anyway. This can save a considerable amount of hammering on the waitqueue lock, which is also hot from the request completion side. Signed-off-by: Jens Axboe <axboe@kernel.dk>
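The shape of the change, sketched with hypothetical stand-ins for prepare_to_wait()/finish_wait(), each of which takes and releases the waitqueue lock internally:

/* Hypothetical helpers: register/unregister a wait entry under the
 * waitqueue lock. */
void waiter_register(void)   { /* lock wq, add entry if not queued, unlock */ }
void waiter_unregister(void) { /* lock wq, delete entry, unlock */ }
int  event_ready(void)       { return 1; }

/* Old shape: unregister on every iteration, hammering the waitqueue lock,
 * which is also hot on the completion side. */
void wait_old(void)
{
    do {
        waiter_register();
        /* ... sleep / run task_work ... */
        waiter_unregister();
    } while (!event_ready());
}

/* New shape: unregister once, after the loop; always safe because the wait
 * entry's list head is initialised up front. */
void wait_new(void)
{
    do {
        waiter_register();
        /* ... sleep / run task_work ... */
    } while (!event_ready());
    waiter_unregister();
}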
2022-04-24  io_uring: refactor io_req_add_compl_list()  (Pavel Begunkov)
A small refactoring for io_req_add_compl_list() deduplicating some code. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f0a5272b45efe4ffc41cb79b99784e39c699aade.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: silence io_for_each_link() warning  (Pavel Begunkov)
Some tooling keeps complaining about the self-assignment in io_for_each_link(). The code is correct, but let's work around it anyway. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f0de77b0b0f8309554ba6fba34327b7813bcc3ff.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: partially uninline io_put_task()  (Pavel Begunkov)
In most cases io_put_task() is called from the submitter task and goes through a highly optimised fast path, which has to be inlined. The other branch, though, is bulkier and we don't care about it as much because it implies atomics and other heavy calls. Extract it into a helper, which is expected not to be inlined.

[before] size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  89328   13646       8  102982   19246 ./fs/io_uring.o
[after] size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  89096   13646       8  102750   1915e ./fs/io_uring.o

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/dec213db0e0b8605132da81e0a0be687a4d140cb.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
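A minimal sketch of the fast-path/slow-path split, using made-up names and a plain counter in place of the real io_uring task accounting:

#include <stdatomic.h>
#include <stdbool.h>

struct task_refs {
    int cached_refs;               /* owned by the submitter, no atomics */
    atomic_int shared_refs;        /* touched from other contexts */
};

/* Bulky branch: atomics and potential freeing live here, kept out of line. */
__attribute__((noinline))
void put_task_remote(struct task_refs *t, int nr)
{
    if (atomic_fetch_sub(&t->shared_refs, nr) == nr) {
        /* ... last reference: notify / free ... */
    }
}

/* Fast path stays inlined: the submitter only adjusts a plain counter. */
static inline void put_task(struct task_refs *t, int nr, bool is_submitter)
{
    if (is_submitter)
        t->cached_refs += nr;
    else
        put_task_remote(t, nr);
}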
2022-04-24  io_uring: cleanup conditional submit locking  (Pavel Begunkov)
Refactor io_ring_submit_[un]lock(), make it accept issue_flags and remove manual IO_URING_F_UNLOCKED checks. It also allows us to place lockdep annotations inside instead of sprinkling them in a bunch of places. There is only one user that doesn't fit now, so hand code locking in __io_rsrc_put_work(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/e55c2c06767676a801252e8094c9ab09912487a4.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
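Roughly the shape of the helper after the change, sketched here with a pthread mutex and an invented ISSUE_F_UNLOCKED flag standing in for the real IO_URING_F_UNLOCKED:

#include <pthread.h>

#define ISSUE_F_UNLOCKED  (1u << 0)   /* caller does not already hold the lock */

struct ring_ctx { pthread_mutex_t uring_lock; };

/* Take the lock only when this issue context doesn't hold it yet; the
 * "must be held from here on" assertion can then live in one place. */
void ring_submit_lock(struct ring_ctx *ctx, unsigned issue_flags)
{
    if (issue_flags & ISSUE_F_UNLOCKED)
        pthread_mutex_lock(&ctx->uring_lock);
    /* lockdep_assert_held(&ctx->uring_lock) in the real kernel code */
}

void ring_submit_unlock(struct ring_ctx *ctx, unsigned issue_flags)
{
    if (issue_flags & ISSUE_F_UNLOCKED)
        pthread_mutex_unlock(&ctx->uring_lock);
}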
2022-04-24  io_uring: optimise mutex locking for submit+iopoll  (Pavel Begunkov)
Both submission and iopolling require holding uring_lock. With IOPOLL, users can do them together in a single syscall; however, it would still do 2 pairs of lock/unlock. Optimise this case by combining the locking into one lock/unlock pair, which is especially nice for low QD. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/034b6c41658648ad3ad3c9485ac8eb546f010bc4.1647957378.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: pre-calculate syscall iopolling decision  (Pavel Begunkov)
The syscall should only iopoll for events when it's an IOPOLL ring and not SQPOLL. Instead of checking both flags every time, we can save the result in the ring flags so it's easier to use. We don't care much about an extra if there; however, it would be inconvenient to copy-paste this chunk of checks in future patches. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/7fd2f8fc2606305aa06dd8c0ff8f76a66b39c383.1647957378.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: split off IOPOLL argument verification  (Pavel Begunkov)
IOPOLL doesn't use additional arguments like sigsets, but it still needs some basic verification, which is currently done by io_get_ext_arg(). This patch adds a separate function for the IOPOLL path, which is a bit simpler and doesn't do the extra work. This prepares us for further patches, which would otherwise have hurt inlining in the hot path. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/71b23fca412e3374b74be7711cfd42a3d9d5dfe0.1647957378.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: clean up io_queue_next()  (Pavel Begunkov)
Move the fast check out of io_queue_next(); it makes the req->flags checks in __io_submit_flush_completions() a bit clearer and grants us better control, e.g. we can remove a now unjustified unlikely() in __io_submit_flush_completions(). Also, we don't care about having this check in io_free_req() as that function is a slow path and io_req_find_next() handles it correctly. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/1f9e1cc80adbb11b37017d511df4a2c6141a3f08.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: move poll recycling later in compl flushing  (Pavel Begunkov)
There is a new (req->flags & REQ_F_POLLED) check in __io_submit_flush_completions() for poll recycling; however, io_free_batch_list() is a much better place for it. First, we prefer it after putting the last req ref, just to avoid potential problems in the future. Also, it'll enable the recycling for IOPOLL and will place it closer to all the other req->flags bit clean-ups. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/31dfe1dafda66ba3ce36b301884ec7e162c777d1.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: optimise io_free_batch_list  (Pavel Begunkov)
We do several req->flags checks in the fast path of io_free_batch_list(): one explicit check of REQ_F_REFCOUNT, and two more hidden in io_queue_next() and io_dismantle_req(). Moreover, there is an io_req_put_rsrc_locked() call in between, so there is no hope req->flags will be preserved in registers. All those flags are, if not a slow path, then definitely a slower path, so put them all under a single flags mask check and save several memory reloads and ifs. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/0fb493f73f2009aea395c570c2932fecaa4e1244.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
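The idea in miniature (the bit values and the mask below are invented for illustration):

#include <stdbool.h>

enum {
    REQ_F_REFCOUNT = 1u << 0,     /* illustrative bits, not the real values */
    REQ_F_LINK     = 1u << 1,
    REQ_F_INFLIGHT = 1u << 2,
};

#define SLOW_PATH_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_INFLIGHT)

struct req { unsigned flags; };

/* One load and one test decide whether any of the slower paths is needed,
 * instead of re-reading req->flags in several helpers. */
bool req_needs_slow_path(const struct req *req)
{
    return req->flags & SLOW_PATH_FLAGS;
}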
2022-04-24  io_uring: refactor io_req_find_next  (Pavel Begunkov)
Move the fast path from io_req_find_next() into callers. It prepares us for further changes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/10bd0e564472dde0c7f8d90ae317d05356cd565a.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: remove extra ifs around io_commit_cqring  (Pavel Begunkov)
Now io_commit_cqring() is simple and it tolerates well being called without a new CQE filled, so kill a bunch of not needed anymore guards. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/36aed692dff402bba00a444a63a9cd2e97a340ea.1647897811.git.asml.silence@gmail.com [axboe: fold in followup fix] Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-24  io_uring: small optimisation of tctx_task_work  (Pavel Begunkov)
There should be no completions stashed when we first get into tctx_task_work(), so move completion flushing checks a bit later after we had a chance to execute some task works. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/c6765c804f3c438591b9825ab9c43d22039073c4.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-23  Merge tag '5.18-rc3-ksmbd-fixes' of git://git.samba.org/ksmbd  (Linus Torvalds)
Pull ksmbd server fixes from Steve French:
 - cap maximum sector size reported to avoid mount problems
 - reference count fix
 - fix filename rename race

* tag '5.18-rc3-ksmbd-fixes' of git://git.samba.org/ksmbd:
  ksmbd: set fixed sector size to FS_SECTOR_SIZE_INFORMATION
  ksmbd: increment reference count of parent fp
  ksmbd: remove filename in ksmbd_file
2022-04-23  Merge tag 'io_uring-5.18-2022-04-22' of git://git.kernel.dk/linux-block  (Linus Torvalds)
Pull io_uring fixes from Jens Axboe: "Just two small fixes - one fixing a potential leak for the iovec for larger requests added in this cycle, and one fixing a theoretical leak with CQE_SKIP and IOPOLL"

* tag 'io_uring-5.18-2022-04-22' of git://git.kernel.dk/linux-block:
  io_uring: fix leaks on IOPOLL and CQE_SKIP
  io_uring: free iovec if file assignment fails
2022-04-22  Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4  (Linus Torvalds)
Pull ext4 fixes from Ted Ts'o: "Fix some syzbot-detected bugs, as well as other bugs found by I/O injection testing. Change ext4's fallocate to consistently drop set[ug]id bits when an fallocate operation might possibly change the user-visible contents of a file. Also, improve handling of potentially invalid values in the s_overhead_cluster superblock field to avoid ext4 returning a negative number of free blocks"

* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  jbd2: fix a potential race while discarding reserved buffers after an abort
  ext4: update the cached overhead value in the superblock
  ext4: force overhead calculation if the s_overhead_cluster makes no sense
  ext4: fix overhead calculation to account for the reserved gdt blocks
  ext4, doc: fix incorrect h_reserved size
  ext4: limit length to bitmap_maxbytes - blocksize in punch_hole
  ext4: fix use-after-free in ext4_search_dir
  ext4: fix bug_on in start_this_handle during umount filesystem
  ext4: fix symlink file size not match to file content
  ext4: fix fallocate to use file_modified to update permissions consistently
2022-04-22  Merge tag '5.18-rc3-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6  (Linus Torvalds)
Pull cifs fixes from Steve French: "Four fixes, two of them for stable:
 - fcollapse fix
 - reconnect lock fix
 - DFS oops fix
 - minor cleanup patch"

* tag '5.18-rc3-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  cifs: destage any unwritten data to the server before calling copychunk_write
  cifs: use correct lock type in cifs_reconnect()
  cifs: fix NULL ptr dereference in refresh_mounts()
  cifs: Use kzalloc instead of kmalloc/memset
2022-04-22  Merge tag 'fs.fixes.v5.18-rc4' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux Pull mount_setattr fix from Christian Brauner: "The recent cleanup in e257039f0fc7 ("mount_setattr(): clean the control flow and calling conventions") switched the mount attribute codepaths from do-while to for loops as they are more idiomatic when walking mounts. However, we did originally choose do-while constructs because if we request a mount or mount tree to be made read-only we need to hold writers in the following way: The mount attribute code will grab lock_mount_hash() and then call mnt_hold_writers() which will _unconditionally_ set MNT_WRITE_HOLD on the mount. Any callers that need write access have to call mnt_want_write(). They will immediately see that MNT_WRITE_HOLD is set on the mount and the caller will then either spin (on non-preempt-rt) or wait on lock_mount_hash() (on preempt-rt). The fact that MNT_WRITE_HOLD is set unconditionally means that once mnt_hold_writers() returns we need to _always_ pair it with mnt_unhold_writers() in both the failure and success paths. The do-while constructs did take care of this. But Al's change to a for loop in the failure path stops on the first mount we failed to change mount attributes _without_ going into the loop to call mnt_unhold_writers(). This in turn means that once we failed to make a mount read-only via mount_setattr() - i.e. there are already writers on that mount - we will block any writers indefinitely. Fix this by ensuring that the for loop always unsets MNT_WRITE_HOLD including the first mount we failed to change to read-only. Also sprinkle a few comments into the cleanup code to remind people about what is happening including myself. After all, I didn't catch it during review. This is only relevant on mainline and was reported by syzbot. Details about the syzbot reports are all in the commit message" * tag 'fs.fixes.v5.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux: fs: unset MNT_WRITE_HOLD on failure
2022-04-22  binfmt_flat: Remove shared library support  (Eric W. Biederman)
In a recent discussion[1] it was reported that the binfmt_flat library support was only ever used on m68k and even on m68k has not been used in a very long time. The structure of binfmt_flat is different from all of the other binfmt implementations because of this shared library support and it made life and code review more effort when I refactored the code in fs/exec.c. Since in practice the code is dead remove the binfmt_flat shared library support and make maintenance of the code easier. [1] https://lkml.kernel.org/r/81788b56-5b15-7308-38c7-c7f2502c4e15@linux-m68k.org Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARM Tested-by: Patrice Chotard <patrice.chotard@foss.st.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/87levzzts4.fsf_-_@email.froward.int.ebiederm.org
2022-04-21  mm, hugetlb: allow for "high" userspace addresses  (Christophe Leroy)
This is a fix for commit f6795053dac8 ("mm: mmap: Allow for "high" userspace addresses") for hugetlb. This patch adds support for "high" userspace addresses that are optionally supported on the system and have to be requested via a hint mechanism ("high" addr parameter to mmap). Architectures such as powerpc and x86 achieve this by making changes to their architectural versions of hugetlb_get_unmapped_area() function. However, arm64 uses the generic version of that function. So take into account arch_get_mmap_base() and arch_get_mmap_end() in hugetlb_get_unmapped_area(). To allow that, move those two macros out of mm/mmap.c into include/linux/sched/mm.h If these macros are not defined in architectural code then they default to (TASK_SIZE) and (base) so should not introduce any behavioural changes to architectures that do not define them. For the time being, only ARM64 is affected by this change. Catalin (ARM64) said "We should have fixed hugetlb_get_unmapped_area() as well when we added support for 52-bit VA. The reason for commit f6795053dac8 was to prevent normal mmap() from returning addresses above 48-bit by default as some user-space had hard assumptions about this. It's a slight ABI change if you do this for hugetlb_get_unmapped_area() but I doubt anyone would notice. It's more likely that the current behaviour would cause issues, so I'd rather have them consistent. Basically when arm64 gained support for 52-bit addresses we did not want user-space calling mmap() to suddenly get such high addresses, otherwise we could have inadvertently broken some programs (similar behaviour to x86 here). Hence we added commit f6795053dac8. But we missed hugetlbfs which could still get such high mmap() addresses. So in theory that's a potential regression that should have bee addressed at the same time as commit f6795053dac8 (and before arm64 enabled 52-bit addresses)" Link: https://lkml.kernel.org/r/ab847b6edb197bffdfe189e70fb4ac76bfe79e0d.1650033747.git.christophe.leroy@csgroup.eu Fixes: f6795053dac8 ("mm: mmap: Allow for "high" userspace addresses") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> [5.0.x] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
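A sketch of how the default fallbacks keep behaviour unchanged on architectures that don't define the hooks; the generic search function and its signature below are simplified stand-ins, not the kernel implementation:

#define TASK_SIZE (1UL << 47)                 /* illustrative value */

/* If the architecture doesn't provide these, they default to TASK_SIZE and
 * the plain base, so nothing changes for it. */
#ifndef arch_get_mmap_end
#define arch_get_mmap_end(addr)         (TASK_SIZE)
#endif
#ifndef arch_get_mmap_base
#define arch_get_mmap_base(addr, base)  (base)
#endif

struct mm_info { unsigned long mmap_base; };

/* The generic hugetlb search then bounds itself with the arch hooks, so a
 * "high" hint address can widen the window on arm64. */
unsigned long hugetlb_unmapped_area_sketch(struct mm_info *mm,
                                           unsigned long addr,
                                           unsigned long len)
{
    unsigned long high_limit = arch_get_mmap_end(addr);
    unsigned long low_limit = arch_get_mmap_base(addr, mm->mmap_base);

    if (len > high_limit - low_limit)
        return (unsigned long)-1;             /* -ENOMEM in the real code */
    /* ... walk the VMA tree between low_limit and high_limit ... */
    return addr;
}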
2022-04-21  f2fs: should not truncate blocks during roll-forward recovery  (Jaegeuk Kim)
If the file preallocated blocks and fsync'ed, we should not truncate them during roll-forward recovery which will recover i_size correctly back. Fixes: d4dd19ec1ea0 ("f2fs: do not expose unwritten blocks to user by DIO") Cc: <stable@vger.kernel.org> # 5.17+ Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2022-04-21  fs/proc: Introduce list_for_each_table_entry for proc sysctl  (Meng Tang)
Use the list_for_each_table_entry macro to simplify traversing a ctl_table. This makes the code neater and easier to understand. Suggested-by: Davidlohr Bueso <dave@stgolabs.net> Signed-off-by: Meng Tang <tangmeng@uniontech.com> [updated the sysctl_check_table() hunk due to some changes upstream] Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
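The macro is roughly of this shape; below is a small, self-contained example of walking a table terminated by an empty entry (the ctl_table fields are simplified here):

#include <stdio.h>

struct ctl_table { const char *procname; int data; };

/* Walk a ctl_table array until the empty terminating entry; the real macro
 * lives in fs/proc/proc_sysctl.c. */
#define list_for_each_table_entry(entry, table) \
    for ((entry) = (table); (entry)->procname; (entry)++)

int main(void)
{
    struct ctl_table table[] = {
        { "foo", 1 },
        { "bar", 2 },
        { 0 },                    /* terminator */
    };
    struct ctl_table *entry;

    list_for_each_table_entry(entry, table)
        printf("%s = %d\n", entry->procname, entry->data);
    return 0;
}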
2022-04-21  jbd2: fix a potential race while discarding reserved buffers after an abort  (Ye Bin)
we got issue as follows: [ 72.796117] EXT4-fs error (device sda): ext4_journal_check_start:83: comm fallocate: Detected aborted journal [ 72.826847] EXT4-fs (sda): Remounting filesystem read-only fallocate: fallocate failed: Read-only file system [ 74.791830] jbd2_journal_commit_transaction: jh=0xffff9cfefe725d90 bh=0x0000000000000000 end delay [ 74.793597] ------------[ cut here ]------------ [ 74.794203] kernel BUG at fs/jbd2/transaction.c:2063! [ 74.794886] invalid opcode: 0000 [#1] PREEMPT SMP PTI [ 74.795533] CPU: 4 PID: 2260 Comm: jbd2/sda-8 Not tainted 5.17.0-rc8-next-20220315-dirty #150 [ 74.798327] RIP: 0010:__jbd2_journal_unfile_buffer+0x3e/0x60 [ 74.801971] RSP: 0018:ffffa828c24a3cb8 EFLAGS: 00010202 [ 74.802694] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [ 74.803601] RDX: 0000000000000001 RSI: ffff9cfefe725d90 RDI: ffff9cfefe725d90 [ 74.804554] RBP: ffff9cfefe725d90 R08: 0000000000000000 R09: ffffa828c24a3b20 [ 74.805471] R10: 0000000000000001 R11: 0000000000000001 R12: ffff9cfefe725d90 [ 74.806385] R13: ffff9cfefe725d98 R14: 0000000000000000 R15: ffff9cfe833a4d00 [ 74.807301] FS: 0000000000000000(0000) GS:ffff9d01afb00000(0000) knlGS:0000000000000000 [ 74.808338] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 74.809084] CR2: 00007f2b81bf4000 CR3: 0000000100056000 CR4: 00000000000006e0 [ 74.810047] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 74.810981] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ 74.811897] Call Trace: [ 74.812241] <TASK> [ 74.812566] __jbd2_journal_refile_buffer+0x12f/0x180 [ 74.813246] jbd2_journal_refile_buffer+0x4c/0xa0 [ 74.813869] jbd2_journal_commit_transaction.cold+0xa1/0x148 [ 74.817550] kjournald2+0xf8/0x3e0 [ 74.819056] kthread+0x153/0x1c0 [ 74.819963] ret_from_fork+0x22/0x30 Above issue may happen as follows: write truncate kjournald2 generic_perform_write ext4_write_begin ext4_walk_page_buffers do_journal_get_write_access ->add BJ_Reserved list ext4_journalled_write_end ext4_walk_page_buffers write_end_fn ext4_handle_dirty_metadata ***************JBD2 ABORT************** jbd2_journal_dirty_metadata -> return -EROFS, jh in reserved_list jbd2_journal_commit_transaction while (commit_transaction->t_reserved_list) jh = commit_transaction->t_reserved_list; truncate_pagecache_range do_invalidatepage ext4_journalled_invalidatepage jbd2_journal_invalidatepage journal_unmap_buffer __dispose_buffer __jbd2_journal_unfile_buffer jbd2_journal_put_journal_head ->put last ref_count __journal_remove_journal_head bh->b_private = NULL; jh->b_bh = NULL; jbd2_journal_refile_buffer(journal, jh); bh = jh2bh(jh); ->bh is NULL, later will trigger null-ptr-deref journal_free_journal_head(jh); After commit 96f1e0974575, we no longer hold the j_state_lock while iterating over the list of reserved handles in jbd2_journal_commit_transaction(). This potentially allows the journal_head to be freed by journal_unmap_buffer while the commit codepath is also trying to free the BJ_Reserved buffers. Keeping j_state_lock held while trying extends hold time of the lock minimally, and solves this issue. Fixes: 96f1e0974575("jbd2: avoid long hold times of j_state_lock while committing a transaction") Signed-off-by: Ye Bin <yebin10@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20220317142137.1821590-1-yebin10@huawei.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-04-21  fs: unset MNT_WRITE_HOLD on failure  (Christian Brauner)
After mnt_hold_writers() has been called we will always have set MNT_WRITE_HOLD and consequently we always need to pair mnt_hold_writers() with mnt_unhold_writers(). After the recent cleanup in [1] where Al switched from a do-while to a for loop the cleanup currently fails to unset MNT_WRITE_HOLD for the first mount that was changed. Fix this and make sure that the first mount will be cleaned up and add some comments to make it more obvious. Link: https://lore.kernel.org/lkml/0000000000007cc21d05dd0432b8@google.com Link: https://lore.kernel.org/lkml/00000000000080e10e05dd043247@google.com Link: https://lore.kernel.org/r/20220420131925.2464685-1-brauner@kernel.org Fixes: e257039f0fc7 ("mount_setattr(): clean the control flow and calling conventions") [1] Cc: Hillf Danton <hdanton@sina.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Reported-by: syzbot+10a16d1c43580983f6a2@syzkaller.appspotmail.com Reported-by: syzbot+306090cfa3294f0bbfb3@syzkaller.appspotmail.com Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Christian Brauner (Microsoft) <brauner@kernel.org>
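A simplified model of the fix: the cleanup loop must also clear MNT_WRITE_HOLD on the mount that failed, because mnt_hold_writers() sets the flag unconditionally before it reports failure. Names and the list handling below are simplified, not the real namespace code:

#include <stdbool.h>
#include <stddef.h>

#define MNT_WRITE_HOLD  (1u << 0)

struct mnt { unsigned flags; int writers; struct mnt *next; };

bool hold_writers(struct mnt *m)
{
    m->flags |= MNT_WRITE_HOLD;       /* set unconditionally ... */
    return m->writers == 0;           /* ... even when we then report failure */
}

void unhold_writers(struct mnt *m)
{
    m->flags &= ~MNT_WRITE_HOLD;
}

/* Try to make every mount in the list read-only; on failure, undo
 * MNT_WRITE_HOLD on every mount visited *including* the one that failed. */
int tree_make_readonly(struct mnt *head)
{
    struct mnt *m, *failed = NULL;

    for (m = head; m; m = m->next) {
        if (!hold_writers(m)) {
            failed = m;
            break;
        }
    }
    for (m = head; m; m = m->next) {
        unhold_writers(m);
        if (m == failed)              /* the fix: don't stop *before* it */
            break;
    }
    return failed ? -1 : 0;
}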
2022-04-21  btrfs: zoned: use dedicated lock for data relocation  (Naohiro Aota)
Currently, we use btrfs_inode_{lock,unlock}() to grant an exclusive writeback of the relocation data inode in btrfs_zoned_data_reloc_{lock,unlock}(). However, that can cause a deadlock in the following path. Thread A takes btrfs_inode_lock() and waits for metadata reservation by e.g, waiting for writeback: prealloc_file_extent_cluster() - btrfs_inode_lock(&inode->vfs_inode, 0); - btrfs_prealloc_file_range() ... - btrfs_replace_file_extents() - btrfs_start_transaction ... - btrfs_reserve_metadata_bytes() Thread B (e.g, doing a writeback work) needs to wait for the inode lock to continue writeback process: do_writepages - btrfs_writepages - extent_writpages - btrfs_zoned_data_reloc_lock(BTRFS_I(inode)); - btrfs_inode_lock() The deadlock is caused by relying on the vfs_inode's lock. By using it, we introduced unnecessary exclusion of writeback and btrfs_prealloc_file_range(). Also, the lock at this point is useless as we don't have any dirty pages in the inode yet. Introduce fs_info->zoned_data_reloc_io_lock and use it for the exclusive writeback. Fixes: 35156d852762 ("btrfs: zoned: only allow one process to add pages to a relocation inode") CC: stable@vger.kernel.org # 5.16.x: 869f4cdc73f9: btrfs: zoned: encapsulate inode locking for zoned relocation CC: stable@vger.kernel.org # 5.16.x CC: stable@vger.kernel.org # 5.17 Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-04-21  btrfs: fix assertion failure during scrub due to block group reallocation  (Filipe Manana)
During a scrub, or device replace, we can race with block group removal and allocation and trigger the following assertion failure: [7526.385524] assertion failed: cache->start == chunk_offset, in fs/btrfs/scrub.c:3817 [7526.387351] ------------[ cut here ]------------ [7526.387373] kernel BUG at fs/btrfs/ctree.h:3599! [7526.388001] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI [7526.388970] CPU: 2 PID: 1158150 Comm: btrfs Not tainted 5.17.0-rc8-btrfs-next-114 #4 [7526.390279] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [7526.392430] RIP: 0010:assertfail.constprop.0+0x18/0x1a [btrfs] [7526.393520] Code: f3 48 c7 c7 20 (...) [7526.396926] RSP: 0018:ffffb9154176bc40 EFLAGS: 00010246 [7526.397690] RAX: 0000000000000048 RBX: ffffa0db8a910000 RCX: 0000000000000000 [7526.398732] RDX: 0000000000000000 RSI: ffffffff9d7239a2 RDI: 00000000ffffffff [7526.399766] RBP: ffffa0db8a911e10 R08: ffffffffa71a3ca0 R09: 0000000000000001 [7526.400793] R10: 0000000000000001 R11: 0000000000000000 R12: ffffa0db4b170800 [7526.401839] R13: 00000003494b0000 R14: ffffa0db7c55b488 R15: ffffa0db8b19a000 [7526.402874] FS: 00007f6c99c40640(0000) GS:ffffa0de6d200000(0000) knlGS:0000000000000000 [7526.404038] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [7526.405040] CR2: 00007f31b0882160 CR3: 000000014b38c004 CR4: 0000000000370ee0 [7526.406112] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [7526.407148] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [7526.408169] Call Trace: [7526.408529] <TASK> [7526.408839] scrub_enumerate_chunks.cold+0x11/0x79 [btrfs] [7526.409690] ? do_wait_intr_irq+0xb0/0xb0 [7526.410276] btrfs_scrub_dev+0x226/0x620 [btrfs] [7526.410995] ? preempt_count_add+0x49/0xa0 [7526.411592] btrfs_ioctl+0x1ab5/0x36d0 [btrfs] [7526.412278] ? __fget_files+0xc9/0x1b0 [7526.412825] ? kvm_sched_clock_read+0x14/0x40 [7526.413459] ? lock_release+0x155/0x4a0 [7526.414022] ? __x64_sys_ioctl+0x83/0xb0 [7526.414601] __x64_sys_ioctl+0x83/0xb0 [7526.415150] do_syscall_64+0x3b/0xc0 [7526.415675] entry_SYSCALL_64_after_hwframe+0x44/0xae [7526.416408] RIP: 0033:0x7f6c99d34397 [7526.416931] Code: 3c 1c e8 1c ff (...) [7526.419641] RSP: 002b:00007f6c99c3fca8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [7526.420735] RAX: ffffffffffffffda RBX: 00005624e1e007b0 RCX: 00007f6c99d34397 [7526.421779] RDX: 00005624e1e007b0 RSI: 00000000c400941b RDI: 0000000000000003 [7526.422820] RBP: 0000000000000000 R08: 00007f6c99c40640 R09: 0000000000000000 [7526.423906] R10: 00007f6c99c40640 R11: 0000000000000246 R12: 00007fff746755de [7526.424924] R13: 00007fff746755df R14: 0000000000000000 R15: 00007f6c99c40640 [7526.425950] </TASK> That assertion is relatively new, introduced with commit d04fbe19aefd2 ("btrfs: scrub: cleanup the argument list of scrub_chunk()"). The block group we get at scrub_enumerate_chunks() can actually have a start address that is smaller then the chunk offset we extracted from a device extent item we got from the commit root of the device tree. This is very rare, but it can happen due to a race with block group removal and allocation. For example, the following steps show how this can happen: 1) We are at transaction T, and we have the following blocks groups, sorted by their logical start address: [ bg A, start address A, length 1G (data) ] [ bg B, start address B, length 1G (data) ] (...) 
[ bg W, start address W, length 1G (data) ] --> logical address space hole of 256M, there used to be a 256M metadata block group here [ bg Y, start address Y, length 256M (metadata) ] --> Y matches W's end offset + 256M Block group Y is the block group with the highest logical address in the whole filesystem; 2) Block group Y is deleted and its extent mapping is removed by the call to remove_extent_mapping() made from btrfs_remove_block_group(). So after this point, the last element of the mapping red black tree, its rightmost node, is the mapping for block group W; 3) While still at transaction T, a new data block group is allocated, with a length of 1G. When creating the block group we do a call to find_next_chunk(), which returns the logical start address for the new block group. This calls returns X, which corresponds to the end offset of the last block group, the rightmost node in the mapping red black tree (fs_info->mapping_tree), plus one. So we get a new block group that starts at logical address X and with a length of 1G. It spans over the whole logical range of the old block group Y, that was previously removed in the same transaction. However the device extent allocated to block group X is not the same device extent that was used by block group Y, and it also does not overlap that extent, which must be always the case because we allocate extents by searching through the commit root of the device tree (otherwise it could corrupt a filesystem after a power failure or an unclean shutdown in general), so the extent allocator is behaving as expected; 4) We have a task running scrub, currently at scrub_enumerate_chunks(). There it searches for device extent items in the device tree, using its commit root. It finds a device extent item that was used by block group Y, and it extracts the value Y from that item into the local variable 'chunk_offset', using btrfs_dev_extent_chunk_offset(); It then calls btrfs_lookup_block_group() to find block group for the logical address Y - since there's currently no block group that starts at that logical address, it returns block group X, because its range contains Y. This results in triggering the assertion: ASSERT(cache->start == chunk_offset); right before calling scrub_chunk(), as cache->start is X and chunk_offset is Y. This is more likely to happen of filesystems not larger than 50G, because for these filesystems we use a 256M size for metadata block groups and a 1G size for data block groups, while for filesystems larger than 50G, we use a 1G size for both data and metadata block groups (except for zoned filesystems). It could also happen on any filesystem size due to the fact that system block groups are always smaller (32M) than both data and metadata block groups, but these are not frequently deleted, so much less likely to trigger the race. So make scrub skip any block group with a start offset that is less than the value we expect, as that means it's a new block group that was created in the current transaction. It's pointless to continue and try to scrub its extents, because scrub searches for extents using the commit root, so it won't find any. For a device replace, skip it as well for the same reasons, and we don't need to worry about the possibility of extents of the new block group not being to the new device, because we have the write duplication setup done through btrfs_map_block(). 
Fixes: d04fbe19aefd ("btrfs: scrub: cleanup the argument list of scrub_chunk()") CC: stable@vger.kernel.org # 5.17 Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-04-21  Merge tag 'large-extent-counters-v9' of https://github.com/chandanr/linux ↵  (Dave Chinner)
into xfs-5.19-for-next xfs: Large extent counters The commit xfs: fix inode fork extent count overflow (3f8a4f1d876d3e3e49e50b0396eaffcc4ba71b08) mentions that 10 billion data fork extents should be possible to create. However the corresponding on-disk field has a signed 32-bit type. Hence this patchset extends the per-inode data fork extent counter to 64 bits (out of which 48 bits are used to store the extent count). Also, XFS has an attribute fork extent counter which is 16 bits wide. A workload that, 1. Creates 1 million 255-byte sized xattrs, 2. Deletes 50% of these xattrs in an alternating manner, 3. Tries to insert 400,000 new 255-byte sized xattrs causes the xattr extent counter to overflow. Dave tells me that there are instances where a single file has more than 100 million hardlinks. With parent pointers being stored in xattrs, we will overflow the signed 16-bits wide attribute extent counter when large number of hardlinks are created. Hence this patchset extends the on-disk field to 32-bits. The following changes are made to accomplish this, 1. A 64-bit inode field is carved out of existing di_pad and di_flushiter fields to hold the 64-bit data fork extent counter. 2. The existing 32-bit inode data fork extent counter will be used to hold the attribute fork extent counter. 3. A new incompat superblock flag to prevent older kernels from mounting the filesystem. Signed-off-by: Chandan Babu R <chandan.babu@oracle.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
2022-04-21  Merge branch 'guilt/xlog-write-rework' into xfs-5.19-for-next  (Dave Chinner)
2022-04-21  Merge branch 'guilt/xfs-unsigned-flags-5.18' into xfs-5.19-for-next  (Dave Chinner)
2022-04-20  cifs: destage any unwritten data to the server before calling copychunk_write  (Ronnie Sahlberg)
because the copychunk_write might cover a region of the file that has not yet been sent to the server and would thus fail. A simple way to reproduce this is:

truncate -s 0 /mnt/testfile; strace -f -o x -ttT xfs_io -i -f -c 'pwrite 0k 128k' -c 'fcollapse 16k 24k' /mnt/testfile

The issue is that the 'pwrite 0k 128k' becomes rearranged on the wire with the 'fcollapse 16k 24k' due to write-back caching. fcollapse is implemented in cifs.ko as an SMB2 IOCTL(COPYCHUNK_WRITE) call and it will fail server-side, since the file is still 0 bytes in size on the server until the writes have been destaged. To avoid this we must ensure that we destage any unwritten data to the server before calling COPYCHUNK_WRITE. Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1997373 Reported-by: Xiaoli Feng <xifeng@redhat.com> Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
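In outline, the fix amounts to flushing cached writes before issuing the server-side copy. A schematic sketch with invented helpers (these are not the cifs.ko functions):

#include <stdbool.h>

struct inode_like { bool has_dirty_pages; };

/* Stand-in for pushing cached, unwritten data out to the server
 * (what filemap write-back would do for the real inode). */
int writeback_dirty_pages(struct inode_like *inode)
{
    inode->has_dirty_pages = false;
    return 0;
}

/* Stand-in for the SMB2 IOCTL(COPYCHUNK_WRITE) request: the server can only
 * copy data it has already received. */
int server_side_copychunk(struct inode_like *src, struct inode_like *dst)
{
    (void)src; (void)dst;
    return 0;
}

/* Destage first, then ask the server to copy. */
int copychunk_range(struct inode_like *src, struct inode_like *dst)
{
    int rc = writeback_dirty_pages(src);

    if (rc)
        return rc;
    return server_side_copychunk(src, dst);
}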
2022-04-20  cifs: use correct lock type in cifs_reconnect()  (Paulo Alcantara)
TCP_Server_Info::origin_fullpath and TCP_Server_Info::leaf_fullpath are protected by refpath_lock mutex and not cifs_tcp_ses_lock spinlock. Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Cc: stable@vger.kernel.org Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2022-04-20  cifs: fix NULL ptr dereference in refresh_mounts()  (Paulo Alcantara)
Either mount(2) or automount might not have server->origin_fullpath set yet while refresh_cache_worker() is attempting to refresh DFS referrals. Add missing NULL check and locking around it. This fixes bellow crash: [ 1070.276835] general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN NOPTI [ 1070.277676] KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] [ 1070.278219] CPU: 1 PID: 8506 Comm: kworker/u8:1 Not tainted 5.18.0-rc3 #10 [ 1070.278701] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.15.0-0-g2dd4b9b-rebuilt.opensuse.org 04/01/2014 [ 1070.279495] Workqueue: cifs-dfscache refresh_cache_worker [cifs] [ 1070.280044] RIP: 0010:strcasecmp+0x34/0x150 [ 1070.280359] Code: 00 00 00 fc ff df 41 54 55 48 89 fd 53 48 83 ec 10 eb 03 4c 89 fe 48 89 ef 48 83 c5 01 48 89 f8 48 89 fa 48 c1 e8 03 83 e2 07 <42> 0f b6 04 28 38 d0 7f 08 84 c0 0f 85 bc 00 00 00 0f b6 45 ff 44 [ 1070.281729] RSP: 0018:ffffc90008367958 EFLAGS: 00010246 [ 1070.282114] RAX: 0000000000000000 RBX: dffffc0000000000 RCX: 0000000000000000 [ 1070.282691] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000 [ 1070.283273] RBP: 0000000000000001 R08: 0000000000000000 R09: ffffffff873eda27 [ 1070.283857] R10: ffffc900083679a0 R11: 0000000000000001 R12: ffff88812624c000 [ 1070.284436] R13: dffffc0000000000 R14: ffff88810e6e9a88 R15: ffff888119bb9000 [ 1070.284990] FS: 0000000000000000(0000) GS:ffff888151200000(0000) knlGS:0000000000000000 [ 1070.285625] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1070.286100] CR2: 0000561a4d922418 CR3: 000000010aecc000 CR4: 0000000000350ee0 [ 1070.286683] Call Trace: [ 1070.286890] <TASK> [ 1070.287070] refresh_cache_worker+0x895/0xd20 [cifs] [ 1070.287475] ? __refresh_tcon.isra.0+0xfb0/0xfb0 [cifs] [ 1070.287905] ? __lock_acquire+0xcd1/0x6960 [ 1070.288247] ? is_dynamic_key+0x1a0/0x1a0 [ 1070.288591] ? lockdep_hardirqs_on_prepare+0x410/0x410 [ 1070.289012] ? lock_downgrade+0x6f0/0x6f0 [ 1070.289318] process_one_work+0x7bd/0x12d0 [ 1070.289637] ? worker_thread+0x160/0xec0 [ 1070.289970] ? pwq_dec_nr_in_flight+0x230/0x230 [ 1070.290318] ? _raw_spin_lock_irq+0x5e/0x90 [ 1070.290619] worker_thread+0x5ac/0xec0 [ 1070.290891] ? process_one_work+0x12d0/0x12d0 [ 1070.291199] kthread+0x2a5/0x350 [ 1070.291430] ? 
kthread_complete_and_exit+0x20/0x20 [ 1070.291770] ret_from_fork+0x22/0x30 [ 1070.292050] </TASK> [ 1070.292223] Modules linked in: bpfilter cifs cifs_arc4 cifs_md4 [ 1070.292765] ---[ end trace 0000000000000000 ]--- [ 1070.293108] RIP: 0010:strcasecmp+0x34/0x150 [ 1070.293471] Code: 00 00 00 fc ff df 41 54 55 48 89 fd 53 48 83 ec 10 eb 03 4c 89 fe 48 89 ef 48 83 c5 01 48 89 f8 48 89 fa 48 c1 e8 03 83 e2 07 <42> 0f b6 04 28 38 d0 7f 08 84 c0 0f 85 bc 00 00 00 0f b6 45 ff 44 [ 1070.297718] RSP: 0018:ffffc90008367958 EFLAGS: 00010246 [ 1070.298622] RAX: 0000000000000000 RBX: dffffc0000000000 RCX: 0000000000000000 [ 1070.299428] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000 [ 1070.300296] RBP: 0000000000000001 R08: 0000000000000000 R09: ffffffff873eda27 [ 1070.301204] R10: ffffc900083679a0 R11: 0000000000000001 R12: ffff88812624c000 [ 1070.301932] R13: dffffc0000000000 R14: ffff88810e6e9a88 R15: ffff888119bb9000 [ 1070.302645] FS: 0000000000000000(0000) GS:ffff888151200000(0000) knlGS:0000000000000000 [ 1070.303462] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 1070.304131] CR2: 0000561a4d922418 CR3: 000000010aecc000 CR4: 0000000000350ee0 [ 1070.305004] Kernel panic - not syncing: Fatal exception [ 1070.305711] Kernel Offset: disabled [ 1070.305971] ---[ end Kernel panic - not syncing: Fatal exception ]--- Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz> Cc: stable@vger.kernel.org Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>