path: root/arch/powerpc
Age  Commit message  Author
2023-06-28Merge tag 'mm-nonmm-stable-2023-06-24-19-23' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull non-mm updates from Andrew Morton: - Arnd Bergmann has fixed a bunch of -Wmissing-prototypes in top-level directories - Douglas Anderson has added a new "buddy" mode to the hardlockup detector. It permits the detector to work on architectures which cannot provide the required interrupts, by having CPUs periodically perform checks on other CPUs - Zhen Lei has enhanced kexec's ability to support two crash regions - Petr Mladek has done a lot of cleanup on the hard lockup detector's Kconfig entries - And the usual bunch of singleton patches in various places * tag 'mm-nonmm-stable-2023-06-24-19-23' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (72 commits) kernel/time/posix-stubs.c: remove duplicated include ocfs2: remove redundant assignment to variable bit_off watchdog/hardlockup: fix typo in config HARDLOCKUP_DETECTOR_PREFER_BUDDY powerpc: move arch_trigger_cpumask_backtrace from nmi.h to irq.h devres: show which resource was invalid in __devm_ioremap_resource() watchdog/hardlockup: define HARDLOCKUP_DETECTOR_ARCH watchdog/sparc64: define HARDLOCKUP_DETECTOR_SPARC64 watchdog/hardlockup: make HAVE_NMI_WATCHDOG sparc64-specific watchdog/hardlockup: declare arch_touch_nmi_watchdog() only in linux/nmi.h watchdog/hardlockup: make the config checks more straightforward watchdog/hardlockup: sort hardlockup detector related config values a logical way watchdog/hardlockup: move SMP barriers from common code to buddy code watchdog/buddy: simplify the dependency for HARDLOCKUP_DETECTOR_PREFER_BUDDY watchdog/buddy: don't copy the cpumask in watchdog_next_cpu() watchdog/buddy: cleanup how watchdog_buddy_check_hardlockup() is called watchdog/hardlockup: remove softlockup comment in touch_nmi_watchdog() watchdog/hardlockup: in watchdog_hardlockup_check() use cpumask_copy() watchdog/hardlockup: don't use raw_cpu_ptr() in watchdog_hardlockup_kick() watchdog/hardlockup: HAVE_NMI_WATCHDOG must implement watchdog_hardlockup_probe() watchdog/hardlockup: keep kernel.nmi_watchdog sysctl as 0444 if probe fails ...
2023-06-28Merge tag 'mm-stable-2023-06-24-19-15' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull mm updates from Andrew Morton: - Yosry Ahmed brought back some cgroup v1 stats in OOM logs - Yosry has also eliminated cgroup's atomic rstat flushing - Nhat Pham adds the new cachestat() syscall. It provides userspace with the ability to query pagecache status - a similar concept to mincore() but more powerful and with improved usability - Mel Gorman provides more optimizations for compaction, reducing the prevalence of page rescanning - Lorenzo Stoakes has done some maintenance work on the get_user_pages() interface - Liam Howlett continues with cleanups and maintenance work to the maple tree code. Peng Zhang also does some work on maple tree - Johannes Weiner has done some cleanup work on the compaction code - David Hildenbrand has contributed additional selftests for get_user_pages() - Thomas Gleixner has contributed some maintenance and optimization work for the vmalloc code - Baolin Wang has provided some compaction cleanups - SeongJae Park continues maintenance work on the DAMON code - Huang Ying has done some maintenance on the swap code's usage of device refcounting - Christoph Hellwig has some cleanups for the filemap/directio code - Ryan Roberts provides two patch series which yield some rationalization of the kernel's access to pte entries - use the provided APIs rather than open-coding accesses - Lorenzo Stoakes has some fixes to the interaction between pagecache and directio access to file mappings - John Hubbard has a series of fixes to the MM selftesting code - ZhangPeng continues the folio conversion campaign - Hugh Dickins has been working on the pagetable handling code, mainly with a view to reducing the load on the mmap_lock - Catalin Marinas has reduced the arm64 kmalloc() minimum alignment from 128 to 8 - Domenico Cerasuolo has improved the zswap reclaim mechanism by reorganizing the LRU management - Matthew Wilcox provides some fixups to make gfs2 work better with the buffer_head code - Vishal Moola also has done some folio conversion work - Matthew Wilcox has removed the remnants of the pagevec code - their functionality is migrated over to struct folio_batch * tag 'mm-stable-2023-06-24-19-15' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (380 commits) mm/hugetlb: remove hugetlb_set_page_subpool() mm: nommu: correct the range of mmap_sem_read_lock in task_mem() hugetlb: revert use of page_cache_next_miss() Revert "page cache: fix page_cache_next/prev_miss off by one" mm/vmscan: fix root proactive reclaim unthrottling unbalanced node mm: memcg: rename and document global_reclaim() mm: kill [add|del]_page_to_lru_list() mm: compaction: convert to use a folio in isolate_migratepages_block() mm: zswap: fix double invalidate with exclusive loads mm: remove unnecessary pagevec includes mm: remove references to pagevec mm: rename invalidate_mapping_pagevec to mapping_try_invalidate mm: remove struct pagevec net: convert sunrpc from pagevec to folio_batch i915: convert i915_gpu_error to use a folio_batch pagevec: rename fbatch_count() mm: remove check_move_unevictable_pages() drm: convert drm_gem_put_pages() to use a folio_batch i915: convert shmem_sg_free_table() to use a folio_batch scatterlist: add sg_set_folio() ...
2023-06-27Merge tag 'wq-for-6.5-cleanup-ordered' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq Pull ordered workqueue creation updates from Tejun Heo: "For historical reasons, unbound workqueues with max concurrency limit of 1 are considered ordered, even though the concurrency limit hasn't been system-wide for a long time. This creates ambiguity around whether ordered execution is actually required for correctness, which was actually confusing for e.g. btrfs (btrfs updates are being routed through the btrfs tree). There aren't that many users in the tree which use the combination and there are pending improvements to unbound workqueue affinity handling which will make inadvertent use of ordered workqueue a bigger loss. This clarifies the situation for most of them by updating the ones which require ordered execution to use alloc_ordered_workqueue(). There are some conversions being routed through subsystem-specific trees and likely a few stragglers. Once they're all converted, workqueue can trigger a warning on unbound + @max_active==1 usages and eventually drop the implicit ordered behavior" * tag 'wq-for-6.5-cleanup-ordered' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: rxrpc: Use alloc_ordered_workqueue() to create ordered workqueues net: qrtr: Use alloc_ordered_workqueue() to create ordered workqueues net: wwan: t7xx: Use alloc_ordered_workqueue() to create ordered workqueues dm integrity: Use alloc_ordered_workqueue() to create ordered workqueues media: amphion: Use alloc_ordered_workqueue() to create ordered workqueues scsi: NCR5380: Use default @max_active for hostdata->work_q media: coda: Use alloc_ordered_workqueue() to create ordered workqueues crypto: octeontx2: Use alloc_ordered_workqueue() to create ordered workqueues wifi: ath10/11/12k: Use alloc_ordered_workqueue() to create ordered workqueues wifi: mwifiex: Use default @max_active for workqueues wifi: iwlwifi: Use default @max_active for trans_pcie->rba.alloc_wq xen/pvcalls: Use alloc_ordered_workqueue() to create ordered workqueues virt: acrn: Use alloc_ordered_workqueue() to create ordered workqueues net: octeontx2: Use alloc_ordered_workqueue() to create ordered workqueues net: thunderx: Use alloc_ordered_workqueue() to create ordered workqueues greybus: Use alloc_ordered_workqueue() to create ordered workqueues powerpc, workqueue: Use alloc_ordered_workqueue() to create ordered workqueues
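To illustrate the shape of these conversions, here is a hedged before/after sketch (the workqueue name and flags are invented for illustration, not taken from any of the listed patches):

    /* Before: unbound workqueue with @max_active == 1, implicitly ordered */
    wq = alloc_workqueue("example_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 1);

    /* After: ordering is requested explicitly, so the intent survives the
     * coming changes to unbound workqueue affinity handling */
    wq = alloc_ordered_workqueue("example_wq", WQ_MEM_RECLAIM);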
2023-06-27Merge tag 'objtool-core-2023-06-27' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull objtool updates from Ingo Molnar: "Build footprint & performance improvements: - Reduce memory usage with CONFIG_DEBUG_INFO=y In the worst case of an allyesconfig+CONFIG_DEBUG_INFO=y kernel, DWARF creates almost 200 million relocations, ballooning objtool's peak heap usage to 53GB. These patches reduce that to 25GB. On a distro-type kernel with kernel IBT enabled, they reduce objtool's peak heap usage from 4.2GB to 2.8GB. These changes also improve the runtime significantly. Debuggability improvements: - Add the unwind_debug command-line option, for more extended unwinding debugging output - Limit unreachable warnings to once per function - Add verbose option for disassembling affected functions - Include backtrace in verbose mode - Detect missing __noreturn annotations - Ignore exc_double_fault() __noreturn warnings - Remove superfluous global_noreturns entries - Move noreturn function list to separate file - Add __kunit_abort() to noreturns Unwinder improvements: - Allow stack operations in UNWIND_HINT_UNDEFINED regions - drm/vmwgfx: Add unwind hints around RBP clobber Cleanups: - Move the x86 entry thunk restore code into thunk functions - x86/unwind/orc: Use swap() instead of open coding it - Remove unnecessary/unused variables Fixes for modern stack canary handling" * tag 'objtool-core-2023-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (42 commits) x86/orc: Make the is_callthunk() definition depend on CONFIG_BPF_JIT=y objtool: Skip reading DWARF section data objtool: Free insns when done objtool: Get rid of reloc->rel[a] objtool: Shrink elf hash nodes objtool: Shrink reloc->sym_reloc_entry objtool: Get rid of reloc->jump_table_start objtool: Get rid of reloc->addend objtool: Get rid of reloc->type objtool: Get rid of reloc->offset objtool: Get rid of reloc->idx objtool: Get rid of reloc->list objtool: Allocate relocs in advance for new rela sections objtool: Add for_each_reloc() objtool: Don't free memory in elf_close() objtool: Keep GElf_Rel[a] structs synced objtool: Add elf_create_section_pair() objtool: Add mark_sec_changed() objtool: Fix reloc_hash size objtool: Consolidate rel/rela handling ...
2023-06-27Merge tag 'locking-core-2023-06-27' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull locking updates from Ingo Molnar: - Introduce cmpxchg128() -- aka. the demise of cmpxchg_double() The cmpxchg128() family of functions is basically & functionally the same as cmpxchg_double(), but with a saner interface. Instead of a 6-parameter horror that forced u128 - u64/u64-halves layout details on the interface and exposed users to complexity, fragility & bugs, use a natural 3-parameter interface with u128 types. - Restructure the generated atomic headers, and add kerneldoc comments for all of the generic atomic{,64,_long}_t operations. The generated definitions are much cleaner now, and come with documentation. - Implement lock_set_cmp_fn() on lockdep, for defining an ordering when taking multiple locks of the same type. This gets rid of one use of lockdep_set_novalidate_class() in the bcache code. - Fix raw_cpu_generic_try_cmpxchg() bug due to an unintended variable shadowing generating garbage code on Clang on certain ARM builds. * tag 'locking-core-2023-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (43 commits) locking/atomic: scripts: fix ${atomic}_dec_if_positive() kerneldoc percpu: Fix self-assignment of __old in raw_cpu_generic_try_cmpxchg() locking/atomic: treewide: delete arch_atomic_*() kerneldoc locking/atomic: docs: Add atomic operations to the driver basic API documentation locking/atomic: scripts: generate kerneldoc comments docs: scripts: kernel-doc: accept bitwise negation like ~@var locking/atomic: scripts: simplify raw_atomic*() definitions locking/atomic: scripts: simplify raw_atomic_long*() definitions locking/atomic: scripts: split pfx/name/sfx/order locking/atomic: scripts: restructure fallback ifdeffery locking/atomic: scripts: build raw_atomic_long*() directly locking/atomic: treewide: use raw_atomic*_<op>() locking/atomic: scripts: add trivial raw_atomic*_<op>() locking/atomic: scripts: factor out order template generation locking/atomic: scripts: remove leftover "${mult}" locking/atomic: scripts: remove bogus order parameter locking/atomic: xtensa: add preprocessor symbols locking/atomic: x86: add preprocessor symbols locking/atomic: sparc: add preprocessor symbols locking/atomic: sh: add preprocessor symbols ...
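A minimal sketch of the new interface (illustrative only; the variable and function names below are invented):

    /* cmpxchg_double() took six parameters and forced a u64/u64-halves
     * layout on callers; cmpxchg128() works on a plain u128 value. */
    static u128 example_slot;

    static bool example_update(u128 old, u128 new)
    {
            return cmpxchg128(&example_slot, old, new) == old;
    }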
2023-06-27powerpc: remove checks for binutils older than 2.25Masahiro Yamada
Commit e4412739472b ("Documentation: raise minimum supported version of binutils to 2.25") allows us to remove the checks for old binutils. There is no more user for ld-ifversion. Remove it as well. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230119082250.151485-1-masahiroy@kernel.org
2023-06-27powerpc: Fail build if using recordmcount with binutils v2.37Naveen N Rao
binutils v2.37 drops unused section symbols, which prevents recordmcount from capturing mcount locations in sections that have no non-weak symbols. This results in a build failure with a message such as: Cannot find symbol for section 12: .text.perf_callchain_kernel. kernel/events/callchain.o: failed The change to binutils was reverted for v2.38, so this behavior is specific to binutils v2.37: https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=c09c8b42021180eee9495bd50d8b35e683d3901b Objtool is able to cope with such sections, so this issue is specific to recordmcount. Fail the build and print a warning if binutils v2.37 is detected and if we are using recordmcount. Cc: stable@vger.kernel.org Suggested-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230530061436.56925-1-naveen@kernel.org
2023-06-26Merge tag 'x86-boot-2023-06-26' of ↵Linus Torvalds
ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 boot updates from Thomas Gleixner: "Initialize FPU late. Right now FPU is initialized very early during boot. There is no real requirement to do so. The only requirement is to have it done before alternatives are patched. That's done in check_bugs() which does way more than what the function name suggests. So first rename check_bugs() to arch_cpu_finalize_init() which makes it clear what this is about. Move the invocation of arch_cpu_finalize_init() earlier in start_kernel() as it has to be done before fork_init() which needs to know the FPU register buffer size. With those prerequisites the FPU initialization can be moved into arch_cpu_finalize_init(), which removes it from the early and fragile part of the x86 bringup" * tag 'x86-boot-2023-06-26' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/mem_encrypt: Unbreak the AMD_MEM_ENCRYPT=n build x86/fpu: Move FPU initialization into arch_cpu_finalize_init() x86/fpu: Mark init functions __init x86/fpu: Remove cpuinfo argument from init functions x86/init: Initialize signal frame size late init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init() init: Invoke arch_cpu_finalize_init() earlier init: Remove check_bugs() leftovers um/cpu: Switch to arch_cpu_finalize_init() sparc/cpu: Switch to arch_cpu_finalize_init() sh/cpu: Switch to arch_cpu_finalize_init() mips/cpu: Switch to arch_cpu_finalize_init() m68k/cpu: Switch to arch_cpu_finalize_init() loongarch/cpu: Switch to arch_cpu_finalize_init() ia64/cpu: Switch to arch_cpu_finalize_init() ARM: cpu: Switch to arch_cpu_finalize_init() x86/cpu: Switch to arch_cpu_finalize_init() init: Provide arch_cpu_finalize_init()
2023-06-26Merge tag 'for-6.5/block-2023-06-23' of git://git.kernel.dk/linuxLinus Torvalds
Pull block updates from Jens Axboe: - NVMe pull request via Keith: - Various cleanups all around (Irvin, Chaitanya, Christophe) - Better struct packing (Christophe JAILLET) - Reduce controller error logs for optional commands (Keith) - Support for >=64KiB block sizes (Daniel Gomez) - Fabrics fixes and code organization (Max, Chaitanya, Daniel Wagner) - bcache updates via Coly: - Fix a race at init time (Mingzhe Zou) - Misc fixes and cleanups (Andrea, Thomas, Zheng, Ye) - use page pinning in the block layer for dio (David) - convert old block dio code to page pinning (David, Christoph) - cleanups for pktcdvd (Andy) - cleanups for rnbd (Guoqing) - use the unchecked __bio_add_page() for the initial single page additions (Johannes) - fix overflows in the Amiga partition handling code (Michael) - improve mq-deadline zoned device support (Bart) - keep passthrough requests out of the IO schedulers (Christoph, Ming) - improve support for flush requests, making them less special to deal with (Christoph) - add bdev holder ops and shutdown methods (Christoph) - fix the name_to_dev_t() situation and use cases (Christoph) - decouple the block open flags from fmode_t (Christoph) - ublk updates and cleanups, including adding user copy support (Ming) - BFQ sanity checking (Bart) - convert brd from radix to xarray (Pankaj) - constify various structures (Thomas, Ivan) - more fine grained persistent reservation ioctl capability checks (Jingbo) - misc fixes and cleanups (Arnd, Azeem, Demi, Ed, Hengqi, Hou, Jan, Jordy, Li, Min, Yu, Zhong, Waiman) * tag 'for-6.5/block-2023-06-23' of git://git.kernel.dk/linux: (266 commits) scsi/sg: don't grab scsi host module reference ext4: Fix warning in blkdev_put() block: don't return -EINVAL for not found names in devt_from_devname cdrom: Fix spectre-v1 gadget block: Improve kernel-doc headers blk-mq: don't insert passthrough request into sw queue bsg: make bsg_class a static const structure ublk: make ublk_chr_class a static const structure aoe: make aoe_class a static const structure block/rnbd: make all 'class' structures const block: fix the exclusive open mask in disk_scan_partitions block: add overflow checks for Amiga partition support block: change all __u32 annotations to __be32 in affs_hardblocks.h block: fix signed int overflow in Amiga partition support block: add capacity validation in bdev_add_partition() block: fine-granular CAP_SYS_ADMIN for Persistent Reservation block: disallow Persistent Reservation on partitions reiserfs: fix blkdev_put() warning from release_journal_dev() block: fix wrong mode for blkdev_get_by_dev() from disk_scan_partitions() block: document the holder argument to blkdev_get_by_path ...
2023-06-26Merge tag 'v6.5/vfs.misc' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs Pull misc vfs updates from Christian Brauner: "Miscellaneous features, cleanups, and fixes for vfs and individual fs Features: - Use mode 0600 for file created by cachefilesd so it can be run by unprivileged users. This aligns them with directories which are already created with mode 0700 by cachefilesd - Reorder a few members in struct file to prevent some false sharing scenarios - Indicate that an eventfd is used as a semaphore in the eventfd's fdinfo procfs file - Add a missing uapi header for eventfd exposing relevant uapi defines - Let the VFS protect transitions of a superblock from read-only to read-write in addition to the protection it already provides for transitions from read-write to read-only. Protecting read-only to read-write transitions allows filesystems such as ext4 to perform internal writes, keeping writers away until the transition is completed Cleanups: - Arnd removed the architecture specific arch_report_meminfo() prototypes and added a generic one into procfs.h. Note, we got a report about a warning in amdpgpu codepaths that suggested this was bisectable to this change but we concluded it was a false positive - Remove unused parameters from split_fs_names() - Rename put_and_unmap_page() to unmap_and_put_page() to let the name reflect the order of the cleanup operation that has to unmap before the actual put - Unexport buffer_check_dirty_writeback() as it is not used outside of block device aops - Stop allocating aio rings from highmem - Protecting read-{only,write} transitions in the VFS used open-coded barriers in various places. Replace them with proper little helpers and document both the helpers and all barrier interactions involved when transitioning between read-{only,write} states - Use flexible array members in old readdir codepaths Fixes: - Use the correct type __poll_t for epoll and eventfd - Replace all deprecated strlcpy() invocations, whose return value isn't checked with an equivalent strscpy() call - Fix some kernel-doc warnings in fs/open.c - Reduce the stack usage in jffs2's xattr codepaths finally getting rid of this: fs/jffs2/xattr.c:887:1: error: the frame size of 1088 bytes is larger than 1024 bytes [-Werror=frame-larger-than=] royally annoying compilation warning - Use __FMODE_NONOTIFY instead of FMODE_NONOTIFY where an int and not fmode_t is required to avoid fmode_t to integer degradation warnings - Create coredumps with O_WRONLY instead of O_RDWR.
There's a long explanation in that commit how O_RDWR is actually a bug which we found out with the help of Linus and git archeology - Fix "no previous prototype" warnings in the pipe codepaths - Add overflow calculations for remap_verify_area() as a signed addition overflow could be triggered in xfstests - Fix a null pointer dereference in sysv - Use an unsigned variable for length calculations in jfs avoiding compilation warnings with gcc 13 - Fix a dangling pipe pointer in the watch queue codepath - The legacy mount option parser provided as a fallback by the VFS for filesystems not yet converted to the new mount api did prefix the generated mount option string with a leading ',' causing issues for some filesystems - Fix a repeated word in a comment in fs.h - autofs: Update the ctime when mtime is updated as mandated by POSIX" * tag 'v6.5/vfs.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (27 commits) readdir: Replace one-element arrays with flexible-array members fs: Provide helpers for manipulating sb->s_readonly_remount fs: Protect reconfiguration of sb read-write from racing writes eventfd: add a uapi header for eventfd userspace APIs autofs: set ctime as well when mtime changes on a dir eventfd: show the EFD_SEMAPHORE flag in fdinfo fs/aio: Stop allocating aio rings from HIGHMEM fs: Fix comment typo fs: unexport buffer_check_dirty_writeback fs: avoid empty option when generating legacy mount string watch_queue: prevent dangling pipe pointer fs.h: Optimize file struct to prevent false sharing highmem: Rename put_and_unmap_page() to unmap_and_put_page() cachefiles: Allow the cache to be non-root init: remove unused names parameter in split_fs_names() jfs: Use unsigned variable for length calculations fs/sysv: Null check to prevent null-ptr-deref bug fs: use UB-safe check for signed addition overflow in remap_verify_area procfs: consolidate arch_report_meminfo declaration fs: pipe: reveal missing function protoypes ...
2023-06-26powerpc/iommu: TCEs are incorrectly manipulated with DLPAR add/remove of memoryGaurav Batra
When memory is dynamically added/removed, iommu_mem_notifier() is invoked. This routine traverses through all the DMA windows (DDW only, not default windows) to add/remove "direct" TCE mappings. The routines for this purpose are tce_setrange_multi_pSeriesLP() and tce_clearrange_multi_pSeriesLP(). Both these routines are designed for direct mapped DMA windows only. The issue is that there could be some DMA windows in the list which are not "direct" mapped. Calling these routines will either 1) remove some dynamically mapped TCEs, or 2) try to add TCEs which are out of bounds and HCALL returns H_PARAMETER. Here are the side effects when these routines are incorrectly invoked for "dynamically" mapped DMA windows. tce_setrange_multi_pSeriesLP() This adds direct mapped TCEs. Now, this could invoke HCALL to add TCEs with out-of-bound range. In this scenario, HCALL will return H_PARAMETER and DLPAR ADD of memory will fail. tce_clearrange_multi_pSeriesLP() This will remove a range of TCEs. The TCE range that is calculated, depending on the memory range being added, could in fact be mapping some other memory address (for dynamic DMA window scenario). This will wipe out those TCEs. The solution is for iommu_mem_notifier() to only invoke these routines for "direct" mapped DMA windows. Signed-off-by: Gaurav Batra <gbatra@linux.vnet.ibm.com> Reviewed-by: Brian King <brking@linux.vnet.ibm.com> [mpe: Initialise direct at allocation time in ddw_list_new_entry()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230613171641.15641-1-gbatra@linux.vnet.ibm.com
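Roughly, the fix makes the notifier skip non-direct windows; a sketch of the intended shape (the list, flag and argument names here are assumptions for illustration, not the exact pseries code):

    list_for_each_entry(window, &dma_win_list, list) {
            if (!window->direct)    /* assumed flag, set when the DDW is direct-mapped */
                    continue;
            ret |= tce_setrange_multi_pSeriesLP(arg->start_pfn, arg->nr_pages,
                                                window->prop);
    }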
2023-06-24powerpc/mm: convert coprocessor fault to lock_mm_and_find_vma()Linus Torvalds
This is one of the simple cases, except there's no pt_regs pointer. Which is fine, as lock_mm_and_find_vma() is set up to work fine with a NULL pt_regs. Powerpc already enabled LOCK_MM_AND_FIND_VMA for the main CPU faulting, so we can just use the helper without any extra work. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
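The helper's usage pattern is roughly the following (a sketch, not the exact powerpc hunk):

    vma = lock_mm_and_find_vma(mm, ea, NULL);   /* NULL pt_regs is accepted */
    if (!vma)
            return -EFAULT;         /* the mmap lock is not held on failure */
    /* ... access checks and handle_mm_fault() ... */
    mmap_read_unlock(mm);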
2023-06-24powerpc/mm: Convert to using lock_mm_and_find_vma()Michael Ellerman
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2023-06-23powerpc: move arch_trigger_cpumask_backtrace from nmi.h to irq.hDouglas Anderson
The powerpc architecture was the only one that defined arch_trigger_cpumask_backtrace() in asm/nmi.h instead of asm/irq.h. Move it to be consistent. This fixes compile time errors introduced by commit 7ca8fe94aa92 ("watchdog/hardlockup: define HARDLOCKUP_DETECTOR_ARCH"). That commit caused <asm/nmi.h> to stop being included if the hardlockup detector wasn't enabled. The specific errors were: error: implicit declaration of function `nmi_cpu_backtrace' error: implicit declaration of function `nmi_trigger_cpumask_backtrace' NOTE: when moving this into irq.h, we also change the guards from just checking if "CONFIG_NMI_IPI" is defined to also checking if "CONFIG_PPC_BOOK3S_64" is defined. This matches the code in arch/powerpc/kernel/stacktrace.c. Previously this worked because <asm/nmi.h> was included if "CONFIG_HAVE_HARDLOCKUP_DETECTOR_ARCH" was defined. For powerpc that's only selected if "CONFIG_PPC_BOOK3S_64" is defined. [dianders@chromium.org: change the guards to include CONFIG_PPC_BOOK3S_64] Link: https://lkml.kernel.org/r/20230622202816.v2.1.Ice67126857506712559078e7de26d32d26e64631@changeid Link: https://lkml.kernel.org/r/20230621164809.1.Ice67126857506712559078e7de26d32d26e64631@changeid Fixes: 7ca8fe94aa92 ("watchdog/hardlockup: define HARDLOCKUP_DETECTOR_ARCH") Signed-off-by: Douglas Anderson <dianders@chromium.org> Reported-by: Michael Ellerman <mpe@ellerman.id.au> Closes: https://lore.kernel.org/r/871qi5otdh.fsf@mail.lhotse Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Douglas Anderson <dianders@chromium.org> Cc: Laurent Dufour <ldufour@linux.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Tom Rix <trix@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
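The moved declaration in asm/irq.h looks roughly like this (a sketch, with the guards as described above):

    #if defined(CONFIG_NMI_IPI) && defined(CONFIG_PPC_BOOK3S_64)
    extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
                                               bool exclude_self);
    #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
    #endif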
2023-06-23Merge mm-hotfixes-stable into mm-stable to pick up depended-upon changes.Andrew Morton
2023-06-22Merge tag 'powerpc-6.4-5' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc fix from Michael Ellerman: - Disable IRQs when switching mm in exit_lazy_flush_tlb() called from exit_mmap() Thanks to Nicholas Piggin and Sachin Sant. * tag 'powerpc-6.4-5' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/64s/radix: Fix exit lazy tlb mm switch with irqs enabled
2023-06-21powerpc/iommu: Only build sPAPR access functions on pSeriesTimothy Pearson
and PowerNV A build failure with CONFIG_HAVE_PCI=y set without PSERIES or POWERNV set was caught by the random configuration checker. Guard the sPAPR specific IOMMU functions on CONFIG_PPC_PSERIES || CONFIG_PPC_POWERNV. Signed-off-by: Timothy Pearson <tpearson@raptorengineering.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/2015925968.3546872.1685990936823.JavaMail.zimbra@raptorengineeringinc.com
2023-06-21powerpc: powernv: Annotate data races in opal eventsRohan McLure
The kopald thread handles opal events as they appear, but by polling a static bit-vector in last_outstanding_events. Annotate these data races accordingly. We are not at risk of missing events, but use of READ_ONCE, WRITE_ONCE will assist readers in seeing that kopald only consumes the events it is aware of when it is scheduled. Also removes extraneous KCSAN warnings. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-10-rmclure@linux.ibm.com
2023-06-21powerpc: Mark writes registering ipi to host cpu through kvm and pollingRohan McLure
Mark writes to hypervisor ipi state so that KCSAN recognises the asynchronous issuing of kvmppc_{set,clear}_host_ipi as intended, with atomic writes. Mark asynchronous polls to this variable in kvm_ppc_read_one_intr(). Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-9-rmclure@linux.ibm.com
2023-06-21powerpc: Annotate accesses to ipi message flagsRohan McLure
IPI message flags are observed and consequently consumed in the smp_ipi_demux_relaxed function, which handles these message sources until it observes none more arriving. Mark the checked loop guard with READ_ONCE, to signal to KCSAN that the read is known to be volatile, and that non-determinism is expected. Mark write for message source in smp_muxed_ipi_set_message(). Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-8-rmclure@linux.ibm.com
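The annotated polling loop in smp_ipi_demux_relaxed() has roughly this shape (a sketch, not the exact hunk):

    do {
            all = xchg(&info->messages, 0);
            /* ... handle the PPC_MSG_* bits set in 'all' ... */
    } while (READ_ONCE(info->messages));    /* polling is intentional */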
2023-06-21powerpc: powernv: Fix KCSAN datarace warnings on idle_state contentionRohan McLure
The idle_state entry in the PACA on PowerNV features a bit which is atomically tested and set through ldarx/stdcx. to be used as a spinlock. This lock then guards access to other bit fields of idle_state. KCSAN cannot differentiate between any of these bitfield accesses as they all are implemented by 8-byte store/load instructions, thus cores contending on the bit-lock appear to data race with modifications to idle_state. Separate the bit-lock entry from the data guarded by the lock to avoid the possibility of data races being detected by KCSAN. Suggested-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-7-rmclure@linux.ibm.com
2023-06-21powerpc: Mark [h]ssr_valid accesses in check_return_regs_validRohan McLure
Checks to see if the [H]SRR registers have been clobbered by (soft) NMI interrupts imply the possibility for a data race on the [h]srr_valid entries in the PACA. Annotate accesses to these fields with READ_ONCE, removing the need for the barrier. The diagnostic can use plain-access reads and writes, but annotate with data_race. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reported-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-5-rmclure@linux.ibm.com
2023-06-21powerpc: qspinlock: Enforce qnode writes prior to publishing to queueRohan McLure
Annotate the release barrier and memory clobber (in effect, producing a compiler barrier) in the publish_tail_cpu call. These barriers have the effect of ensuring that qnode attributes are all written to prior to publishing the node to the waitqueue. Even while the initial write to the 'locked' attribute is guaranteed to terminate prior to the node being visible, KCSAN still complains that the write is reorderable by the compiler. Issue a kcsan_release() to inform KCSAN of the release barrier contained in publish_tail_cpu(). Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-3-rmclure@linux.ibm.com
2023-06-21powerpc: qspinlock: Mark accesses to qnode lock checksRohan McLure
The powerpc implementation of qspinlocks will both poll and spin on the bitlock guarding a qnode. Mark these accesses with READ_ONCE to convey to KCSAN that polling is intentional here. Signed-off-by: Rohan McLure <rmclure@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230510033117.1395895-2-rmclure@linux.ibm.com
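The marked accesses follow the usual pattern for intentional polling (a sketch, not the exact qspinlock code):

    /* wait for the previous queue entry to hand the bitlock over */
    while (!READ_ONCE(node->locked))
            cpu_relax();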
2023-06-21powerpc/powernv/pci: Remove last IODA1 definesJoel Stanley
Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230613045202.294451-4-joel@jms.id.au
2023-06-21powerpc/powernv/pci: Remove MVE codeJoel Stanley
With IODA1 support gone the OPAL calls to set MVE are dead code. Remove them. Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230613045202.294451-3-joel@jms.id.au
2023-06-21powerpc/powernv/pci: Remove ioda1 supportJoel Stanley
The final "VPL" Power7 boxes that were used for powernv bringup have been scrapped, meaning there are no machines with ioda1 left. This patch removes the obvious unused code. Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230613045202.294451-2-joel@jms.id.au
2023-06-21powerpc: 52xx: Make immr_id DT match tables staticRob Herring
In some builds, the mpc52xx_pm_prepare()/lite5200_pm_prepare() functions generate stack size warnings. The addition of 'struct resource' in commit 2500763dd3db ("powerpc: Use of_address_to_resource()") grew the stack size and is blamed for the warnings. However, the real issue is there's no reason the 'struct of_device_id immr_ids' DT match tables need to be on the stack as they are constant. Declare them as static to move them off the stack. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202306130405.uTv5yOZD-lkp@intel.com/ Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230614171724.2403982-1-robh@kernel.org
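The change is simply to give the match table static storage duration, roughly as below (the compatible strings shown are illustrative):

    static const struct of_device_id immr_ids[] = {
            { .compatible = "fsl,mpc5200-immr", },
            { .compatible = "fsl,mpc5200b-immr", },
            { /* sentinel */ }
    };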
2023-06-21powerpc: mpc512x: Remove open coded "ranges" parsingRob Herring
"ranges" is a standard property, and we have common helper functions for parsing it, so let's use the for_each_of_range() iterator. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609183232.1767050-1-robh@kernel.org
2023-06-21powerpc: fsl_soc: Use of_range_to_resource() for "ranges" parsingRob Herring
"ranges" is a standard property with common parsing functions. Users shouldn't be implementing their own parsing of it. Refactor the FSL RapidIO "ranges" parsing to use of_range_to_resource() instead. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609183238.1767186-1-robh@kernel.org
2023-06-21powerpc: fsl: Use of_property_read_reg() to parse "reg"Rob Herring
Use the recently added of_property_read_reg() helper to get the untranslated "reg" address value. Signed-off-by: Rob Herring <robh@kernel.org> [mpe: Add required include of of_address.h] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609183151.1766261-1-robh@kernel.org
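Usage is roughly the following (a sketch; 'np' is a stand-in for the node being parsed):

    u64 addr, size;

    if (!of_property_read_reg(np, 0, &addr, &size)) {
            /* addr/size hold the raw, untranslated "reg" values */
    }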
2023-06-21powerpc: fsl_rio: Use of_range_to_resource() for "ranges" parsingRob Herring
"ranges" is a standard property with common parsing functions. Users shouldn't be implementing their own parsing of it. Refactor the FSL RapidIO "ranges" parsing to use of_range_to_resource() instead. One change is the original code would look for "#size-cells" and "#address-cells" in the parent node if not found in the port child nodes. That is non-standard behavior and not necessary AFAICT. In 2011 in commit 54986964c13c ("powerpc/85xx: Update SRIO device tree nodes") there was an ABI break. The upstream .dts files have been correct since at least that point. Signed-off-by: Rob Herring <robh@kernel.org> [mpe: Remove now unused "cell" variable] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609183244.1767325-1-robh@kernel.org "ranges" is a standard property with common parsing functions. Users shouldn't be implementing their own parsing of it. Refactor the FSL RapidIO "ranges" parsing to use of_range_to_resource() instead. One change is the original code would look for "#size-cells" and "#address-cells" in the parent node if not found in the port child nodes. That is non-standard behavior and not necessary AFAICT. In 2011 in commit 54986964c13c ("powerpc/85xx: Update SRIO device tree nodes") there was an ABI break. The upstream .dts files have been correct since at least that point. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609183244.1767325-1-robh@kernel.org
2023-06-21powerpc: powermac: Use of_get_cpu_hwid() to read CPU node 'reg'Rob Herring
Replace open coded reading of CPU nodes' "reg" properties with of_get_cpu_hwid() dedicated for this purpose. Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230319145931.65499-1-robh@kernel.org
2023-06-21powerpc: drop MPC85xx_CDS platform supportPaul Gortmaker
The MPC8541/8548/8555 Configurable Development System (CDS) were the vehicle used to provide evaluation of the 1st e500-v2 CPUs around 2007. Similar to the earlier MPC83xx-MDS systems we removed, the "brains" exist on a PCI-X card, but additional connectors exist to the right of the PCI-X slot, two structural metal pins are used to provide stability in a vertical ATX mounting, and the CPU is now on a daughter-card vs. a clamped down BGA. Given the extra complexity and risk of connector damage, the 8548CDS I had access to came pre-assembled in a basic white Antec case common for that era, and I'm inclined to assume that was the default. Power was typical "Pentium4" 2005 ATX - the main 20 pin connector went to the PCI ATX form factor backplane, and the 4 pin black/yellow went to the CPU card. Like previous evaluation boards, they attempted to provide break-out connectors for as many features as possible, and that made for a fairly complex looking system. In any case, these are over 15 years old, and fairly complex systems, originally made for a small group of industry related people, and made for use where quiet fan operation wasn't important. Given that, it makes sense to remove support from them in 2023. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230620043300.197546-3-paul.gortmaker@windriver.com
2023-06-21powerpc: drop MPC8540_ADS and MPC8560_ADS platform supportPaul Gortmaker
Based on the revision history in the manual(s), these e500-v1 platforms were first available around 2002. Like a lot of evaluation boards, they attempted to provide break-out connectors for all possible features, and that combined with four PCI-X slots (and the age/era) meant for a considerably large board. As I recall it, from a Linux point of view, the biggest difference between 8540 and 8560 was in the UART implementation, and that is reflected in a diff of the defconfigs. In any case, these are over 20 years old, and by today's standards only have a small amount of DDR1 memory, and were not widely available. Given that, it makes sense to remove support from them in 2023. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230620043300.197546-2-paul.gortmaker@windriver.com
2023-06-21powerpc/mm/dax: Fix the condition when checking if altmap vmemap can ↵Aneesh Kumar K.V
cross-boundary Without this fix, the last subsection vmemmap can end up in memory even if the namespace is created with -M mem and has sufficient space in the altmap area. Fixes: cf387d9644d8 ("libnvdimm/altmap: Track namespace boundaries in altmap") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com <mailto:sachinp@linux.ibm.com>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230616110826.344417-6-aneesh.kumar@linux.ibm.com
2023-06-21powerpc/book3s64/mm: Use PAGE_KERNEL instead of opencodingAneesh Kumar K.V
No functional change in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com <mailto:sachinp@linux.ibm.com>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230616110826.344417-5-aneesh.kumar@linux.ibm.com
2023-06-21powerpc/book3s64/mm: Fix DirectMap stats in /proc/meminfoAneesh Kumar K.V
On memory unplug reduce DirectMap page count correctly. root@ubuntu-guest:# grep Direct /proc/meminfo DirectMap4k: 0 kB DirectMap64k: 0 kB DirectMap2M: 115343360 kB DirectMap1G: 0 kB Before fix: root@ubuntu-guest:# ndctl disable-namespace all disabled 1 namespace root@ubuntu-guest:# grep Direct /proc/meminfo DirectMap4k: 0 kB DirectMap64k: 0 kB DirectMap2M: 115343360 kB DirectMap1G: 0 kB After fix: root@ubuntu-guest:# ndctl disable-namespace all disabled 1 namespace root@ubuntu-guest:# grep Direct /proc/meminfo DirectMap4k: 0 kB DirectMap64k: 0 kB DirectMap2M: 104857600 kB DirectMap1G: 0 kB Fixes: a2dc009afa9a ("powerpc/mm/book3s/radix: Add mapping statistics") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com <mailto:sachinp@linux.ibm.com>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230616110826.344417-4-aneesh.kumar@linux.ibm.com
2023-06-20powerpc/mm/book3s64: Use pmdp_ptep helper instead of typecasting.Aneesh Kumar K.V
No functional change in this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.ibm.com <mailto:sachinp@linux.ibm.com>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230616110826.344417-2-aneesh.kumar@linux.ibm.com
2023-06-19watchdog/hardlockup: define HARDLOCKUP_DETECTOR_ARCHPetr Mladek
The HAVE_ prefix means that the code could be enabled. Add another variable for HAVE_HARDLOCKUP_DETECTOR_ARCH without this prefix. It will be set when it should be built. It will make it compatible with the other hardlockup detectors. The change allows cleaning up the dependencies of the PPC_WATCHDOG and HAVE_HARDLOCKUP_DETECTOR_PERF definitions for powerpc. As a result HAVE_HARDLOCKUP_DETECTOR_PERF has the same dependencies on arm, x86, powerpc architectures. Link: https://lkml.kernel.org/r/20230616150618.6073-7-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Douglas Anderson <dianders@chromium.org> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: "David S. Miller" <davem@davemloft.net> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-06-19watchdog/hardlockup: declare arch_touch_nmi_watchdog() only in linux/nmi.hPetr Mladek
arch_touch_nmi_watchdog() needs a different implementation for various hardlockup detector implementations. And it does nothing when any hardlockup detector is not built at all. arch_touch_nmi_watchdog() is declared via linux/nmi.h. And it must be defined as an empty function when there is no hardlockup detector. It is done directly in this header file for the perf and buddy detectors. And it is done in the included asm/nmi.h for arch specific detectors. The reason probably is that the arch specific variants build the code under different conditions. For example, powerpc64/sparc64 builds the code when CONFIG_PPC_WATCHDOG is enabled. Another reason might be that these architectures define more functions in asm/nmi.h anyway. However the generic code actually knows when the function will be implemented. It happens when some full featured or the sparc64-specific hardlockup detector is built. In particular, CONFIG_HARDLOCKUP_DETECTOR can be enabled only when a generic or arch-specific full featured hardlockup detector is available. The only exception is sparc64 which can be built even when the global HARDLOCKUP_DETECTOR switch is disabled. The information about sparc64 is a bit complicated. The hardlockup detector is built there when CONFIG_HAVE_NMI_WATCHDOG is set and CONFIG_HAVE_HARDLOCKUP_DETECTOR_ARCH is not set. People might wonder whether this change really makes things easier. The motivation is: + The current logic in linux/nmi.h is far from obvious. For example, arch_touch_nmi_watchdog() is defined as {} when neither CONFIG_HARDLOCKUP_DETECTOR_COUNTS_HRTIMER nor CONFIG_HAVE_NMI_WATCHDOG is defined. + The change synchronizes the checks in lib/Kconfig.debug and in the generic code. + It is a step that will help cleaning HAVE_NMI_WATCHDOG related checks. The change should not change the existing behavior. Link: https://lkml.kernel.org/r/20230616150618.6073-4-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Douglas Anderson <dianders@chromium.org> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: "David S. Miller" <davem@davemloft.net> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
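The end result in linux/nmi.h is roughly of this shape (a sketch only; the exact Kconfig symbols involved are the ones discussed above):

    #if defined(CONFIG_HAVE_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
    void arch_touch_nmi_watchdog(void);
    #else
    static inline void arch_touch_nmi_watchdog(void) { }
    #endif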
2023-06-19powerpc: move the ARCH_DMA_MINALIGN definition to asm/cache.hCatalin Marinas
Patch series "Move the ARCH_DMA_MINALIGN definition to asm/cache.h". The ARCH_KMALLOC_MINALIGN reduction series defines a generic ARCH_DMA_MINALIGN in linux/cache.h: https://lore.kernel.org/r/20230612153201.554742-2-catalin.marinas@arm.com/ Unfortunately, this causes a duplicate definition warning for microblaze, powerpc (32-bit only) and sh as these architectures define ARCH_DMA_MINALIGN in a different file than asm/cache.h. Move the macro to asm/cache.h to avoid this issue and also bring them in line with the other architectures. This patch (of 3): The powerpc architecture defines ARCH_DMA_MINALIGN in asm/page_32.h and only if CONFIG_NOT_COHERENT_CACHE is enabled (32-bit platforms only). Move this macro to asm/cache.h to allow a generic ARCH_DMA_MINALIGN definition in linux/cache.h without redefine errors/warnings. Link: https://lkml.kernel.org/r/20230613155245.1228274-1-catalin.marinas@arm.com Link: https://lkml.kernel.org/r/20230613155245.1228274-2-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202306131053.1ybvRRhO-lkp@intel.com/ Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Michal Simek <monstr@monstr.eu> Cc: Rich Felker <dalias@libc.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-06-19powerpc/hugetlb: pte_alloc_huge()Hugh Dickins
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits that: to keep balance in future, use the recently added pte_alloc_huge() instead. huge_pte_offset() is using __find_linux_pte(), which is using pte_offset_kernel() - don't rename that to _huge, it's more complicated. Link: https://lkml.kernel.org/r/36b4e5d-954b-8569-4fe2-bd1797362441@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com> Cc: David Hildenbrand <david@redhat.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: John David Anglin <dave.anglin@bell.net> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
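The substitution is roughly the following (a sketch, not the exact powerpc hunk):

    /* was: pte_alloc_map(mm, pmd, addr), with no matching pte_unmap() */
    pte_t *pte = pte_alloc_huge(mm, pmd, addr);
    if (!pte)
            return NULL;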
2023-06-19powerpc: allow pte_offset_map[_lock]() to failHugh Dickins
In rare transient cases, not yet made possible, pte_offset_map() and pte_offset_map_lock() may not find a page table: handle appropriately. Balance successful pte_offset_map() with pte_unmap() where omitted. Link: https://lkml.kernel.org/r/54c8b578-ca9-a0f-bfd2-d72976f8d73a@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com> Cc: David Hildenbrand <david@redhat.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: John David Anglin <dave.anglin@bell.net> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
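Call sites now follow roughly this pattern (a sketch):

    pte = pte_offset_map(pmd, addr);
    if (!pte)
            return false;   /* the page table vanished under us: treat as no PTE */
    /* ... examine or modify *pte ... */
    pte_unmap(pte);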
2023-06-19powerpc: kvmppc_unmap_free_pmd() pte_offset_kernel()Hugh Dickins
kvmppc_unmap_free_pmd() use pte_offset_kernel(), like everywhere else in book3s_64_mmu_radix.c: instead of pte_offset_map(), which will come to need a pte_unmap() to balance it. But note that this is a more complex case than most: see those -EAGAINs in kvmppc_create_pte(), which is coping with kvmppc races between page table and huge entry, of the kind which we are expecting to address in pte_offset_map() - this might want to be revisited in future. Link: https://lkml.kernel.org/r/c76aa421-aec3-4cc8-cc61-4130f2e27e1@google.com Signed-off-by: Hugh Dickins <hughd@google.com> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Alexandre Ghiti <alexghiti@rivosinc.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Chris Zankel <chris@zankel.net> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com> Cc: David Hildenbrand <david@redhat.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: John David Anglin <dave.anglin@bell.net> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-06-19Merge branches 'iommu/fixes', 'arm/smmu', 'ppc/pamu', 'virtio', 'x86/vt-d', ↵Joerg Roedel
'core' and 'x86/amd' into next
2023-06-19powerpc: update ppc_save_regs to save current r1 in pt_regsAditya Gupta
ppc_save_regs() skips one stack frame while saving the CPU register states. Instead of saving current R1, it pulls the previous stack frame pointer. When vmcores caused by direct panic call (such as `echo c > /proc/sysrq-trigger`), are debugged with gdb, gdb fails to show the backtrace correctly. On further analysis, it was found that it was because of mismatch between r1 and NIP. GDB uses NIP to get current function symbol and uses corresponding debug info of that function to unwind previous frames, but due to the mismatching r1 and NIP, the unwinding does not work, and it fails to unwind to the 2nd frame and hence does not show the backtrace. GDB backtrace with vmcore of kernel without this patch: --------- (gdb) bt #0 0xc0000000002a53e8 in crash_setup_regs (oldregs=<optimized out>, newregs=0xc000000004f8f8d8) at ./arch/powerpc/include/asm/kexec.h:69 #1 __crash_kexec (regs=<optimized out>) at kernel/kexec_core.c:974 #2 0x0000000000000063 in ?? () #3 0xc000000003579320 in ?? () --------- Further analysis revealed that the mismatch occurred because "ppc_save_regs" was saving the previous stack's SP instead of the current r1. This patch fixes this by storing current r1 in the saved pt_regs. GDB backtrace with vmcore of patched kernel: -------- (gdb) bt #0 0xc0000000002a53e8 in crash_setup_regs (oldregs=0x0, newregs=0xc00000000670b8d8) at ./arch/powerpc/include/asm/kexec.h:69 #1 __crash_kexec (regs=regs@entry=0x0) at kernel/kexec_core.c:974 #2 0xc000000000168918 in panic (fmt=fmt@entry=0xc000000001654a60 "sysrq triggered crash\n") at kernel/panic.c:358 #3 0xc000000000b735f8 in sysrq_handle_crash (key=<optimized out>) at drivers/tty/sysrq.c:155 #4 0xc000000000b742cc in __handle_sysrq (key=key@entry=99, check_mask=check_mask@entry=false) at drivers/tty/sysrq.c:602 #5 0xc000000000b7506c in write_sysrq_trigger (file=<optimized out>, buf=<optimized out>, count=2, ppos=<optimized out>) at drivers/tty/sysrq.c:1163 #6 0xc00000000069a7bc in pde_write (ppos=<optimized out>, count=<optimized out>, buf=<optimized out>, file=<optimized out>, pde=0xc00000000362cb40) at fs/proc/inode.c:340 #7 proc_reg_write (file=<optimized out>, buf=<optimized out>, count=<optimized out>, ppos=<optimized out>) at fs/proc/inode.c:352 #8 0xc0000000005b3bbc in vfs_write (file=file@entry=0xc000000006aa6b00, buf=buf@entry=0x61f498b4f60 <error: Cannot access memory at address 0x61f498b4f60>, count=count@entry=2, pos=pos@entry=0xc00000000670bda0) at fs/read_write.c:582 #9 0xc0000000005b4264 in ksys_write (fd=<optimized out>, buf=0x61f498b4f60 <error: Cannot access memory at address 0x61f498b4f60>, count=2) at fs/read_write.c:637 #10 0xc00000000002ea2c in system_call_exception (regs=0xc00000000670be80, r0=<optimized out>) at arch/powerpc/kernel/syscall.c:171 #11 0xc00000000000c270 in system_call_vectored_common () at arch/powerpc/kernel/interrupt_64.S:192 -------- Nick adds: So this now saves regs as though it was an interrupt taken in the caller, at the instruction after the call to ppc_save_regs, whereas previously the NIP was there, but R1 came from the caller's caller and that mismatch is what causes gdb's dwarf unwinder to go haywire. Signed-off-by: Aditya Gupta <adityag@linux.ibm.com> Fixes: d16a58f8854b1 ("powerpc: Improve ppc_save_regs()") Reivewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230615091047.90433-1-adityag@linux.ibm.com
2023-06-19powerpc/ftrace: Disable ftrace on ppc32 if using clangNaveen N Rao
Ftrace on ppc32 expects a three instruction sequence at the beginning of each function when specifying -pg: mflr r0 stw r0,4(r1) bl _mcount This is the case with all supported versions of gcc. Clang however emits a branch to _mcount after the function prologue, similar to the pre -mprofile-kernel ABI on ppc64. This is not supported. Disable ftrace on ppc32 if using clang for now. This can be re-enabled later if clang picks up support for -fpatchable-function-entry on ppc32. Signed-off-by: Naveen N Rao <naveen@kernel.org> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://github.com/llvm/llvm-project/issues/63220 Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230609034501.407971-1-naveen@kernel.org
2023-06-19powerpc/powernv/sriov: perform null check on iov before dereferencing iovColin Ian King
Currently pointer iov is being dereferenced before the null check of iov which can lead to null pointer dereference errors. Fix this by moving the iov null check before the dereferencing. Detected using cppcheck static analysis: linux/arch/powerpc/platforms/powernv/pci-sriov.c:597:12: warning: Either the condition '!iov' is redundant or there is possible null pointer dereference: iov. [nullPointerRedundantCheck] num_vfs = iov->num_vfs; ^ Fixes: 052da31d45fc ("powerpc/powernv/sriov: De-indent setup and teardown") Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230608095849.1147969-1-colin.i.king@gmail.com
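The fix is simply to order the check before the use, roughly:

    /* check iov before dereferencing it */
    if (!iov)
            return;
    num_vfs = iov->num_vfs;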
2023-06-19powerpc/ptrace: Expose HASHKEYR register to ptraceBenjamin Gray
The HASHKEYR register contains a secret per-process key to enable unique hashes per process. In general it should not be exposed to userspace at all and a regular process has no need to know its key. However, checkpoint restore in userspace (CRIU) functionality requires that a process be able to set the HASHKEYR of another process, otherwise existing hashes on the stack would be invalidated by a new random key. Exposing HASHKEYR in this way also makes it appear in core dumps, which is a security concern. Multiple threads may share a key, for example just after a fork() call, where the kernel cannot know if the child is going to return back along the parent's stack. If such a thread is coerced into making a core dump, then the HASHKEYR value will be readable and able to be used against all other threads sharing that key, effectively undoing any protection offered by hashst/hashchk. Therefore we expose HASHKEYR to ptrace when CONFIG_CHECKPOINT_RESTORE is enabled, providing a choice of increased security or migratable ROP protected processes. This is similar to how ARM exposes its PAC keys. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Reviewed-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230616034846.311705-8-bgray@linux.ibm.com