path: root/arch/powerpc/kernel
Age  Commit message  Author
2024-03-03  powerpc/irq: Allow softirq to hardirq stack transition  (Michael Ellerman)
Allow a transition from the softirq stack to the hardirq stack when handling a hardirq. Doing so means a hardirq received while deep in softirq processing is less likely to cause a stack overflow of the softirq stack. Previously it wasn't safe to do so because irq_exit() (which initiates softirq processing) was called on the hardirq stack. That was changed in commit 1b1b6a6f4cc0 ("powerpc: handle irq_enter/ irq_exit in interrupt handler wrappers") and 1346d00e1bdf ("powerpc: Don't select HAVE_IRQ_EXIT_ON_IRQ_STACK"). The allowed transitions are now: - process stack -> hardirq stack - process stack -> softirq stack - process stack -> softirq stack -> hardirq stack Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231130125045.3080961-1-mpe@ellerman.id.au
2024-03-03  powerpc: Stop using of_root  (Christophe Leroy)
Replace all usages of of_root by of_find_node_by_path("/") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214103152.12269-5-mpe@ellerman.id.au
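A rough sketch of the substitution described above; the surrounding code is illustrative, only of_find_node_by_path() and of_node_put() are the real APIs involved:

    struct device_node *root = of_find_node_by_path("/");

    if (root) {
            /* ... query properties of the root node, as of_root users did ... */
            of_node_put(root);      /* of_find_node_by_path() takes a reference */
    }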
2024-03-03  powerpc/machdep: Define 'compatibles' property in ppc_md and use it  (Christophe Leroy)
Most probe functions that do not use the 'compatible' string do nothing more than check whether the machine is compatible with one of the strings in a NULL-terminated table of strings. Define that table of strings in the ppc_md structure and check it directly from probe_machine() instead of using ppc_md.probe() for that. Keep checking in ppc_md.probe() only for more complex probing. All .compatible could be replaced with a single-element NULL-terminated list, but that's not worth the churn; it can be done incrementally in follow-up patches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214103152.12269-4-mpe@ellerman.id.au
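A hedged sketch of the pattern: the table and helper names here are illustrative, not the actual ppc_md member, while of_machine_is_compatible() is the real helper performing the per-string check:

    /* NULL-terminated table of compatible strings for one platform. */
    static const char * const example_compatibles[] = {
            "ibm,powernv",
            NULL,
    };

    /* The generic check that probe_machine() can now perform itself. */
    static bool example_machine_matches(const char * const *compat)
    {
            for (; *compat; compat++)
                    if (of_machine_is_compatible(*compat))
                            return true;
            return false;
    }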
2024-02-22  powerpc/kprobes: Handle error returned by set_memory_rox()  (Christophe Leroy)
set_memory_rox() can fail. In case it fails, free allocated memory and return NULL. Link: https://github.com/KSPP/linux/issues/7 Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/b4907cf4339bd086abc40430d91311436cb0c18e.1708078401.git.christophe.leroy@csgroup.eu
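A minimal sketch of the error handling described above, assuming an allocation helper along these lines; module_alloc()/module_memfree() stand in for whatever the kprobes code actually uses:

    void *page = module_alloc(PAGE_SIZE);    /* illustrative allocator */

    if (!page)
            return NULL;

    if (set_memory_rox((unsigned long)page, 1)) {
            module_memfree(page);            /* free the allocation on failure ... */
            return NULL;                     /* ... and report it to the caller */
    }
    return page;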
2024-02-22  powerpc: Use user_mode() macro when possible  (Christophe Leroy)
There is a nice macro to check user mode. Use it instead of open coding anding with MSR_PR to increase readability and avoid having to comment what that anding is for. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/fbf74887dcf1f1ba9e1680fc3247cbb581b00662.1708078228.git.christophe.leroy@csgroup.eu
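The readability change boils down to the following pattern (the callee is hypothetical):

    /* Before: open-coded test of the MSR problem-state bit. */
    if (regs->msr & MSR_PR)
            handle_user_case(regs);

    /* After: the intent is self-documenting. */
    if (user_mode(regs))
            handle_user_case(regs);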
2024-02-21  powerpc/pseries: Set CPU_FTR_DBELL according to ibm,pi-features  (Nicholas Piggin)
PAPR will define a new ibm,pi-features bit which says that doorbells should not be used even on architectures where they exist. This could be because they are emulated and slower than using the interrupt controller directly for IPIs. Wire this bit into the pi-features parser to clear CPU_FTR_DBELL, and ensure CPU_FTR_DBELL is not in CPU_FTRS_ALWAYS. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240207035220.339726-2-npiggin@gmail.com
2024-02-21  powerpc/pseries: Add a clear modifier to ibm,pa/pi-features parser  (Nicholas Piggin)
When a new ibm,pa/pi-features bit is introduced that is intended to apply to existing systems and features, it may have an "inverted" meaning (i.e., bit clear => feature available; bit set => unavailable). Depending on the nature of the feature, this may give the best backward compatibility result, where old firmware will continue to have that bit clear and therefore the feature available. The 'invert' modifier was presumably introduced for this type of feature bit. However, 'invert' will also set the feature if the bit is clear, which prevents it being used in the situation where an old CPU lacks a feature that a new CPU has, and new firmware is then released that disables the feature on the new CPU by setting the bit. Adding an 'invert' entry for that feature would incorrectly enable it for the old CPU. So add a 'clear' modifier that clears the feature if the bit is set, but does not set the feature if the bit is clear. The feature is expected to be set in the cpu table. This replaces the 'invert' modifier, which has been unused since commit 7d4703455168 ("powerpc/feature: Remove CPU_FTR_NODSISRALIGN"). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240207035220.339726-1-npiggin@gmail.com
2024-02-21  powerpc: Add Power11 architected and raw mode  (Madhavan Srinivasan)
Add CPU table entries for raw and architected mode. Most fields are copied from the Power10 table entries. CPU, MMU and user (ELF_HWCAP) features are unchanged vs P10. However userspace can detect P11 because the AT_PLATFORM value changes to "power11". The logical PVR value of 0x0F000007, passed to firmware via the ibm_arch_vec, indicates the kernel can support a P11 compatible CPU, which means at least ISA v3.1 compliant. Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240221044623.1598642-1-mpe@ellerman.id.au
2024-02-21  powerpc: Remove duplicate/unnecessary ifdefs  (Shrikanth Hegde)
When an ifdef is used in the below manner, second one could be considered as duplicate. ifdef DEFINE_A ...code block... ifdef DEFINE_A <-- This is a duplicate. ...code block... endif else ifndef DEFINE_A <-- This is also duplicate. ...code block... endif endif More details about the script and methods used to find these code patterns are in cover letter of [1]. Few places in arch/powerpc where this pattern was seen: paca.h: Hunk1: Code is under check of CONFIG_PPC64 from line 13, hence the second CONFIG_PPC64 at line 166 is a duplicate. Hunk2: CONFIG_PPC_BOOK3S_64 was defined back to back. Merged the two ifdefs. asm-offsets.c: Code is under check of CONFIG_PPC64 from line 176 hence second CONFIG_PPC64 at line 249 is a duplicate. powermac/feature.c: #ifndef CONFIG_PPC64 is used at line 2066. And then in #else again #ifdef CONFIG_PPC64 is used. Which is a duplicate since in #else means CONFIG_PPC64 is defined. xmon.c: Code is under the check of CONFIG_SMP from line 521 hence the same check of CONFIG_SMP at line 646 is a duplicate. No functional change is intended here. It only aims to improve code readability. [1] https://lore.kernel.org/all/20240118080326.13137-1-sshegde@linux.ibm.com/ Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240216053016.528906-1-sshegde@linux.ibm.com
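The redundant pattern being removed looks roughly like this, using CONFIG_PPC64 purely as an example:

    #ifdef CONFIG_PPC64
            /* ... code ... */
    #ifdef CONFIG_PPC64             /* always true here, so redundant */
            /* ... code ... */
    #endif
    #endif

    /* After the cleanup the inner guard is simply dropped: */
    #ifdef CONFIG_PPC64
            /* ... code ... */
            /* ... code ... */
    #endif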
2024-02-19  powerpc: remove unused KCSAN_SANITIZE_early_64.o in Makefile  (Masahiro Yamada)
Commit 2fb857bc9f9e ("powerpc/kcsan: Add exclusions from instrumentation") added KCSAN_SANITIZE_early_64.o to arch/powerpc/kernel/Makefile, while it does not compile early_64.o. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20240216135817.2003106-1-masahiroy@kernel.org
2024-02-15  powerpc/smp: Remap boot CPU onto core 0 if >= nr_cpu_ids  (Michael Ellerman)
If nr_cpu_ids is too low to include the boot CPU, remap the boot CPU onto logical core 0. This is achieved in two stages. In early_init_dt_scan_cpus() the boot CPU is renumbered to be on logical core 0, and the original boot core's hardware ID is recorded. Later in smp_setup_cpu_maps(), if the original boot core ID is set, the logical CPU numbers on the 0th core are skipped in the normal device tree search over CPU device tree nodes. Then the search is continued until the device tree node matching the boot core is found, and those CPUs are assigned the CPU numbers starting at 0. This allows kdump kernels to be booted with low values for nr_cpu_ids to conserve memory, while also allowing the crashing/boot CPU to be any CPU. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Wen Xiong <wenxiong@us.ibm.com> Link: https://msgid.link/20231229120107.2281153-5-mpe@ellerman.id.au
2024-02-15  powerpc/smp: Factor out assign_threads()  (Michael Ellerman)
Factor out the for loop that assigns CPU numbers to the threads of a core. The function takes the next CPU number to use as input, and returns the next available CPU number after the threads have been assigned. This will allow a subsequent change to assign threads out of order. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231229120107.2281153-4-mpe@ellerman.id.au
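A sketch of the shape such a helper could take, based only on the description above; the parameter list and body are assumptions rather than the merged code, though set_cpu_present(), set_cpu_possible() and set_hard_smp_processor_id() are real powerpc/SMP helpers:

    /* Returns the next CPU number to use once this core's threads are assigned. */
    static int assign_threads(int cpu, int nthreads, bool present, const __be32 *hw_ids)
    {
            for (int i = 0; i < nthreads && cpu < nr_cpu_ids; i++, cpu++) {
                    set_cpu_present(cpu, present);
                    set_hard_smp_processor_id(cpu, be32_to_cpu(hw_ids[i]));
                    set_cpu_possible(cpu, true);
            }
            return cpu;
    }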
2024-02-15  powerpc/smp: Lookup avail once per device tree node  (Michael Ellerman)
The of_device_is_available() check only needs to be done once per device node, there's no need to repeat it for each thread. Move it out of the loop. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231229120107.2281153-3-mpe@ellerman.id.au
2024-02-15  powerpc/smp: Increase nr_cpu_ids to include the boot CPU  (Michael Ellerman)
If nr_cpu_ids is too low to include the boot CPU adjust nr_cpu_ids upward. Otherwise the kernel will BUG when trying to allocate a paca for the boot CPU and fail to boot. Cc: stable@vger.kernel.org Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231229120107.2281153-2-mpe@ellerman.id.au
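The adjustment amounts to something like the following sketch; boot_cpuid and set_nr_cpu_ids() are existing kernel symbols, but the exact placement and warning text are assumptions:

    if (boot_cpuid >= nr_cpu_ids) {
            pr_warn("Boot CPU %d >= nr_cpu_ids, adjusting nr_cpu_ids to %d\n",
                    boot_cpuid, boot_cpuid + 1);
            set_nr_cpu_ids(boot_cpuid + 1);
    }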
2024-02-15  powerpc/smp: Adjust nr_cpu_ids to cover all threads of a core  (Michael Ellerman)
If nr_cpu_ids is too low to include at least all the threads of a single core adjust nr_cpu_ids upwards. This avoids triggering odd bugs in code that assumes all threads of a core are available. Cc: stable@vger.kernel.org Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231229120107.2281153-1-mpe@ellerman.id.au
2024-01-10  Merge tag 'header_cleanup-2024-01-10' of https://evilpiepirate.org/git/bcachefs  (Linus Torvalds)
Pull header cleanups from Kent Overstreet: "The goal is to get sched.h down to a type only header, so the main thing happening in this patchset is splitting out various _types.h headers and dependency fixups, as well as moving some things out of sched.h to better locations. This is prep work for the memory allocation profiling patchset which adds new sched.h interdependencies" * tag 'header_cleanup-2024-01-10' of https://evilpiepirate.org/git/bcachefs: (51 commits) Kill sched.h dependency on rcupdate.h kill unnecessary thread_info.h include Kill unnecessary kernel.h include preempt.h: Kill dependency on list.h rseq: Split out rseq.h from sched.h LoongArch: signal.c: add header file to fix build error restart_block: Trim includes lockdep: move held_lock to lockdep_types.h sem: Split out sem_types.h uidgid: Split out uidgid_types.h seccomp: Split out seccomp_types.h refcount: Split out refcount_types.h uapi/linux/resource.h: fix include x86/signal: kill dependency on time.h syscall_user_dispatch.h: split out *_types.h mm_types_task.h: Trim dependencies Split out irqflags_types.h ipc: Kill bogus dependency on spinlock.h shm: Slim down dependencies workqueue: Split out workqueue_types.h ...
2024-01-10  Merge tag 'bcachefs-2024-01-10' of https://evilpiepirate.org/git/bcachefs  (Linus Torvalds)
Pull bcachefs updates from Kent Overstreet: - btree write buffer rewrite: instead of adding keys to the btree write buffer at transaction commit time, we now journal them with a different journal entry type and copy them from the journal to the write buffer just prior to journal write. This reduces the number of atomic operations on shared cachelines in the transaction commit path and is a signicant performance improvement on some workloads: multithreaded 4k random writes went from ~650k iops to ~850k iops. - Bring back optimistic spinning for six locks: the new implementation doesn't use osq locks; instead we add to the lock waitlist as normal, and then spin on the lock_acquired bit in the waitlist entry, _not_ the lock itself. - New ioctls: - BCH_IOCTL_DEV_USAGE_V2, which allows for new data types - BCH_IOCTL_OFFLINE_FSCK, which runs the kernel implementation of fsck but without mounting: useful for transparently using the kernel version of fsck from 'bcachefs fsck' when the kernel version is a better match for the on disk filesystem. - BCH_IOCTL_ONLINE_FSCK: online fsck. Not all passes are supported yet, but the passes that are supported are fully featured - errors may be corrected as normal. The new ioctls use the new 'thread_with_file' abstraction for kicking off a kthread that's tied to a file descriptor returned to userspace via the ioctl. - btree_paths within a btree_trans are now dynamically growable, instead of being limited to 64. This is important for the check_directory_structure phase of fsck, and also fixes some issues we were having with btree path overflow in the reflink btree. - Trigger refactoring; prep work for the upcoming disk space accounting rewrite - Numerous bugfixes :) * tag 'bcachefs-2024-01-10' of https://evilpiepirate.org/git/bcachefs: (226 commits) bcachefs: eytzinger0_find() search should be const bcachefs: move "ptrs not changing" optimization to bch2_trigger_extent() bcachefs: fix simulateously upgrading & downgrading bcachefs: Restart recovery passes more reliably bcachefs: bch2_dump_bset() doesn't choke on u64s == 0 bcachefs: improve checksum error messages bcachefs: improve validate_bset_keys() bcachefs: print sb magic when relevant bcachefs: __bch2_sb_field_to_text() bcachefs: %pg is banished bcachefs: Improve would_deadlock trace event bcachefs: fsck_err()s don't need to manually check c->sb.version anymore bcachefs: Upgrades now specify errors to fix, like downgrades bcachefs: no thread_with_file in userspace bcachefs: Don't autofix errors we can't fix bcachefs: add missing bch2_latency_acct() call bcachefs: increase max_active on io_complete_wq bcachefs: add time_stats for btree_node_read_done() bcachefs: don't clear accessed bit in btree node fill bcachefs: Add an option to control btree node prefetching ...
2024-01-09  Merge tag 'lsm-pr-20240105' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/lsm Pull security module updates from Paul Moore: - Add three new syscalls: lsm_list_modules(), lsm_get_self_attr(), and lsm_set_self_attr(). The first syscall simply lists the LSMs enabled, while the second and third get and set the current process' LSM attributes. Yes, these syscalls may provide similar functionality to what can be found under /proc or /sys, but they were designed to support multiple, simultaneaous (stacked) LSMs from the start as opposed to the current /proc based solutions which were created at a time when only one LSM was allowed to be active at a given time. We have spent considerable time discussing ways to extend the existing /proc interfaces to support multiple, simultaneaous LSMs and even our best ideas have been far too ugly to support as a kernel API; after +20 years in the kernel, I felt the LSM layer had established itself enough to justify a handful of syscalls. Support amongst the individual LSM developers has been nearly unanimous, with a single objection coming from Tetsuo (TOMOYO) as he is worried that the LSM_ID_XXX token concept will make it more difficult for out-of-tree LSMs to survive. Several members of the LSM community have demonstrated the ability for out-of-tree LSMs to continue to exist by picking high/unused LSM_ID values as well as pointing out that many kernel APIs rely on integer identifiers, e.g. syscalls (!), but unfortunately Tetsuo's objections remain. My personal opinion is that while I have no interest in penalizing out-of-tree LSMs, I'm not going to penalize in-tree development to support out-of-tree development, and I view this as a necessary step forward to support the push for expanded LSM stacking and reduce our reliance on /proc and /sys which has occassionally been problematic for some container users. Finally, we have included the linux-api folks on (all?) recent revisions of the patchset and addressed all of their concerns. - Add a new security_file_ioctl_compat() LSM hook to handle the 32-bit ioctls on 64-bit systems problem. This patch includes support for all of the existing LSMs which provide ioctl hooks, although it turns out only SELinux actually cares about the individual ioctls. It is worth noting that while Casey (Smack) and Tetsuo (TOMOYO) did not give explicit ACKs to this patch, they did both indicate they are okay with the changes. - Fix a potential memory leak in the CALIPSO code when IPv6 is disabled at boot. While it's good that we are fixing this, I doubt this is something users are seeing in the wild as you need to both disable IPv6 and then attempt to configure IPv6 labeled networking via NetLabel/CALIPSO; that just doesn't make much sense. Normally this would go through netdev, but Jakub asked me to take this patch and of all the trees I maintain, the LSM tree seemed like the best fit. - Update the LSM MAINTAINERS entry with additional information about our process docs, patchwork, bug reporting, etc. I also noticed that the Lockdown LSM is missing a dedicated MAINTAINERS entry so I've added that to the pull request. I've been working with one of the major Lockdown authors/contributors to see if they are willing to step up and assume a Lockdown maintainer role; hopefully that will happen soon, but in the meantime I'll continue to look after it. - Add a handful of mailmap entries for Serge Hallyn and myself. 
* tag 'lsm-pr-20240105' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/lsm: (27 commits) lsm: new security_file_ioctl_compat() hook lsm: Add a __counted_by() annotation to lsm_ctx.ctx calipso: fix memory leak in netlbl_calipso_add_pass() selftests: remove the LSM_ID_IMA check in lsm/lsm_list_modules_test MAINTAINERS: add an entry for the lockdown LSM MAINTAINERS: update the LSM entry mailmap: add entries for Serge Hallyn's dead accounts mailmap: update/replace my old email addresses lsm: mark the lsm_id variables are marked as static lsm: convert security_setselfattr() to use memdup_user() lsm: align based on pointer length in lsm_fill_user_ctx() lsm: consolidate buffer size handling into lsm_fill_user_ctx() lsm: correct error codes in security_getselfattr() lsm: cleanup the size counters in security_getselfattr() lsm: don't yet account for IMA in LSM_CONFIG_COUNT calculation lsm: drop LSM_ID_IMA LSM: selftests for Linux Security Module syscalls SELinux: Add selfattr hooks AppArmor: Add selfattr hooks Smack: implement setselfattr and getselfattr hooks ...
2024-01-08  Merge tag 'powerpc-6.8-1' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc updates from Michael Ellerman: - Add initial support to recognise the HeXin C2000 processor. - Add papr-vpd and papr-sysparm character device drivers for VPD & sysparm retrieval, so userspace tools can be adapted to avoid doing raw firmware calls from userspace. - Sched domains optimisations for shared processor partitions on P9/P10. - A series of optimisations for KVM running as a nested HV under PowerVM. - Other small features and fixes. Thanks to Aditya Gupta, Aneesh Kumar K.V, Arnd Bergmann, Christophe Leroy, Colin Ian King, Dario Binacchi, David Heidelberg, Geoff Levand, Gustavo A. R. Silva, Haoran Liu, Jordan Niethe, Kajol Jain, Kevin Hao, Kunwu Chan, Li kunyu, Li zeming, Masahiro Yamada, Michal Suchánek, Nathan Lynch, Naveen N Rao, Nicholas Piggin, Randy Dunlap, Sathvika Vasireddy, Srikar Dronamraju, Stephen Rothwell, Vaibhav Jain, and Zhao Ke. * tag 'powerpc-6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (96 commits) powerpc/ps3_defconfig: Disable PPC64_BIG_ENDIAN_ELF_ABI_V2 powerpc/86xx: Drop unused CONFIG_MPC8610 powerpc/powernv: Add error handling to opal_prd_range_is_valid selftests/powerpc: Fix spelling mistake "EACCESS" -> "EACCES" powerpc/hvcall: Reorder Nestedv2 hcall opcodes powerpc/ps3: Add missing set_freezable() for ps3_probe_thread() powerpc/mpc83xx: Use wait_event_freezable() for freezable kthread powerpc/mpc83xx: Add the missing set_freezable() for agent_thread_fn() powerpc/fsl: Fix fsl,tmu-calibration to match the schema powerpc/smp: Dynamically build Powerpc topology powerpc/smp: Avoid asym packing within thread_group of a core powerpc/smp: Add __ro_after_init attribute powerpc/smp: Disable MC domain for shared processor powerpc/smp: Enable Asym packing for cores on shared processor powerpc/sched: Cleanup vcpu_is_preempted() powerpc: add cpu_spec.cpu_features to vmcoreinfo powerpc/imc-pmu: Add a null pointer check in update_events_in_group() powerpc/powernv: Add a null pointer check in opal_powercap_init() powerpc/powernv: Add a null pointer check in opal_event_init() powerpc/powernv: Add a null pointer check to scom_debug_init_one() ...
2024-01-08  Merge tag 'vfs-6.8.mount' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs Pull vfs mount updates from Christian Brauner: "This contains the work to retrieve detailed information about mounts via two new system calls. This is hopefully the beginning of the end of the saga that started with fsinfo() years ago. The LWN articles in [1] and [2] can serve as a summary so we can avoid rehashing everything here. At LSFMM in May 2022 we got into a room and agreed on what we want to do about fsinfo(). Basically, split it into pieces. This is the first part of that agreement. Specifically, it is concerned with retrieving information about mounts. So this only concerns the mount information retrieval, not the mount table change notification, or the extended filesystem specific mount option work. That is separate work. Currently mounts have a 32bit id. Mount ids are already in heavy use by libmount and other low-level userspace but they can't be relied upon because they're recycled very quickly. We agreed that mounts should carry a unique 64bit id by which they can be referenced directly. This is now implemented as part of this work. The new 64bit mount id is exposed in statx() through the new STATX_MNT_ID_UNIQUE flag. If the flag isn't raised the old mount id is returned. If it is raised and the kernel supports the new 64bit mount id the flag is raised in the result mask and the new 64bit mount id is returned. New and old mount ids do not overlap so they cannot be conflated. Two new system calls are introduced that operate on the 64bit mount id: statmount() and listmount(). A summary of the api and usage can be found on LWN as well (cf. [3]) but of course, I'll provide a summary here as well. Both system calls rely on struct mnt_id_req. Which is the request struct used to pass the 64bit mount id identifying the mount to operate on. It is extensible to allow for the addition of new parameters and for future use in other apis that make use of mount ids. statmount() mimicks the semantics of statx() and exposes a set flags that userspace may raise in mnt_id_req to request specific information to be retrieved. A statmount() call returns a struct statmount filled in with information about the requested mount. Supported requests are indicated by raising the request flag passed in struct mnt_id_req in the @mask argument in struct statmount. Currently we do support: - STATMOUNT_SB_BASIC: Basic filesystem info - STATMOUNT_MNT_BASIC Mount information (mount id, parent mount id, mount attributes etc) - STATMOUNT_PROPAGATE_FROM Propagation from what mount in current namespace - STATMOUNT_MNT_ROOT Path of the root of the mount (e.g., mount --bind /bla /mnt returns /bla) - STATMOUNT_MNT_POINT Path of the mount point (e.g., mount --bind /bla /mnt returns /mnt) - STATMOUNT_FS_TYPE Name of the filesystem type as the magic number isn't enough due to submounts The string options STATMOUNT_MNT_{ROOT,POINT} and STATMOUNT_FS_TYPE are appended to the end of the struct. Userspace can use the offsets in @fs_type, @mnt_root, and @mnt_point to reference those strings easily. The struct statmount reserves quite a bit of space currently for future extensibility. This isn't really a problem and if this bothers us we can just send a follow-up pull request during this cycle. listmount() is given a 64bit mount id via mnt_id_req just as statmount(). It takes a buffer and a size to return an array of the 64bit ids of the child mounts of the requested mount. 
Userspace can thus choose to either retrieve child mounts for a mount in batches or iterate through the child mounts. For most use-cases it will be sufficient to just leave space for a few child mounts. But for big mount tables having an iterator is really helpful. Iterating through a mount table works by setting @param in mnt_id_req to the mount id of the last child mount retrieved in the previous listmount() call" Link: https://lwn.net/Articles/934469 [1] Link: https://lwn.net/Articles/829212 [2] Link: https://lwn.net/Articles/950569 [3] * tag 'vfs-6.8.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: add selftest for statmount/listmount fs: keep struct mnt_id_req extensible wire up syscalls for statmount/listmount add listmount(2) syscall statmount: simplify string option retrieval statmount: simplify numeric option retrieval add statmount(2) syscall namespace: extract show_path() helper mounts: keep list of mounts in an rbtree add unique mount ID
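A hedged userspace sketch of picking up the new 64-bit mount id via statx(), as described above. It assumes glibc's statx() wrapper and defines STATX_MNT_ID_UNIQUE itself if the installed headers predate it; the fallback value is taken from the 6.8 uapi headers and should be treated as an assumption:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    #ifndef STATX_MNT_ID_UNIQUE
    #define STATX_MNT_ID_UNIQUE 0x00004000U    /* assumed value from 6.8 headers */
    #endif

    int main(void)
    {
            struct statx stx;

            if (statx(AT_FDCWD, "/", 0, STATX_MNT_ID_UNIQUE, &stx) != 0) {
                    perror("statx");
                    return 1;
            }

            if (stx.stx_mask & STATX_MNT_ID_UNIQUE)
                    printf("unique 64-bit mount id: %llu\n",
                           (unsigned long long)stx.stx_mnt_id);
            else
                    printf("kernel returned the old mount id: %llu\n",
                           (unsigned long long)stx.stx_mnt_id);
            return 0;
    }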
2024-01-01  powerpc: Export kvm_guest static key, for bcachefs six locks  (Kent Overstreet)
bcachefs's six locks need kvm_guest, via owner_on_cpu() -> vcpu_is_preempted() -> is_kvm_guest() Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Cc: linuxppc-dev@lists.ozlabs.org Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
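The static key itself already exists in the powerpc kernel; the change is essentially a one-line export, probably along these lines (whether it is EXPORT_SYMBOL_GPL or plain EXPORT_SYMBOL is an assumption here):

    DEFINE_STATIC_KEY_FALSE(kvm_guest);     /* existing definition */
    EXPORT_SYMBOL_GPL(kvm_guest);           /* new: lets modular code reach is_kvm_guest() */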
2023-12-27  rseq: Split out rseq.h from sched.h  (Kent Overstreet)
We're trying to get sched.h down to more or less just types only, not code - rseq can live in its own header. This helps us kill the dependency on preempt.h in sched.h. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2023-12-15  Merge branch 'smp-topo' into next  (Michael Ellerman)
Merge a branch containing SMP topology updates from Srikar, purely so we can include the cover letter which has a lot of good detail here: PowerVM systems configured in shared processors mode have some unique challenges. Some device-tree properties will be missing on a shared processor. Hence some sched domains may not make sense for shared processor systems. Most shared processor systems are over-provisioned. Underlying PowerVM Hypervisor would schedule at a Big Core (SMT8) granularity. The most recent power processors support two almost independent cores. In a lightly loaded condition, it helps the overall system performance if we pack to lesser number of Big Cores. Since each thread-group is independent, running threads on both the thread-groups of a SMT8 core, should have a minimal adverse impact in non over provisioned scenarios. These changes in this patchset will not affect in the over provisioned scenario. If there are more threads than SMT domains, then asym_packing will not kick-in. System Configuration type=Shared mode=Uncapped smt=8 lcpu=96 mem=1066409344 kB cpus=96 ent=64.00 So *64 Entitled cores/ 96 Virtual processor* Scenario lscpu Architecture: ppc64le Byte Order: Little Endian CPU(s): 768 On-line CPU(s) list: 0-767 Model name: POWER10 (architected), altivec supported Model: 2.0 (pvr 0080 0200) Thread(s) per core: 8 Core(s) per socket: 16 Socket(s): 6 Hypervisor vendor: pHyp Virtualization type: para L1d cache: 6 MiB (192 instances) L1i cache: 9 MiB (192 instances) NUMA node(s): 6 NUMA node0 CPU(s): 0-7,32-39,80-87,128-135,176-183,224-231,272-279,320-327,368-375,416-423,464-471,512-519,560-567,608-615,656-663,704-711,752-759 NUMA node1 CPU(s): 8-15,40-47,88-95,136-143,184-191,232-239,280-287,328-335,376-383,424-431,472-479,520-527,568-575,616-623,664-671,712-719,760-767 NUMA node4 CPU(s): 64-71,112-119,160-167,208-215,256-263,304-311,352-359,400-407,448-455,496-503,544-551,592-599,640-647,688-695,736-743 NUMA node5 CPU(s): 16-23,48-55,96-103,144-151,192-199,240-247,288-295,336-343,384-391,432-439,480-487,528-535,576-583,624-631,672-679,720-727 NUMA node6 CPU(s): 72-79,120-127,168-175,216-223,264-271,312-319,360-367,408-415,456-463,504-511,552-559,600-607,648-655,696-703,744-751 NUMA node7 CPU(s): 24-31,56-63,104-111,152-159,200-207,248-255,296-303,344-351,392-399,440-447,488-495,536-543,584-591,632-639,680-687,728-735 ebizzy -t 32 -S 200 (5 iterations) Records per second. (Higher is better) Kernel N Min Max Median Avg Stddev %Change 6.6.0-rc3 5 3840178 4059268 3978042 3973936.6 84264.456 +patch 5 3768393 3927901 3874994 3854046 71532.926 -3.01692 >From lparstat (when the workload stabilized) Kernel %user %sys %wait %idle physc %entc lbusy app vcsw phint 6.6.0-rc3 4.16 0.00 0.00 95.84 26.06 40.72 4.16 69.88 276906989 578 +patch 4.16 0.00 0.00 95.83 17.70 27.66 4.17 78.26 70436663 119 ebizzy -t 128 -S 200 (5 iterations) Records per second. (Higher is better) Kernel N Min Max Median Avg Stddev %Change 6.6.0-rc3 5 5520692 5981856 5717709 5727053.2 176093.2 +patch 5 5305888 6259610 5854590 5843311 375917.03 2.02998 >From lparstat (when the workload stabilized) Kernel %user %sys %wait %idle physc %entc lbusy app vcsw phint 6.6.0-rc3 16.66 0.00 0.00 83.33 45.49 71.08 16.67 50.50 288778533 581 +patch 16.65 0.00 0.00 83.35 30.15 47.11 16.65 65.76 85196150 133 ebizzy -t 512 -S 200 (5 iterations) Records per second. 
(Higher is better) Kernel N Min Max Median Avg Stddev %Change 6.6.0-rc3 5 19563921 20049955 19701510 19728733 198295.18 +patch 5 19455992 20176445 19718427 19832017 304094.05 0.523521 >From lparstat (when the workload stabilized) %Kernel user %sys %wait %idle physc %entc lbusy app vcsw phint 66.6.0-rc3 6.44 0.01 0.00 33.55 94.14 147.09 66.45 1.33 313345175 621 6+patch 6.44 0.01 0.00 33.55 94.15 147.11 66.45 1.33 109193889 309 System Configuration type=Shared mode=Uncapped smt=8 lcpu=40 mem=1067539392 kB cpus=96 ent=40.00 So *40 Entitled cores/ 40 Virtual processor* Scenario lscpu Architecture: ppc64le Byte Order: Little Endian CPU(s): 320 On-line CPU(s) list: 0-319 Model name: POWER10 (architected), altivec supported Model: 2.0 (pvr 0080 0200) Thread(s) per core: 8 Core(s) per socket: 10 Socket(s): 4 Hypervisor vendor: pHyp Virtualization type: para L1d cache: 2.5 MiB (80 instances) L1i cache: 3.8 MiB (80 instances) NUMA node(s): 4 NUMA node0 CPU(s): 0-7,32-39,64-71,96-103,128-135,160-167,192-199,224-231,256-263,288-295 NUMA node1 CPU(s): 8-15,40-47,72-79,104-111,136-143,168-175,200-207,232-239,264-271,296-303 NUMA node4 CPU(s): 16-23,48-55,80-87,112-119,144-151,176-183,208-215,240-247,272-279,304-311 NUMA node5 CPU(s): 24-31,56-63,88-95,120-127,152-159,184-191,216-223,248-255,280-287,312-319 ebizzy -t 32 -S 200 (5 iterations) Records per second. (Higher is better) Kernel N Min Max Median Avg Stddev %Change 6.6.0-rc3 5 3535518 3864532 3745967 3704233.2 130216.76 +patch 5 3608385 3708026 3649379 3651596.6 37862.163 -1.42099 %Kernel user %sys %wait %idle physc %entc lbusy app vcsw phint 6.6.0-rc3 10.00 0.01 0.00 89.99 22.98 57.45 10.01 41.01 1135139 262 +patch 10.00 0.00 0.00 90.00 16.95 42.37 10.00 47.05 925561 19 ebizzy -t 64 -S 200 (5 iterations) Records per second. (Higher is better) Kernel N Min Max Median Avg Stddev %Change 6.6.0-rc3 5 4434984 4957281 4548786 4591298.2 211770.2 +patch 5 4461115 4835167 4544716 4607795.8 151474.85 0.359323 %Kernel user %sys %wait %idle physc %entc lbusy app vcsw phint 6.6.0-rc3 20.01 0.00 0.00 79.99 38.22 95.55 20.01 25.77 1287553 265 +patch 19.99 0.00 0.00 80.01 25.55 63.88 19.99 38.44 1077341 20 ebizzy -t 256 -S 200 (5 iterations) Records per second. (Higher is better) Kernel N Min Max Median Avg Stddev %Change 6.6.0-rc3 5 8850648 8982659 8951911 8936869.2 52278.031 +patch 5 8751038 9060510 8981409 8942268.4 117070.6 0.0604149 %Kernel user %sys %wait %idle physc %entc lbusy app vcsw phint 6.6.0-rc3 80.02 0.01 0.01 19.96 40.00 100.00 80.03 24.00 1597665 276 +patch 80.02 0.01 0.01 19.96 40.00 100.00 80.03 23.99 1383921 63 Observation: We are able to see Improvement in ebizzy throughput even with lesser core utilization (almost half the core utilization) in low utilization scenarios while still retaining throughput in mid and higher utilization scenarios. Note: The numbers are with Uncapped + no-noise case. In the Capped and/or noise case, due to contention on the Cores, the numbers are expected to further improve. Note: The numbers included (sched/fair: Enable group_asym_packing in find_idlest_group) https://lore.kernel.org/all/20231018155036.2314342-1-srikar@linux.vnet.ibm.com/
2023-12-15  powerpc/smp: Dynamically build Powerpc topology  (Srikar Dronamraju)
Currently there are four Powerpc specific sched topologies. These are all statically defined. However not all these topologies are used by all Powerpc systems. To avoid unnecessary degenerations by the scheduler, masks and flags are compared. However if the sched topologies are build dynamically then the code is simpler and there are greater chances of avoiding degenerations. Note: Even X86 builds its sched topologies dynamically and proposed changes are very similar to the way X86 is building its topologies. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-6-srikar@linux.vnet.ibm.com
2023-12-15  powerpc/smp: Avoid asym packing within thread_group of a core  (Srikar Dronamraju)
PowerVM Hypervisor will schedule at a core granularity. However, each core can have more than one thread_group. For better utilization in case of a shared processor, it's preferable for the scheduler to pack to the lowest core. However there is no benefit in moving a thread between two thread groups of the same core. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-5-srikar@linux.vnet.ibm.com
2023-12-15  powerpc/smp: Add __ro_after_init attribute  (Srikar Dronamraju)
There are some variables that are only updated at boot time. So add __ro_after_init attribute to such variables Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-4-srikar@linux.vnet.ibm.com
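For illustration, the attribute is applied like this; has_big_cores is chosen here purely as an example of a variable written only during boot:

    /* Set once while parsing the device tree at boot, read-only afterwards. */
    static bool has_big_cores __ro_after_init;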
2023-12-15  powerpc/smp: Disable MC domain for shared processor  (Srikar Dronamraju)
Like L2-cache info, the coregroup information used to determine MC sched domains is only present on dedicated LPARs, i.e. PowerVM doesn't export coregroup information for shared processor LPARs. Hence disable creating MC domains on shared LPAR systems. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-3-srikar@linux.vnet.ibm.com
2023-12-15  powerpc/smp: Enable Asym packing for cores on shared processor  (Srikar Dronamraju)
With shared processor LPARs, the underlying hypervisor can have more virtual cores to handle than actual physical cores. Starting with Power9, a big core (aka SMT8 core) has 2 nearly independent thread groups. On shared processor LPARs, it helps to pack threads onto a smaller number of cores so that overall system performance and utilization improve. PowerVM schedules at a big core level, hence packing to fewer cores helps. Since each thread-group is independent, running threads on both thread-groups of an SMT8 core should have a minimal adverse impact in non over-provisioned scenarios. The changes in this patchset have no effect in the over-provisioned scenario: if there are more threads than SMT domains, then asym_packing will not kick in. For example: suppose there are two 8-core shared LPARs that are actually sharing an 8-core physical pool, each running 8 threads. Consolidating those 8 threads onto 4 cores on each LPAR helps them perform better, because each LPAR gets 100% of the time to run applications and no switching is required by the hypervisor. To achieve this, enable the SD_ASYM_PACKING flag at CACHE, MC and DIE level when the system is running in shared processor mode and has big cores. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231214180720.310852-2-srikar@linux.vnet.ibm.com
2023-12-14  wire up syscalls for statmount/listmount  (Miklos Szeredi)
Wire up all archs. Signed-off-by: Miklos Szeredi <mszeredi@redhat.com> Link: https://lore.kernel.org/r/20231025140205.3586473-7-mszeredi@redhat.com Reviewed-by: Ian Kent <raven@themaw.net> Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-12-13  powerpc/ftrace: Remove nops after the call to ftrace_stub  (Naveen N Rao)
ftrace_stub is within the same CU, so there is no need for a subsequent nop instruction. Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/8ee5ec520e37d5523654bb2cd65a17512fb774e2.1702045299.git.naveen@kernel.org
2023-12-13  powerpc/rtas: Warn if per-function lock isn't held  (Nathan Lynch)
If the function descriptor has a populated lock member, then callers are required to hold it across calls. Now that the firmware activation sequence is appropriately guarded, we can warn when the requirement isn't satisfied. __do_enter_rtas_trace() gets reorganized a bit as a result of performing the function descriptor lookup unconditionally now. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-8-e9eafd0c8c6c@linux.ibm.com
2023-12-13  powerpc/rtas: Serialize firmware activation sequences  (Nathan Lynch)
Use rtas_ibm_activate_firmware_lock to prevent interleaving call sequences of the ibm,activate-firmware RTAS function, which typically requires multiple calls to complete the update. While the spec does not specifically prohibit interleaved sequences, there's almost certainly no advantage to allowing them. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-7-e9eafd0c8c6c@linux.ibm.com
2023-12-13  powerpc/rtas: Facilitate high-level call sequences  (Nathan Lynch)
On RTAS platforms there is a general restriction that the OS must not enter RTAS on more than one CPU at a time. This low-level serialization requirement is satisfied by holding a spin lock (rtas_lock) across most RTAS function invocations. However, some pseries RTAS functions require multiple successive calls to complete a logical operation. Beginning a new call sequence for such a function may disrupt any other sequences of that function already in progress. Safe and reliable use of these functions effectively requires higher-level serialization beyond what is already done at the level of RTAS entry and exit. Where a sequence-based RTAS function is invoked only through sys_rtas(), with no in-kernel users, there is no issue as far as the kernel is concerned. User space is responsible for appropriately serializing its call sequences. (Whether user space code actually takes measures to prevent sequence interleaving is another matter.) Examples of such functions currently include ibm,platform-dump and ibm,get-vpd. But where a sequence-based RTAS function has both user space and in-kernel uesrs, there is a hazard. Even if the in-kernel call sites of such a function serialize their sequences correctly, a user of sys_rtas() can invoke the same function at any time, potentially disrupting a sequence in progress. So in order to prevent disruption of kernel-based RTAS call sequences, they must serialize not only with themselves but also with sys_rtas() users, somehow. Preferably without adding more function-specific hacks to sys_rtas(). This is a prerequisite for adding an in-kernel call sequence of ibm,get-vpd, which is in a change to follow. Note that it has never been feasible for the kernel to prevent sys_rtas()-based sequences from being disrupted because control returns to user space on every call. sys_rtas()-based users of these functions have always been, and continue to be, responsible for coordinating their call sequences with other users, even those which may invoke the RTAS functions through less direct means than sys_rtas(). This is an unavoidable consequence of exposing sequence-based RTAS functions through sys_rtas(). * Add an optional mutex member to struct rtas_function. * Statically define a mutex for each RTAS function with known call sequence serialization requirements, and assign its address to the .lock member of the corresponding function table entry, along with justifying commentary. * In sys_rtas(), if the table entry for the RTAS function being called has a populated lock member, acquire it before taking rtas_lock and entering RTAS. * Kernel-based RTAS call sequences are expected to access the appropriate mutex explicitly by name. For example, a user of the ibm,activate-firmware RTAS function would do: int token = rtas_function_token(RTAS_FN_IBM_ACTIVATE_FIRMWARE); int fwrc; mutex_lock(&rtas_ibm_activate_firmware_lock); do { fwrc = rtas_call(token, 0, 1, NULL); } while (rtas_busy_delay(fwrc)); mutex_unlock(&rtas_ibm_activate_firmware_lock); There should be no perceivable change introduced here except that concurrent callers of the same RTAS function via sys_rtas() may block on a mutex instead of spinning on rtas_lock. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-6-e9eafd0c8c6c@linux.ibm.com
2023-12-13  powerpc/rtas: Move token validation from block_rtas_call() to sys_rtas()  (Nathan Lynch)
The rtas system call handler sys_rtas() delegates certain input validation steps to a helper function: block_rtas_call(). One of these steps ensures that the user-supplied token value maps to a known RTAS function. This is done by performing a "reverse" token-to-function lookup via rtas_token_to_function_untrusted() to obtain an rtas_function object. In changes to come, sys_rtas() itself will need the function descriptor for the token. To prepare: * Move the lookup and validation up into sys_rtas() and pass the resulting rtas_function pointer to block_rtas_call(), which is otherwise unconcerned with the token value. * Change block_rtas_call() to report the RTAS function name instead of the token value on validation failures, since it can now rely on having a valid function descriptor. One behavior change is that sys_rtas() now silently errors out when passed a bad token, before calling block_rtas_call(). So we will no longer log "RTAS call blocked - exploit attempt?" on invalid tokens. This is consistent with how sys_rtas() currently handles other "metadata" (nargs and nret), while block_rtas_call() is primarily concerned with validating the arguments to be passed to specific RTAS functions. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-5-e9eafd0c8c6c@linux.ibm.com
2023-12-13  powerpc/rtas: Fall back to linear search on failed token->function lookup  (Nathan Lynch)
Enabling any of the powerpc:rtas_* tracepoints at boot is likely to result in an oops on RTAS platforms. For example, booting a QEMU pseries model with 'trace_event=powerpc:rtas_input' in the command line leads to: BUG: Kernel NULL pointer dereference on read at 0x00000008 Oops: Kernel access of bad area, sig: 7 [#1] NIP [c00000000004231c] do_enter_rtas+0x1bc/0x460 LR [c00000000004231c] do_enter_rtas+0x1bc/0x460 Call Trace: do_enter_rtas+0x1bc/0x460 (unreliable) rtas_call+0x22c/0x4a0 rtas_get_boot_time+0x80/0x14c read_persistent_clock64+0x124/0x150 read_persistent_wall_and_boot_offset+0x28/0x58 timekeeping_init+0x70/0x348 start_kernel+0xa0c/0xc1c start_here_common+0x1c/0x20 (This is preceded by a warning for the failed lookup in rtas_token_to_function().) This happens when __do_enter_rtas_trace() attempts a token to function descriptor lookup before the xarray containing the mappings has been set up. Fall back to linear scan of the table if rtas_token_to_function_xarray is empty. Fixes: 24098f580e2b ("powerpc/rtas: add tracepoints around RTAS entry") Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-3-e9eafd0c8c6c@linux.ibm.com
2023-12-13  powerpc/rtas: Add for_each_rtas_function() iterator  (Nathan Lynch)
Add a convenience macro for iterating over every element of the internal function table and convert the one site that can use it. An additional user of the macro is anticipated in changes to follow. Reviewed-by: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-2-e9eafd0c8c6c@linux.ibm.com
2023-12-13  powerpc/rtas: Avoid warning on invalid token argument to sys_rtas()  (Nathan Lynch)
rtas_token_to_function() WARNs when passed an invalid token; it's meant to catch bugs in kernel-based users of RTAS functions. However, user space controls the token value passed to rtas_token_to_function() by block_rtas_call(), so user space with sufficient privilege to use sys_rtas() can trigger the warnings at will: unexpected failed lookup for token 2048 WARNING: CPU: 20 PID: 2247 at arch/powerpc/kernel/rtas.c:556 rtas_token_to_function+0xfc/0x110 ... NIP rtas_token_to_function+0xfc/0x110 LR rtas_token_to_function+0xf8/0x110 Call Trace: rtas_token_to_function+0xf8/0x110 (unreliable) sys_rtas+0x188/0x880 system_call_exception+0x268/0x530 system_call_common+0x160/0x2c4 It's desirable to continue warning on bogus tokens in rtas_token_to_function(). Currently it is used to look up RTAS function descriptors when tracing, where we know there has to have been a successful descriptor lookup by different means already, and it would be a serious inconsistency for the reverse lookup to fail. So instead of weakening rtas_token_to_function()'s contract by removing the warnings, introduce rtas_token_to_function_untrusted(), which has no opinion on failed lookups. Convert block_rtas_call() and rtas_token_to_function() to use it. Fixes: 8252b88294d2 ("powerpc/rtas: improve function information lookups") Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231212-papr-sys_rtas-vs-lockdown-v6-1-e9eafd0c8c6c@linux.ibm.com
2023-12-07  powerpc/vdso: No need to undef powerpc for 64-bit build  (Michael Ellerman)
The vdso Makefile adds -U$(ARCH) to CPPFLAGS for the vdso64.lds linker script. ARCH is always powerpc, so it becomes -Upowerpc, which means undefine the "powerpc" symbol. But the 64-bit compiler doesn't define powerpc in the first place, compare: $ gcc-5.1.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -m32 -E -dM - </dev/null | grep -w powerpc #define powerpc 1 $ gcc-5.1.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -m64 -E -dM - </dev/null | grep -w powerpc $ So there's no need to undefine it for the 64-bit linker script. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231206115548.1466874-2-mpe@ellerman.id.au
2023-12-05  powerpc/ftrace: Fix stack teardown in ftrace_no_trace  (Naveen N Rao)
Commit 41a506ef71eb ("powerpc/ftrace: Create a dummy stackframe to fix stack unwind") added use of a new stack frame on ftrace entry to fix stack unwind. However, the commit missed updating the offset used while tearing down the ftrace stack when ftrace is disabled. Fix the same. In addition, the commit missed saving the correct stack pointer in pt_regs. Update the same. Fixes: 41a506ef71eb ("powerpc/ftrace: Create a dummy stackframe to fix stack unwind") Cc: stable@vger.kernel.org # v6.5+ Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231130065947.2188860-1-naveen@kernel.org
2023-12-01  powerpc: Add PVN support for HeXin C2000 processor  (Zhao Ke)
HeXin Tech Co. has applied for a new PVN from the OpenPower Community for its new processor C2000. The OpenPower has assigned a new PVN and this newly assigned PVN is 0x0066, add pvr register related support for this PVN. Signed-off-by: Zhao Ke <ke.zhao@shingroup.cn> Link: https://discuss.openpower.foundation/t/how-to-get-a-new-pvr-for-processors-follow-power-isa/477/10 Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231129075845.57976-1-ke.zhao@shingroup.cn
2023-12-01  powerpc: Fix build error due to is_valid_bugaddr()  (Michael Ellerman)
With CONFIG_GENERIC_BUG=n the build fails with: arch/powerpc/kernel/traps.c:1442:5: error: no previous prototype for ‘is_valid_bugaddr’ [-Werror=missing-prototypes] 1442 | int is_valid_bugaddr(unsigned long addr) | ^~~~~~~~~~~~~~~~ The prototype is only defined, and the function is only needed, when CONFIG_GENERIC_BUG=y, so move the implementation under that. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231130114433.3053544-2-mpe@ellerman.id.au
2023-11-30  powerpc/suspend: Add prototype for do_after_copyback()  (Michael Ellerman)
With HIBERNATION=y the build breaks with: arch/powerpc/kernel/swsusp_64.c:14:6: error: no previous prototype for ‘do_after_copyback’ [-Werror=missing-prototypes] 14 | void do_after_copyback(void) | ^~~~~~~~~~~~~~~~~ do_after_copyback() is only called from asm, so there is no prototype, nor any header where it makes sense to place one. Just add a prototype in the C file to fix the build error. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231129131919.2528517-1-mpe@ellerman.id.au
2023-11-29  KVM: PPC: Book3S HV: Fix KVM_RUN clobbering FP/VEC user registers  (Nicholas Piggin)
Before running a guest, the host process (e.g., QEMU) FP/VEC registers are saved if they were being used, similarly to when the kernel uses FP registers. The guest values are then loaded into regs, and the host process registers will be restored lazily when it uses FP/VEC. KVM HV has a bug here: the host process registers do get saved, but the user MSR bits remain enabled, which indicates the registers are valid for the process. After they are clobbered by running the guest, this valid indication causes the host process to take on the FP/VEC register values of the guest. Fixes: 34e119c96b2b ("KVM: PPC: Book3S HV P9: Reduce mtmsrd instructions required to save host SPRs") Cc: stable@vger.kernel.org # v5.17+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231122025811.2973-1-npiggin@gmail.com
2023-11-28  powerpc: Don't clobber f0/vs0 during fp|altivec register save  (Timothy Pearson)
During floating point and vector save to thread data f0/vs0 are clobbered by the FPSCR/VSCR store routine. This has been observed to lead to userspace register corruption and application data corruption with io-uring. Fix it by restoring f0/vs0 after the FPSCR/VSCR store has completed for all the FP, altivec, VMX register save paths. Tested under QEMU in kvm mode, running on a Talos II workstation with dual POWER9 DD2.2 CPUs. Additional detail (mpe): Typically save_fpu() is called from __giveup_fpu() which saves the FP regs and also *turns off FP* in the task's MSR, meaning the kernel will reload the FP regs from the thread struct before letting the task use FP again. So in that case save_fpu() is free to clobber f0 because the FP regs no longer hold live values for the task. There is another case though, which is the path via: sys_clone() ... copy_process() dup_task_struct() arch_dup_task_struct() flush_all_to_thread() save_all() That path saves the FP regs but leaves them live. That's meant as an optimisation for a process that's using FP/VSX and then calls fork(); leaving the regs live means the parent process doesn't have to take a fault after the fork to get its FP regs back. The optimisation was added in commit 8792468da5e1 ("powerpc: Add the ability to save FPU without giving it up"). That path does clobber f0, but f0 is volatile across function calls, and typically programs reach copy_process() from userspace via a syscall wrapper function. So in normal usage f0 being clobbered across a syscall doesn't cause visible data corruption. But there is now a new path, because io-uring can call copy_process() via create_io_thread() from the signal handling path. That's OK if the signal is handled as part of syscall return, but it's not OK if the signal is handled due to some other interrupt. That path is: interrupt_return_srr_user() interrupt_exit_user_prepare() interrupt_exit_user_prepare_main() do_notify_resume() get_signal() task_work_run() create_worker_cb() create_io_worker() copy_process() dup_task_struct() arch_dup_task_struct() flush_all_to_thread() save_all() if (tsk->thread.regs->msr & MSR_FP) save_fpu() # f0 is clobbered and potentially live in userspace Note the above discussion applies equally to save_altivec(). Fixes: 8792468da5e1 ("powerpc: Add the ability to save FPU without giving it up") Cc: stable@vger.kernel.org # v4.6+ Closes: https://lore.kernel.org/all/480932026.45576726.1699374859845.JavaMail.zimbra@raptorengineeringinc.com/ Closes: https://lore.kernel.org/linuxppc-dev/480221078.47953493.1700206777956.JavaMail.zimbra@raptorengineeringinc.com/ Tested-by: Timothy Pearson <tpearson@raptorengineering.com> Tested-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Timothy Pearson <tpearson@raptorengineering.com> [mpe: Reword change log to describe exact path of corruption & other minor tweaks] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/1921539696.48534988.1700407082933.JavaMail.zimbra@raptorengineeringinc.com
2023-11-28  powerpc/rtas_pci: rename and properly expose config access APIs  (Nathan Lynch)
The rtas_read_config() and rtas_write_config() functions in kernel/rtas_pci.c have external linkage and two users in arch/powerpc: the rtas_pci code itself and the pseries platform's "enhanced error handling" (EEH) support code. The prototypes for these functions in asm/ppc-pci.h have until now been guarded by CONFIG_EEH since the only external caller is the pseries EEH code. However, this presumably has always generated warnings when built with !CONFIG_EEH and -Wmissing-prototypes: arch/powerpc/kernel/rtas_pci.c:46:5: error: no previous prototype for function 'rtas_read_config' [-Werror,-Wmissing-prototypes] 46 | int rtas_read_config(struct pci_dn *pdn, int where, int size, u32 *val) arch/powerpc/kernel/rtas_pci.c:98:5: error: no previous prototype for function 'rtas_write_config' [-Werror,-Wmissing-prototypes] 98 | int rtas_write_config(struct pci_dn *pdn, int where, int size, u32 val) The introduction of commit c6345dfa6e3e ("Makefile.extrawarn: turn on missing-prototypes globally") forces the issue. The efika and chrp platform code have (static) functions with the same names but different signatures. We may as well eliminate the potential for conflicts and confusion by renaming the globally visible versions as their prototypes get moved out of the CONFIG_EEH-guarded region; their current names are too generic anyway. Since they operate on objects of the type 'struct pci_dn *', give them the slightly more verbose prefix "rtas_pci_dn_" and fix up all the call sites. Fixes: c6345dfa6e3e ("Makefile.extrawarn: turn on missing-prototypes globally") Reported-by: Linux Kernel Functional Testing <lkft@linaro.org> Closes: https://lore.kernel.org/linuxppc-dev/CA+G9fYt0LLXtjSz+Hkf3Fhm-kf0ZQanrhUS+zVZGa3O+Wt2+vg@mail.gmail.com/ Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231127-rtas-pci-rw-config-v1-1-385d29ace3df@linux.ibm.com
2023-11-27  powerpc: Remove orphaned reg_a2.h  (Michael Ellerman)
Commit fb5a515704d7 ("powerpc: Remove platforms/wsp and associated pieces") removed the A2 CPU support, but missed removal of reg_a2.h. None of the defines contained in it are used, with the exception of the SPRN_TEN* values, but they are also defined in reg_booke.h. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231113043947.1931831-1-mpe@ellerman.id.au
2023-11-27  powerpc: Make cpu_spec __ro_after_init  (Michael Ellerman)
The cpu_spec is a struct holding various information about the CPU the kernel is executing on. It's populated early in boot and must not change after that. In particular the cpu_features and mmu_features hold the set of discovered CPU/MMU features and are used to set static keys for each feature, and do binary patching of assembly. So any change to the cpu_features/mmu_features later in boot will not be reflected in the state of the static keys or patched code. There is already logic to check that cpu_features/mmu_features don't change, see check_features() in feature-fixups.c. But as another layer of protection the entire cpu_spec should be read only after init, annotate it as such. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231025012452.1985680-1-mpe@ellerman.id.au
2023-11-21  powerpc/rtas: Remove trailing space  (Nathan Lynch)
Use scripts/cleanfile to remove instances of trailing space in the core RTAS code and header. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231106-rtas-trivial-v1-6-61847655c51f@linux.ibm.com
2023-11-21  powerpc/rtas: Remove unused rtas_service_present()  (Nathan Lynch)
rtas_service_present() has no more users. rtas_function_implemented() is now the appropriate API for determining whether a given RTAS function is available to call. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231106-rtas-trivial-v1-4-61847655c51f@linux.ibm.com
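The replacement API reads like this; RTAS_FN_IBM_ACTIVATE_FIRMWARE is picked here purely as an example function constant:

    if (rtas_function_implemented(RTAS_FN_IBM_ACTIVATE_FIRMWARE)) {
            /* firmware provides ibm,activate-firmware, safe to use it */
    }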
2023-11-12  LSM: wireup Linux Security Module syscalls  (Casey Schaufler)
Wireup lsm_get_self_attr, lsm_set_self_attr and lsm_list_modules system calls. Signed-off-by: Casey Schaufler <casey@schaufler-ca.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: linux-api@vger.kernel.org Reviewed-by: Mickaël Salaün <mic@digikod.net> [PM: forward ported beyond v6.6 due merge window changes] Signed-off-by: Paul Moore <paul@paul-moore.com>