path: root/arch/powerpc/kernel
Age  Commit message  Author
2021-02-09  powerpc/64: move account_stolen_time into its own function  (Nicholas Piggin)
This will be used by interrupt entry as well. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-38-npiggin@gmail.com
2021-02-09  powerpc/64s: reconcile interrupts in C  (Nicholas Piggin)
There is no need for this to be in asm; use the new interrupt entry wrapper. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-37-npiggin@gmail.com
2021-02-09  powerpc/64s: move context tracking exit to interrupt exit path  (Nicholas Piggin)
The interrupt handler wrapper functions are not the ideal place to maintain context tracking because after they return, the low level exit code must then determine if there are interrupts to replay, or if the task should be preempted, etc. Those paths (e.g., schedule_user) include their own exception_enter/exit pairs to fix this up but it's a bit hacky (see schedule_user() comments). Ideally context tracking will go to user mode only when there are no more interrupts or context switches or other exit processing work to handle. 64e can not do this because it does not use the C interrupt exit code. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-36-npiggin@gmail.com
2021-02-09  powerpc: handle irq_enter/irq_exit in interrupt handler wrappers  (Nicholas Piggin)
Move irq_enter/irq_exit into asynchronous interrupt handler wrappers. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-35-npiggin@gmail.com
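A minimal sketch of the shape this describes, assuming a hypothetical handler name; it only illustrates irq_enter()/irq_exit() bracketing the handler body in C rather than in asm, not the actual wrapper macros:

```c
#include <linux/hardirq.h>	/* irq_enter()/irq_exit() */
#include <asm/ptrace.h>

/* Illustrative handler body; the name is hypothetical. */
static void example_async_handler_body(struct pt_regs *regs)
{
	/* real interrupt handling work would go here */
}

/* What an asynchronous interrupt handler wrapper does once
 * irq_enter()/irq_exit() move out of asm and into the wrapper. */
void example_async_handler(struct pt_regs *regs)
{
	irq_enter();				/* enter hardirq context in C */
	example_async_handler_body(regs);	/* run the real handler */
	irq_exit();				/* leave hardirq context on the way out */
}
```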
2021-02-09  powerpc/64: context tracking move to interrupt wrappers  (Nicholas Piggin)
This moves exception_enter/exit calls to wrapper functions for synchronous interrupts. More interrupt handlers are covered by this than previously. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-33-npiggin@gmail.com
2021-02-09  powerpc/64: context tracking remove _TIF_NOHZ  (Nicholas Piggin)
Add context tracking to the system call handler explicitly, and remove _TIF_NOHZ. This improves system call performance when nohz_full is enabled. On a POWER9, gettid scv system call cost on a nohz_full CPU improves from 1129 cycles to 1004 cycles and on a housekeeping CPU from 550 cycles to 430 cycles. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-31-npiggin@gmail.com
2021-02-09  powerpc: add interrupt_cond_local_irq_enable helper  (Nicholas Piggin)
Simple helper for synchronous interrupt handlers (i.e., process-context) to enable interrupts if the interrupt was taken in an interrupts-enabled context. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-30-npiggin@gmail.com
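A minimal sketch of what such a helper looks like, using powerpc's arch_irq_disabled_regs() test; treat it as an illustration of the description above rather than the exact implementation:

```c
#include <linux/irqflags.h>	/* local_irq_enable() */
#include <asm/hw_irq.h>		/* arch_irq_disabled_regs() */
#include <asm/ptrace.h>

/* Enable interrupts in a synchronous (process-context) handler, but
 * only if the interrupted context had them enabled. */
static inline void interrupt_cond_local_irq_enable(struct pt_regs *regs)
{
	if (!arch_irq_disabled_regs(regs))
		local_irq_enable();
}
```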
2021-02-09  powerpc: convert interrupt handlers to use wrappers  (Nicholas Piggin)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-29-npiggin@gmail.com
2021-02-09  powerpc/traps: factor common code from program check and emulation assist  (Nicholas Piggin)
Move the program check handling into a function called by both, rather than have the emulation assist handler call the program check handler. This allows each of these handlers to be implemented with "interrupt wrappers" in a later change. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1612702475.d6qyt6qtfy.astroid@bobo.none
2021-02-09  powerpc: improve handling of unrecoverable system reset  (Nicholas Piggin)
If an unrecoverable system reset hits in process context, the system does not have to panic. Similar to machine check, call nmi_exit() before die(). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-26-npiggin@gmail.com
2021-02-09  powerpc/mce: ensure machine check handler always tests RI  (Nicholas Piggin)
A machine check that is handled must still check MSR[RI] for recoverability of the interrupted context. Without this patch it's possible for a handled machine check to return to a context where it has clobbered live registers. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-25-npiggin@gmail.com
2021-02-09  powerpc: introduce die_mce  (Nicholas Piggin)
As explained by commit daf00ae71dad ("powerpc/traps: restore recoverability of machine_check interrupts"), die() can't be called from within nmi_enter to nicely kill a process context that was interrupted. nmi_exit must be called first. This adds a function die_mce which takes care of this for machine check handlers. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-24-npiggin@gmail.com
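A sketch consistent with the description above, i.e. leave NMI context before killing the interrupted process context; the real function may carry additional platform-specific handling:

```c
#include <linux/hardirq.h>	/* nmi_exit() */
#include <asm/ptrace.h>

void die(const char *str, struct pt_regs *regs, long err);	/* powerpc's die() */

/* Leave NMI context first, then kill the interrupted process context. */
void die_mce(const char *str, struct pt_regs *regs, long err)
{
	nmi_exit();
	die(str, regs, err);
}
```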
2021-02-09  powerpc: add and use unknown_async_exception  (Nicholas Piggin)
This is currently the same as unknown_exception, but it will diverge after interrupt wrappers are added and code moved out of asm into the wrappers (e.g., async handlers will check FINISH_NAP). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-22-npiggin@gmail.com
2021-02-09  powerpc/perf: move perf irq/nmi handling details into traps.c  (Nicholas Piggin)
This is required in order to allow more significant differences between NMI type interrupt handlers and regular asynchronous handlers. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-20-npiggin@gmail.com
2021-02-09  powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce  (Nicholas Piggin)
These NMIs could fire any time including inside kprobe code, so exclude them from kprobes. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-19-npiggin@gmail.com
2021-02-09  powerpc/64s: move bad_page_fault handling to C  (Nicholas Piggin)
This simplifies the code, and it is also useful when introducing interrupt handler wrappers, because the wrapper functionality doesn't cope with asm entry code calling into more than one handler function. 32-bit and 64e still have some such cases, which limits some of the ways they can use interrupt wrappers. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-15-npiggin@gmail.com
2021-02-09  powerpc/64s: add do_bad_page_fault_segv handler  (Nicholas Piggin)
This function acts like an interrupt handler so it needs to follow the standard interrupt handler function signature which will be introduced in a future change. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-13-npiggin@gmail.com
2021-02-09  powerpc: bad_page_fault get registers from regs  (Nicholas Piggin)
Similar to the previous patch this makes interrupt handler function types more regular so they can be wrapped with the next patch. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-12-npiggin@gmail.com
2021-02-09  powerpc/32: transfer can avoid saving r4/r5 over trace call  (Nicholas Piggin)
Now that handlers get all registers from pt_regs, r4 and r5 are no longer live here and may be clobbered. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-11-npiggin@gmail.com
2021-02-09  powerpc: DebugException remove args  (Nicholas Piggin)
Like other interrupt handler conversions, switch to getting registers from the pt_regs argument. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-10-npiggin@gmail.com
2021-02-09  powerpc: do_break get registers from regs  (Nicholas Piggin)
Similar to the previous patch this makes interrupt handler function types more regular so they can be wrapped with the next patch. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-9-npiggin@gmail.com
2021-02-09  powerpc/fsl_booke/32: CacheLockingException remove args  (Nicholas Piggin)
Like other interrupt handler conversions, switch to getting registers from the pt_regs argument. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-8-npiggin@gmail.com
2021-02-09  powerpc: remove arguments from fault handler functions  (Nicholas Piggin)
Make mm fault handlers all just take the pt_regs * argument and load DAR/DSISR from that. Make those that return a value return long. This is done to make the function signatures match other handlers, which will help with a future patch to add wrappers. Explicit arguments could be added for performance but that would require more wrapper macro variants. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-7-npiggin@gmail.com
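An illustrative before/after of the signature change being described, with a hypothetical handler name; on powerpc the fault address and status are available as regs->dar and regs->dsisr:

```c
#include <linux/printk.h>
#include <asm/ptrace.h>

/* Before (illustrative), the asm entry passed DAR/DSISR explicitly:
 *   int do_example_fault(struct pt_regs *regs, unsigned long dar, unsigned long dsisr);
 * After, the handler takes only pt_regs, loads the values itself, and
 * returns long so the fault-handler signatures all match. */
long do_example_fault(struct pt_regs *regs)	/* hypothetical handler name */
{
	unsigned long dar = regs->dar;		/* faulting data address */
	unsigned long dsisr = regs->dsisr;	/* fault status/reason */

	pr_debug("fault at %lx, dsisr %lx\n", dar, dsisr);	/* stand-in for real handling */
	return 0;
}
```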
2021-02-09  powerpc/64s: move the hash fault handling logic to C  (Nicholas Piggin)
The fault handling still has some complex logic in asm, particularly around hash table handling. Implement most of this in C. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-6-npiggin@gmail.com
2021-02-09  powerpc/64s: move DABR match out of handle_page_fault  (Nicholas Piggin)
Similar to the 32/s change, move the test and call to the do_break handler to the DSI. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-5-npiggin@gmail.com
2021-02-09  powerpc/32s: move DABR match out of handle_page_fault  (Christophe Leroy)
handle_page_fault() has some code dedicated to book3s/32 to call do_break() when the DSI is a DABR match. On other platforms, do_break() is handled separately. Do the same for book3s/32, doing it earlier in the DSI handling. This change also avoids doing the test on ISI. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-4-npiggin@gmail.com
2021-02-09  powerpc/64s: interrupt exit improve bounding of interrupt recursion  (Nicholas Piggin)
When replaying pending soft-masked interrupts when an interrupt returns to an irqs-enabled context, there is a special case required if this was an asynchronous interrupt to avoid unbounded interrupt recursion. This case was not tested for when the asynchronous interrupt hit in user context, because a subsequent nested interrupt would by definition hit in kernel mode, which then exits via the kernel path which does test this case. There is no reason to allow this for such interrupts. While recursion is bounded at the next level, it's simpler and uses less stack to apply the replay logic consistently. This also expands the comment, which was really pretty poor and didn't explain the problem (I can say that because I wrote it). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210130130852.2952424-2-npiggin@gmail.com
2021-02-09  powerpc/pci: Move PHB discovery for PCI_DN using platforms  (Oliver O'Halloran)
Make powernv, pseries, powermac and maple use ppc_md.discover_phbs. These platforms need to be done together because they all depend on pci_dn's being created from the DT. The pci_dn contains a pointer to the relevant pci_controller so they need to be created after the pci_controller structures are available, but before PCI devices are scanned. Currently this ordering is provided by initcalls and the sequence is: 1. PHBs are discovered (setup_arch) (early boot, pre-initcalls) 2. pci_dn are created from the unflattened DT (core initcall) 3. PHBs are scanned pcibios_init() (subsys initcall) The new ppc_md.discover_phbs() function is also a core_initcall so we can't guarantee ordering between the creation of pci_controllers and the creation of pci_dn's which require a pci_controller. We could use the postcore, or core_sync initcall levels, but it's cleaner to just move the pci_dn setup into the per-PHB inits which occur inside of .discover_phb() for these platforms. This brings the boot-time path in line with the PHB hotplug path that is used for pseries DLPAR operations too. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> [mpe: Squash powermac & maple in to avoid breaking those platforms, convert memblock allocs to use kmalloc to avoid warnings] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201103043523.916109-2-oohall@gmail.com
2021-02-03  powerpc/pci: Add ppc_md.discover_phbs()  (Oliver O'Halloran)
On many powerpc platforms the discovery and initialisation of pci_controllers (PHBs) happens inside of setup_arch(). This is very early in boot (pre-initcalls) and means that we're initialising the PHB long before many basic kernel services (slab allocator, debugfs, a real ioremap) are available. On PowerNV this causes an additional problem since we map the PHB registers with ioremap(). As of commit d538aadc2718 ("powerpc/ioremap: warn on early use of ioremap()") a warning is printed because we're using the "incorrect" API to set up an MMIO mapping in early boot. The kernel does provide early_ioremap(), but that is not intended to create long-lived MMIO mappings and a separate warning is printed by generic code if early_ioremap() mappings are "leaked." This is all fixable with dumb hacks like using early_ioremap() to set up the initial mapping then replacing it with a real ioremap later on in boot, but it does raise the question: Why the hell are we setting up the PHB's this early in boot? The old and wise claim it's due to "hysterical raisins." Aside from amused grapes there doesn't appear to be any real reason to maintain the current behaviour. Already most of the newer embedded platforms perform PHB discovery in an arch_initcall and between the end of setup_arch() and the start of initcalls none of the generic kernel code does anything PCI related. On powerpc scanning PHBs occurs in a subsys_initcall so it should be possible to move the PHB discovery to a core, postcore or arch initcall. This patch adds the ppc_md.discover_phbs hook and a core_initcall stub that calls it. The core_initcalls are the earliest to be called, so this avoids any possible issues with dependencies between initcalls. This isn't just an academic issue either since on pseries and PowerNV EEH init occurs in an arch_initcall and depends on the pci_controllers being available, similarly the creation of pci_dns occurs at core_initcall_sync (i.e. between core and postcore initcalls). These problems need to be addressed separately. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> [mpe: Make discover_phbs() static] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201103043523.916109-1-oohall@gmail.com
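A minimal sketch of the hook-and-stub pattern described above: a core_initcall stub that calls the platform's PHB discovery hook if one is set.

```c
#include <linux/init.h>
#include <asm/machdep.h>	/* ppc_md */

/* core_initcall stub: call the platform's PHB discovery hook, if any. */
static int __init discover_phbs(void)
{
	if (ppc_md.discover_phbs)
		ppc_md.discover_phbs();
	return 0;
}
core_initcall(discover_phbs);
```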
2021-02-02  powerpc/64/signal: Fix regression in __kernel_sigtramp_rt64() semantics  (Raoni Fassina Firmino)
Commit 0138ba5783ae ("powerpc/64/signal: Balance return predictor stack in signal trampoline") changed the __kernel_sigtramp_rt64() VDSO and trampoline code, and introduced a regression in the way glibc's backtrace()[1] detects the signal-handler stack frame. Apart from the practical implications, __kernel_sigtramp_rt64() was a VDSO function with the semantics that it is a function you can call from userspace to end signal handling. Those semantics are no longer valid. I believe the aforementioned change affects all releases since 5.9. This patch tries to fix both the semantics and the practical aspect of __kernel_sigtramp_rt64() by returning it to the previous code, whilst keeping the intended behaviour of 0138ba5783ae by adding a new symbol to serve as the jump target from the kernel to the trampoline. Now the trampoline has two parts, a new entry point and the old return point. [1] https://lists.ozlabs.org/pipermail/linuxppc-dev/2021-January/223194.html Fixes: 0138ba5783ae ("powerpc/64/signal: Balance return predictor stack in signal trampoline") Cc: stable@vger.kernel.org # v5.9+ Signed-off-by: Raoni Fassina Firmino <raoni@linux.ibm.com> Acked-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Minor tweaks to change log formatting, add stable tag] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210201200505.iz46ubcizipnkcxe@work-tp
2021-01-31  powerpc/32s: Only build hash code when CONFIG_PPC_BOOK3S_604 is selected  (Christophe Leroy)
It is now possible to build a book3s/32 kernel only for CPUs without a hash table. Opt out of the hash-related code when CONFIG_PPC_BOOK3S_604 is not selected. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/62df436454ef06e104cc334a0859a2878d7888d5.1608274548.git.christophe.leroy@csgroup.eu
2021-01-31  powerpc/setup: Adjust six seq_printf() calls in show_cpuinfo()  (Markus Elfring)
Several pieces of information are emitted in sequence here. Improve the execution speed of this data output by making better use of the corresponding seq_*() functions. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5b62379e-a35f-4f56-f1b5-6350f76007e7@web.de
2021-01-31  powerpc/mce: Remove per cpu variables from MCE handlers  (Ganesh Goudar)
Access to per-cpu variables requires translation to be enabled on pseries machines running in hash MMU mode. Since part of the MCE handler runs in real mode, and part of the MCE handling code is shared between the pseries and powernv platforms, it becomes difficult to manage these variables differently on different platforms. So keep these variables in the paca instead of having them as per-cpu variables, to avoid complications. Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210128104143.70668-2-ganeshgr@linux.ibm.com
2021-01-31  powerpc/time: Enable sched clock for irqtime  (Pingfan Liu)
When CONFIG_IRQ_TIME_ACCOUNTING and CONFIG_VIRT_CPU_ACCOUNTING_GEN are selected, powerpc does not enable "sched_clock_irqtime" and cannot utilize irq time accounting. Like x86, powerpc does not use the sched_clock_register() interface, so it needs a dedicated call to enable_sched_clock_irqtime() to enable irq time accounting. Fixes: 518470fe962e ("powerpc: Add HAVE_IRQ_TIME_ACCOUNTING") Signed-off-by: Pingfan Liu <kernelfans@gmail.com> [mpe: Add fixes tag] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1603349479-26185-1-git-send-email-kernelfans@gmail.com
2021-01-31  powerpc/prom: Fix "ibm,arch-vec-5-platform-support" scan  (Cédric Le Goater)
The "ibm,arch-vec-5-platform-support" property is a list of pairs of bytes representing the options and values supported by the platform firmware. At boot time, Linux scans this list and activates the available features it recognizes: Radix and XIVE. A recent change modified the number of entries to loop on, and 8 bytes, i.e. 4 pairs of { option, value } entries, are always scanned. This is fine on KVM but not on PowerVM, which can advertise fewer. As a consequence, on this platform Linux reads extra entries pointing to random data, interprets these as available features and tries to activate them, leading to a firmware crash in ibm,client-architecture-support. Fix that by using the property length of "ibm,arch-vec-5-platform-support". Fixes: ab91239942a9 ("powerpc/prom: Remove VLA in prom_check_platform_support()") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210122075029.797013-1-clg@kaod.org
2021-01-31  powerpc/pci: Delete traverse_pci_dn()  (Oliver O'Halloran)
Nothing uses it. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200902035121.1762475-1-oohall@gmail.com
2021-01-31  powerpc/eeh: Add a debugfs interface to check if a driver supports recovery  (Oliver O'Halloran)
If a PCI device's current driver implements the error handling callbacks, EEH can use them to recover the device after an error occurs. For devices without the error handling callbacks we recover them by removing the device and re-scanning it so the PCI core puts the device back into a known good state. Currently there's no way for userspace to determine if the driver supports recovery or not, which makes it difficult to write automated tests for EEH. This patch addresses that by adding a debugfs interface for querying if a specific device can be recovered or not. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201103051512.919333-2-oohall@gmail.com
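The recoverability test being described boils down to whether the bound driver provides PCI error-handling callbacks; a hedged sketch of that check follows (the debugfs file name and plumbing are omitted, and the helper name is hypothetical):

```c
#include <linux/pci.h>

/* A device can be recovered in place only if the bound driver provides
 * pci_error_handlers callbacks; otherwise EEH falls back to remove + rescan. */
static bool eeh_dev_driver_can_recover(struct pci_dev *pdev)	/* hypothetical name */
{
	struct pci_driver *drv = pdev->driver;

	return drv && drv->err_handler && drv->err_handler->error_detected;
}
```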
2021-01-31  powerpc/eeh: Rework pci_dev lookup in debugfs attributes  (Oliver O'Halloran)
Pull the string -> pci_dev lookup stuff into a helper function. No functional change. Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201103051512.919333-1-oohall@gmail.com
2021-01-30  powerpc/vdso64: remove meaningless vgettimeofday.o build rule  (Masahiro Yamada)
VDSO64 is only built for the 64-bit kernel, hence vgettimeofday.o is built by the generic rule in scripts/Makefile.build. This line does not provide anything useful. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201223171142.707053-2-masahiroy@kernel.org
2021-01-30  powerpc/vdso: fix unnecessary rebuilds of vgettimeofday.o  (Masahiro Yamada)
vgettimeofday.o is unnecessarily rebuilt. Adding it to 'targets' is not enough to fix the issue. Kbuild is correctly rebuilding it because the command line is changed. PowerPC builds each vdso directory twice; first in vdso_prepare to generate vdso{32,64}-offsets.h, second as part of the ordinary build process to embed vdso{32,64}.so.dbg into the kernel. The problem shows up when CONFIG_PPC_WERROR=y due to the following line in arch/powerpc/Kbuild: subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror In the preparation stage, Kbuild directly visits the vdso directories, hence it does not inherit subdir-ccflags-y. In the second descend, Kbuild adds -Werror, which results in the command line flipping with/without -Werror. This implies a potential danger: if a more critical flag that would impact the resulting vdso flipped in the same way, the offsets recorded in the headers might be different from the real offsets in the embedded vdso images. Removing the unneeded second descend solves the problem. Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/linuxppc-dev/87tuslxhry.fsf@mpe.ellerman.id.au/ Link: https://lore.kernel.org/r/20201223171142.707053-1-masahiroy@kernel.org
2021-01-30  powerpc/iommu/debug: Add debugfs entries for IOMMU tables  (Alexey Kardashevskiy)
This adds a folder per LIOBN under /sys/kernel/debug/iommu with IOMMU table parameters. This is enabled by CONFIG_IOMMU_DEBUGFS. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210113102014.124452-1-aik@ozlabs.ru
2021-01-30  powerpc/watchdog: Declare soft_nmi_interrupt() prototype  (Cédric Le Goater)
soft_nmi_interrupt() usage requires PPC_WATCHDOG to be configured. Check the CONFIG definition to declare the prototype. It fixes this W=1 compile error : ../arch/powerpc/kernel/watchdog.c:250:6: error: no previous prototype for ‘soft_nmi_interrupt’ [-Werror=missing-prototypes] 250 | void soft_nmi_interrupt(struct pt_regs *regs) | ^~~~~~~~~~~~~~~~~~ Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-18-clg@kaod.org
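A sketch of the kind of guarded declaration the fix describes, i.e. only declare the prototype when PPC_WATCHDOG is configured (the exact header placement is assumed):

```c
/* e.g. in a powerpc header (exact placement assumed) */
struct pt_regs;

#ifdef CONFIG_PPC_WATCHDOG
void soft_nmi_interrupt(struct pt_regs *regs);
#endif
```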
2021-01-30  powerpc/optprobes: Make patch_imm64_load_insns() static  (Cédric Le Goater)
patch_imm64_load_insns() is only used locally in arch_prepare_optimized_kprobe() and does not need to be external. It fixes this W=1 compile error : ../arch/powerpc/kernel/optprobes.c:149:6: error: no previous prototype for ‘patch_imm64_load_insns’ [-Werror=missing-prototypes] 149 | void patch_imm64_load_insns(unsigned int val, kprobe_opcode_t *addr) Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-12-clg@kaod.org
2021-01-30  powerpc/optprobes: Remove unused routine patch_imm32_load_insns()  (Cédric Le Goater)
Commit 650b55b707fd ("powerpc: Add prefixed instructions to instruction data type") removed the use of patch_imm32_load_insns(). Clean it up to fix this W=1 compile error : ../arch/powerpc/kernel/optprobes.c:149:6: error: no previous prototype for ‘patch_imm32_load_insns’ [-Werror=missing-prototypes] 149 | void patch_imm32_load_insns(unsigned int val, kprobe_opcode_t *addr) Fixes: 650b55b707fd ("powerpc: Add prefixed instructions to instruction data type") Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-11-clg@kaod.org
2021-01-30  powerpc/smp: Make debugger_ipi_callback() static  (Cédric Le Goater)
debugger_ipi_callback() is a local routine used as a NMI IPI handler and does not need to be external. It fixes this W=1 compile error : ../arch/powerpc/kernel/smp.c:579:6: error: no previous prototype for ‘debugger_ipi_callback’ [-Werror=missing-prototypes] 579 | void debugger_ipi_callback(struct pt_regs *regs) | ^~~~~~~~~~~~~~~~~~~~~ Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-10-clg@kaod.org
2021-01-30  powerpc/smp: Include tick_broadcast() prototype  (Cédric Le Goater)
It fixes this W=1 compile error : ../arch/powerpc/kernel/smp.c:569:6: error: no previous prototype for ‘tick_broadcast’ [-Werror=missing-prototypes] 569 | void tick_broadcast(const struct cpumask *mask) | ^~~~~~~~~~~~~~ Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-9-clg@kaod.org
2021-01-30  powerpc/mce: Include prototypes  (Cédric Le Goater)
It fixes these W=1 compile errors : ../arch/powerpc/kernel/mce.c:591:14: error: no previous prototype for ‘machine_check_early’ [-Werror=missing-prototypes] 591 | long notrace machine_check_early(struct pt_regs *regs) | ^~~~~~~~~~~~~~~~~~~ ../arch/powerpc/kernel/mce.c:725:6: error: no previous prototype for ‘hmi_exception_realmode’ [-Werror=missing-prototypes] 725 | long hmi_exception_realmode(struct pt_regs *regs) | ^~~~~~~~~~~~~~~~~~~~~~ Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-8-clg@kaod.org
2021-01-30  powerpc/setup_64: Make some routines static  (Cédric Le Goater)
The following routines are only called by local services and do not need to be external symbols. It fixes these W=1 errors : ../arch/powerpc/kernel/setup_64.c:261:13: error: no previous prototype for ‘record_spr_defaults’ [-Werror=missing-prototypes] 261 | void __init record_spr_defaults(void) | ^~~~~~~~~~~~~~~~~~~ ../arch/powerpc/kernel/setup_64.c:1011:6: error: no previous prototype for ‘entry_flush_enable’ [-Werror=missing-prototypes] 1011 | void entry_flush_enable(bool enable) | ^~~~~~~~~~~~~~~~~~ ../arch/powerpc/kernel/setup_64.c:1023:6: error: no previous prototype for ‘uaccess_flush_enable’ [-Werror=missing-prototypes] 1023 | void uaccess_flush_enable(bool enable) | ^~~~~~~~~~~~~~~~~~~~ Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210104143206.695198-7-clg@kaod.org
2021-01-29  arch: powerpc: Stop building and using oprofile  (Viresh Kumar)
The "oprofile" user-space tools don't use the kernel OPROFILE support any more, and haven't in a long time. User-space has been converted to the perf interfaces. This commit stops building oprofile for powerpc and removes any reference to it from directories in arch/powerpc/ apart from arch/powerpc/oprofile, which will be removed in the next commit (this is broken into two commits as the size of the commit became very big, ~5k lines). Note that the member "oprofile_cpu_type" in "struct cpu_spec" isn't removed as it was also used by other parts of the code. Suggested-by: Christoph Hellwig <hch@infradead.org> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Robert Richter <rric@kernel.org> Acked-by: William Cohen <wcohen@redhat.com> Acked-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Thomas Gleixner <tglx@linutronix.de>
2021-01-24  fs: add mount_setattr()  (Christian Brauner)
This implements the missing mount_setattr() syscall. While the new mount api allows changing the properties of a superblock, there is currently no way to change the properties of a mount or a mount tree using the file descriptors which the new mount api is based on. In addition the old mount api has the restriction that mount options cannot be applied recursively. This hasn't changed since changing mount options on a per-mount basis was implemented in [1] and has been a frequent request not just for convenience but also for security reasons. The legacy mount syscall is unable to accommodate this behavior without introducing a whole new set of flags because MS_REC | MS_REMOUNT | MS_BIND | MS_RDONLY | MS_NOEXEC | [...] only apply the mount option to the topmost mount. Changing MS_REC to apply to the whole mount tree would mean introducing a significant uapi change and would likely cause significant regressions. The new mount_setattr() syscall allows recursively clearing and setting mount options in one shot. Multiple calls to change mount options requesting the same changes are idempotent: int mount_setattr(int dfd, const char *path, unsigned flags, struct mount_attr *uattr, size_t usize); Flags to modify path resolution behavior are specified in the @flags argument. Currently, AT_EMPTY_PATH, AT_RECURSIVE, AT_SYMLINK_NOFOLLOW, and AT_NO_AUTOMOUNT are supported. If useful, additional lookup flags to restrict path resolution as introduced with openat2() might be supported in the future. The mount_setattr() syscall can be expected to grow over time and is designed with extensibility in mind. It follows the extensible syscall pattern we have used with other syscalls such as openat2(), clone3(), sched_{set,get}attr(), and others. The set of mount options is passed in the uapi struct mount_attr which currently has the following layout: struct mount_attr { __u64 attr_set; __u64 attr_clr; __u64 propagation; __u64 userns_fd; }; The @attr_set and @attr_clr members are used to clear and set mount options. This way a user can e.g. request that a set of flags is to be raised, such as turning mounts readonly by raising MOUNT_ATTR_RDONLY in @attr_set, while at the same time requesting that another set of flags is to be lowered, such as removing noexec from a mount tree by specifying MOUNT_ATTR_NOEXEC in @attr_clr. Note, since the MOUNT_ATTR_<atime> values are an enum starting from 0, not a bitmap, users wanting to transition to a different atime setting cannot simply specify the atime setting in @attr_set, but must also specify MOUNT_ATTR__ATIME in the @attr_clr field. So we ensure that MOUNT_ATTR__ATIME can't be partially set in @attr_clr and that @attr_set can't have any atime bits set if MOUNT_ATTR__ATIME isn't set in @attr_clr. The @propagation field lets callers specify the propagation type of a mount tree. Propagation is a single property that has four different settings and as such is not really a flag argument but an enum. Specifically, it would be unclear what setting and clearing propagation settings in combination would amount to. The legacy mount() syscall thus forbids the combination of multiple propagation settings too. The goal is to keep the semantics of mount propagation somewhat simple as they are overly complex as it is. The @userns_fd field lets the user specify a user namespace whose idmapping becomes the idmapping of the mount. This is implemented and explained in detail in the next patch.
[1]: commit 2e4b7fcd9260 ("[PATCH] r/o bind mounts: honor mount writer counts at remount") Link: https://lore.kernel.org/r/20210121131959.646623-35-christian.brauner@ubuntu.com Cc: David Howells <dhowells@redhat.com> Cc: Aleksa Sarai <cyphar@cyphar.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: linux-fsdevel@vger.kernel.org Cc: linux-api@vger.kernel.org Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
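A hedged userspace usage sketch of the syscall described above: recursively making a mount tree read-only while clearing noexec, invoked via syscall(2) on the assumption that __NR_mount_setattr, struct mount_attr and the MOUNT_ATTR_* constants are available from the kernel's uapi headers (the /mnt path is just an example).

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>		/* AT_FDCWD */
#include <sys/syscall.h>	/* __NR_mount_setattr (assumed present) */
#include <linux/mount.h>	/* struct mount_attr, MOUNT_ATTR_* (new uapi) */

#ifndef AT_RECURSIVE
#define AT_RECURSIVE 0x8000	/* apply to the entire mount tree */
#endif

int main(void)
{
	struct mount_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.attr_set = MOUNT_ATTR_RDONLY;	/* raise: make mounts read-only */
	attr.attr_clr = MOUNT_ATTR_NOEXEC;	/* lower: drop noexec */

	/* Apply recursively to the whole mount tree under /mnt. */
	if (syscall(__NR_mount_setattr, AT_FDCWD, "/mnt", AT_RECURSIVE,
		    &attr, sizeof(attr)) < 0) {
		perror("mount_setattr");
		return 1;
	}
	return 0;
}
```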