path: root/arch/powerpc
2018-05-10  powerpc/xive: fix hcall H_INT_RESET to support long busy delays  (Cédric Le Goater)
The hcall H_INT_RESET can take some time to complete and in such cases it returns H_LONG_BUSY_* codes requiring the machine to sleep for a while before retrying. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
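A minimal sketch of the retry pattern this fix describes, assuming the usual pSeries hcall helpers (plpar_hcall_norets(), H_IS_LONG_BUSY(), get_longbusy_msecs()); the wrapper name below is illustrative, not the actual function touched by the patch:

    #include <linux/delay.h>
    #include <asm/hvcall.h>
    #include <asm/plpar_wrappers.h>

    /* Retry H_INT_RESET while the hypervisor reports a long-busy condition. */
    static long xive_int_reset_sketch(unsigned long flags)
    {
            long rc;

            do {
                    rc = plpar_hcall_norets(H_INT_RESET, flags);
                    if (H_IS_LONG_BUSY(rc))
                            msleep(get_longbusy_msecs(rc));
            } while (H_IS_LONG_BUSY(rc));

            return rc;
    }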
2018-05-10  powerpc/64/kexec: fix race in kexec when XIVE is shutdown  (Cédric Le Goater)
The kexec_state KEXEC_STATE_IRQS_OFF barrier is reached by all secondary CPUs before the kexec_cpu_down() operation is called on secondaries. This can raise conflicts and provoke errors in the XIVE hcalls when XIVE is shut down with H_INT_RESET on the primary CPU. To synchronize the kexec_cpu_down() operations and make sure the secondaries have completed their task before the primary starts doing the same, let's move the primary kexec_cpu_down() after the KEXEC_STATE_REAL_MODE barrier. This change to the ending sequence of kexec is mostly useful on the pseries platform, but it also impacts the powernv, ps3 and 85xx platforms. powernv can be easily tested and fixed, but some caution is required for the other two. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc/config: powernv_defconfig updates  (Nicholas Piggin)
For consideration:
  * Add NVDIMM support - Enables greater testing, mambo device.
  * Add IPv6 support built in + additional modules - Because it's 2018 maan.
  * Add DEFERRED_STRUCT_PAGE_INIT - Let's see what breaks.
  * Add PPC_MEMTRACE - Small powernv debugfs driver for getting hardware traces.
  * Add MEMORY_FAILURE - Machine check exceptions can now drive memory failure.
  * Turn on FANOTIFY - This is the current filesystem notification feature.
  * Turn on SCOM_DEBUGFS - Handy for hardware/firmware debugging, security risk?
  * Turn on async SCSI scanning - Let's see what breaks.
  * Add MLX5 driver as a module - Popular demand.
  * Add CRYPTO_CRCT10DIF_VPMSUM - POWER8 T10DIF acceleration.
  * Make a bunch of USB hid drivers modules.
  * Make SCSI SG, SR, and FC modules - FC is huge.
  * Make video drivers except AST GPU modules - Also huge.
  * Make PCI serial driver a module - Uncommon.
  * Make more things modules: NFS FS, RAM disk, netconsole, MS-DOS fs.
  * Get rid of /dev/port - Not used.
  * Remove PPS and PTP subsystems - Unusual.
  * Remove legacy BSD ttys - Long dead.
  * Remove IDE - Deprecated and replaced with ATA.
  * Remove WIRELESS - Until we get POWER9 laptops.
  * Remove RAW - Long deprecated in favour of direct IO.
  * Remove floppy, parport, and PS2 input devices - not supported.
  * Remove virtio drivers, ballooning - We're host only.
  * Remove PPP - Sorry Paulus.
This results in a significantly smaller vmlinux:
      text     data      bss       dec  filename
  13143383  5277944  1317856  19739183  vanilla
  12263281  4852074  1341720  18457075  patched
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc: wii_defconfig: Disable BCMA support  (Jonathan Neuschäfer)
The B43 driver only needs CONFIG_SSB to support the WLAN card found in the Wii. Configure it accordingly, and disable BCMA bus support to save a bit of space. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc: wii_defconfig: Enable Wii SDHCI driver  (Jonathan Neuschäfer)
This allows access to the SD card and the BCM4318 Wifi module. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc: wii_defconfig: Enable GPIO-related options  (Jonathan Neuschäfer)
Now that there's a GPIO driver for the Wii, let's enable the following drivers:
  - the GPIO driver itself
  - gpio-keys
  - gpio-poweroff
  - gpio-leds and a few LED triggers
Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc: wii_defconfig: Disable Ethernet driver support code  (Jonathan Neuschäfer)
The Wii doesn't have built-in Ethernet and USB Ethernet adapters are in a different menu. Disable CONFIG_ETHERNET to save some space in support code for Ethernet drivers. Note that this patch doesn't disable any Ethernet drivers, because they are not enabled by default. Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc/watchdog: fix typo 'can by' to 'can be'  (Wolfram Sang)
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-10  powerpc/pseries: hcall_exit tracepoint retval should be signed  (Michael Ellerman)
The hcall_exit() tracepoint has retval defined as unsigned long. That leads to humorous results like:
  bash-3686  [009] d..2   854.134094: hcall_entry: opcode=24
  bash-3686  [009] d..2   854.134095: hcall_exit: opcode=24 retval=18446744073709551609
It's normal for some hcalls to return negative values; displaying them as unsigned isn't very helpful. So change it to signed.
  bash-3711  [001] d..2   471.691008: hcall_entry: opcode=24
  bash-3711  [001] d..2   471.691008: hcall_exit: opcode=24 retval=-7
Which can be more easily compared to H_NOT_FOUND in hvcall.h Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
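A standalone illustration (plain userspace C, not kernel code) of why the field needs to be signed: the huge number in the first trace above is just -7 (H_NOT_FOUND) printed through an unsigned 64-bit type:

    #include <stdio.h>

    int main(void)
    {
            long retval = -7;       /* e.g. H_NOT_FOUND */

            /* Prints 18446744073709551609 on a 64-bit machine. */
            printf("unsigned: %lu\n", (unsigned long)retval);
            /* Prints -7, which is easy to match against hvcall.h. */
            printf("signed:   %ld\n", retval);
            return 0;
    }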
2018-05-10  powerpc/pkeys: Drop private VM_PKEY definitions  (Michael Ellerman)
Now that we've updated the generic headers to support 5 PKEY bits for powerpc we don't need our own #defines in arch code. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-09  swiotlb: move the SWIOTLB config symbol to lib/Kconfig  (Christoph Hellwig)
This way we have one central definition of it, and users can select it as needed. The new option is not user visible, which is the behavior it had in most architectures, with a few notable exceptions:
  - On x86_64 and mips/loongson3 it used to be user selectable, but defaulted to y. It now is unconditional, which seems like the right thing for 64-bit architectures without guaranteed availability of IOMMUs.
  - On powerpc the symbol is user selectable and defaults to n, but many boards select it. This change assumes no working setup required a manual selection, but if that turns out to be wrong we'll have to add another select statement or two for the respective boards.
Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-09  arch: define the ARCH_DMA_ADDR_T_64BIT config symbol in lib/Kconfig  (Christoph Hellwig)
Define this symbol if the architecture either uses 64-bit pointers or has PHYS_ADDR_T_64BIT set. This covers 95% of the old arch magic. We only need an additional select for Xen on ARM (why anyway?), and we now always set ARCH_DMA_ADDR_T_64BIT on mips boards with 64-bit physical addressing instead of only doing it when highmem is set. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
2018-05-09  arch: remove the ARCH_PHYS_ADDR_T_64BIT config symbol  (Christoph Hellwig)
Instead, select PHYS_ADDR_T_64BIT directly for 32-bit architectures that need a 64-bit phys_addr_t type. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: James Hogan <jhogan@kernel.org>
2018-05-09  scatterlist: move the NEED_SG_DMA_LENGTH config symbol to lib/Kconfig  (Christoph Hellwig)
This way we have one central definition of it, and users can select it as needed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
2018-05-09  iommu-helper: move the IOMMU_HELPER config symbol to lib/  (Christoph Hellwig)
This way we have one central definition of it, and users can select it as needed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
2018-05-09  iommu-helper: mark iommu_is_span_boundary as inline  (Christoph Hellwig)
This avoids selecting IOMMU_HELPER just for this function. And since we only use it once or twice in normal builds, this is often even a size reduction. Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-05-09  mm/pkeys, powerpc, x86: Provide an empty vma_pkey() in linux/pkeys.h  (Michael Ellerman)
Consolidate the pkey handling by providing a common empty definition of vma_pkey() in pkeys.h when CONFIG_ARCH_HAS_PKEYS=n. This also removes another entanglement of pkeys.h and asm/mmu_context.h. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Ram Pai <linuxram@us.ibm.com> Reviewed-by: Dave Hansen <dave.hansen@intel.com>
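The common fallback is likely shaped as below (a sketch of the CONFIG_ARCH_HAS_PKEYS=n stub in include/linux/pkeys.h, reduced to the one helper mentioned above):

    #ifndef CONFIG_ARCH_HAS_PKEYS
    /* No protection keys: every VMA reports key 0. */
    static inline int vma_pkey(struct vm_area_struct *vma)
    {
            return 0;
    }
    #endif /* !CONFIG_ARCH_HAS_PKEYS */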
2018-05-09  mm, powerpc, x86: define VM_PKEY_BITx bits if CONFIG_ARCH_HAS_PKEYS is enabled  (Ram Pai)
The VM_PKEY_BITx bits are defined only if CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS is enabled. Powerpc also needs these bits. Hence let's define the VM_PKEY_BITx bits for any architecture that enables CONFIG_ARCH_HAS_PKEYS. Reviewed-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Ram Pai <linuxram@us.ibm.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
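A sketch of the resulting generic definition; the VM_HIGH_ARCH_* names are the usual high VM flag slots and the powerpc-only fifth bit follows the "5 PKEY bits" companion change listed above, but treat the details as illustrative:

    #ifdef CONFIG_ARCH_HAS_PKEYS
    # define VM_PKEY_SHIFT  VM_HIGH_ARCH_BIT_0
    # define VM_PKEY_BIT0   VM_HIGH_ARCH_0  /* a key is a 4-bit value on x86 ... */
    # define VM_PKEY_BIT1   VM_HIGH_ARCH_1
    # define VM_PKEY_BIT2   VM_HIGH_ARCH_2
    # define VM_PKEY_BIT3   VM_HIGH_ARCH_3
    # ifdef CONFIG_PPC
    #  define VM_PKEY_BIT4  VM_HIGH_ARCH_4  /* ... and a 5-bit value on powerpc */
    # else
    #  define VM_PKEY_BIT4  0
    # endif
    #endif /* CONFIG_ARCH_HAS_PKEYS */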
2018-05-08  dma-debug: remove CONFIG_HAVE_DMA_API_DEBUG  (Christoph Hellwig)
There is no arch specific code required for dma-debug, so there is no need to opt into the support either. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2018-05-08  dma-debug: move initialization to common code  (Christoph Hellwig)
Most mainstream architectures are using 65536 entries, so let's stick to that. If someone is really desperate to override it, that can still be done through <asm/dma-mapping.h>, but I'd rather see a really good rationale for that. dma_debug_init is now called as a core_initcall, which for many architectures means much earlier, and provides dma-debug functionality earlier in the boot process. This should be safe as it only relies on the memory allocator already being available. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2018-05-08  powerpc/pseries: Fix CONFIG_NUMA=n build  (Michael Ellerman)
The build is failing with CONFIG_NUMA=n and some compiler versions:
  arch/powerpc/platforms/pseries/hotplug-cpu.o: In function `dlpar_online_cpu':
  hotplug-cpu.c:(.text+0x12c): undefined reference to `timed_topology_update'
  arch/powerpc/platforms/pseries/hotplug-cpu.o: In function `dlpar_cpu_remove':
  hotplug-cpu.c:(.text+0x400): undefined reference to `timed_topology_update'
Fix it by moving the empty version of timed_topology_update() into the existing #ifdef block, which has the right guard of SPLPAR && NUMA. Fixes: cee5405da402 ("powerpc/hotplug: Improve responsiveness of hotplug change") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
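The shape of the fix, per the description above (the guard names come from the commit text; the surrounding header layout is a sketch):

    #if defined(CONFIG_PPC_SPLPAR) && defined(CONFIG_NUMA)
    extern int timed_topology_update(int nsecs);
    #else
    /* Empty stub so CONFIG_NUMA=n builds still link. */
    static inline int timed_topology_update(int nsecs)
    {
            return 0;
    }
    #endif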
2018-05-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Minor conflict, a CHECK was placed into an if() statement in net-next, whilst a newline was added to that CHECK call in 'net'. Thanks to Daniel for the merge resolution. Signed-off-by: David S. Miller <davem@davemloft.net>
2018-05-07  powerpc/trace/syscalls: Update syscall name matching logic to account for ppc_ prefix  (Naveen N. Rao)
Some syscall entry functions on powerpc are prefixed with ppc_/ppc32_/ppc64_ rather than the usual sys_/__se_sys prefix. fork(), clone(), swapcontext() are some examples of syscalls with such entry points. We need to match against these names when initializing ftrace syscall tracing. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
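An illustrative sketch of the matching rule after this change, written as the usual arch_syscall_match_sym_name() hook for ELF ABIv1 symbols (which carry a leading dot); the exact offsets and the __se_sys handling follow the description here and in the commit below, but the real code may differ in detail:

    static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
    {
            /* "name" is e.g. "sys_fork"; "sym" may be ".sys_fork",
             * ".__se_sys_fork", ".ppc_fork", ".ppc32_..." or ".ppc64_...". */
            return !strcmp(sym + 1, name) ||
                   (!strncmp(sym, ".__se_", 6) && !strcmp(sym + 6, name)) ||
                   (!strncmp(sym, ".ppc_", 5) && !strcmp(sym + 5, name + 4)) ||
                   (!strncmp(sym, ".ppc32_", 7) && !strcmp(sym + 7, name + 4)) ||
                   (!strncmp(sym, ".ppc64_", 7) && !strcmp(sym + 7, name + 4));
    }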
2018-05-07  powerpc/trace/syscalls: Update syscall name matching logic  (Naveen N. Rao)
On powerpc64 ABIv1, we are enabling syscall tracing for only ~20 syscalls. This is due to commit e145242ea0df6 ("syscalls/core, syscalls/x86: Clean up syscall stub naming convention") which has changed the syscall entry wrapper prefix from "SyS" to "__se_sys". Update the logic for ABIv1 to not just skip the initial dot, but also the "__se_sys" prefix. Fixes: commit e145242ea0df6 ("syscalls/core, syscalls/x86: Clean up syscall stub naming convention") Reported-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-07  powerpc/64: Remove unused paca->soft_enabled  (Michael Ellerman)
In commit 4e26bc4a4ed6 ("powerpc/64: Rename soft_enabled to irq_soft_mask") we renamed paca->soft_enabled. But then in commit 8e0b634b1327 ("powerpc/64s: Do not allocate lppaca if we are not virtualized") we added it back. Oops. This happened because the two patches were in flight at the same time and rebased vs each other multiple times, and we missed it in review. Fixes: 8e0b634b1327 ("powerpc/64s: Do not allocate lppaca if we are not virtualized") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-07Revert "powerpc/powernv: Increase memory block size to 1GB on radix"Balbir Singh
This commit was a stop-gap to prevent crashes on hotunplug, caused by the mismatch between the 1G mappings used for the linear mapping and the memory block size. Those issues are now resolved because we split the linear mapping at hotunplug time if necessary, as implemented in commit 4dd5f8a99e79 ("powerpc/mm/radix: Split linear mapping on hot-unplug"). Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Tested-by: Rashmica Gupta <rashmica.g@gmail.com> Tested-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-07  powerpc/nohash: Use IS_ENABLED() to simplify __set_pte_at()  (Christophe Leroy)
By using IS_ENABLED() we can simplify __set_pte_at() by removing redundant *ptep = pte. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
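A simplified sketch of the pattern (the real __set_pte_at() handles more cases, and the special branch here is reduced to a comment). The point is that the plain store is written once instead of being duplicated in every #ifdef branch:

    static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
                                    pte_t *ptep, pte_t pte, int percpu)
    {
            if (IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_PTE_64BIT) && !percpu) {
                    /* 32-bit kernel with 64-bit PTEs: store the two halves
                     * in order with a barrier in between (elided in this sketch). */
                    return;
            }
            *ptep = pte;
    }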
2018-05-07  powerpc/nohash: Remove _PAGE_BUSY  (Christophe Leroy)
_PAGE_BUSY is always 0, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-07  powerpc/nohash: Remove hash related code from nohash headers.  (Christophe Leroy)
When the nohash and book3s headers were split, some hash related code remained in the nohash headers. This patch removes it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Duplicate pte_young() to avoid circular header dependency] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-07  PCI: remove PCI_DMA_BUS_IS_PHYS  (Christoph Hellwig)
This was used by the ide, scsi and networking code in the past to determine if they should bounce payloads. Now that DMA mapping always has to support DMA to all physical memory (thanks to swiotlb for non-IOMMU systems), there is no need for this crude hack any more. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv) Reviewed-by: Jens Axboe <axboe@kernel.dk>
2018-05-03  bpf, ppc64: remove ld_abs/ld_ind  (Daniel Borkmann)
Since LD_ABS/LD_IND instructions are now removed from the core and reimplemented through a combination of inlined BPF instructions and a slow-path helper, we can get rid of the complexity from ppc64 JIT. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Tested-by: Sandipan Das <sandipan@linux.vnet.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-05-03  powerpc/fadump: Unregister fadump on kexec down path.  (Mahesh Salgaonkar)
Unregister fadump on the kexec down path, otherwise the fadump registration in the new kexec-ed kernel complains that fadump is already registered. This makes the new kernel continue using the fadump registered by the previous kernel, which may lead to invalid vmcore generation. Fix this by unregistering fadump in fadump_cleanup(), which is called during the kexec path, so that the new kernel can register fadump with new, valid values. Fixes: b500afff11f6 ("fadump: Invalidate registration and release reserved memory for general use.") Cc: stable@vger.kernel.org # v3.4+ Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc/fadump: Do not use hugepages when fadump is active  (Hari Bathini)
The FADump capture kernel boots in a restricted memory environment, preserving the context of the previous kernel in order to save the vmcore. Supporting hugepages in such an environment makes things unnecessarily complicated, as hugepages need memory set aside for them, which means most of the capture kernel's memory ends up supporting hugepages. In most cases, this results in out-of-memory issues while booting the FADump capture kernel. But hugepages are not of much use in a capture kernel whose only job is to save the vmcore. So disabling hugepage support when fadump is active is a reliable solution to the out-of-memory issues. Introduce a flag variable to disable HugeTLB support when fadump is active. Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
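A sketch of the flag-based approach described above; the flag name hugetlb_disabled and the check site are assumptions made for illustration, since the commit text does not name them:

    /* Set early by the fadump code when a capture kernel is booting. */
    bool hugetlb_disabled;

    static int __init hugetlbpage_init(void)
    {
            if (hugetlb_disabled)
                    return 0;       /* skip all hugepage setup */

            /* ... normal hugepage setup ... */
            return 0;
    }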
2018-05-03  powerpc/fadump: exclude memory holes while reserving memory in second kernel  (Mahesh Salgaonkar)
During early boot after the crash, the second kernel reserves the rest of the memory above the boot memory size to make sure it does not touch any of the dump memory area. It uses memblock_reserve(), which reserves the specified memory region irrespective of any memory holes within that region. The previous kernel may have hot-removed some of its memory, leaving memory holes behind. In such cases the fadump kernel reports an incorrect number of reserved pages through the arch_reserved_kernel_pages() hook, causing the kernel to hang or panic. Fix this by excluding memory holes while reserving the rest of the memory above the boot memory size during the second kernel's boot after a crash. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc: remove retired sbc834x support  (Paul Gortmaker)
I no longer have a functional version of this board for even the most basic sanity boot testing, and they have not been available for purchase for quite a few years now. There is no point in continuing to burden testing coverage that walks all the possible defconfigs, so with all the above in mind, it makes sense to remove it. Of course it will remain in the git history for anyone who happens to stumble on one and wants to tinker with it. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc: Only support DYNAMIC_FTRACE not static  (Michael Ellerman)
We've had dynamic ftrace support for over 9 years since Steve first wrote it, all the distros use dynamic, and static is basically untested these days, so drop support for static ftrace. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/ftrace: Implement support for ftrace_regs_caller()  (Naveen N. Rao)
With -mprofile-kernel, we always save the full register state in ftrace_caller(). While this works, this is inefficient if we're not interested in the register state, such as when we're using the function tracer. Rename the existing ftrace_caller() as ftrace_regs_caller() and provide a simpler implementation for ftrace_caller() that is used when registers are not required to be saved. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/ftrace: Use the generic version of ftrace_replace_code()  (Naveen N. Rao)
Our implementation matches that of the generic version, which also handles FTRACE_UPDATE_MODIFY_CALL. So, remove our implementation in favor of the generic version. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/module: Tighten detection of mcount call sites with -mprofile-kernel  (Naveen N. Rao)
For R_PPC64_REL24 relocations, we suppress emitting instructions for TOC load/restore in the relocation stub if the relocation is for _mcount() call when using -mprofile-kernel ABI. To detect this, we check if the preceding instructions are per the standard set of instructions emitted by gcc: either the two instruction sequence of 'mflr r0; std r0,16(r1)', or the more optimized variant of a single 'mflr r0'. This is not sufficient since nothing prevents users from hand coding sequences involving a 'mflr r0' followed by a 'bl'. For removing the toc save instruction from the stub, we additionally check if the symbol is "_mcount". Add the same check here as well. Also rename is_early_mcount_callsite() to is_mprofile_mcount_callsite() since that is what is being checked. The use of "early" is misleading since there is nothing involving this function that qualifies as early. Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI") Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
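An illustrative sketch of the tightened check; the instruction constants stand for the encodings of 'mflr r0' and 'std r0,16(r1)', and the helper name follows the rename described above, but the surrounding details are a sketch rather than the exact module_64.c code:

    static bool is_mprofile_mcount_callsite(const char *name, u32 *instruction)
    {
            /* Only treat this as an mcount site if the target really is _mcount. */
            if (strcmp(name, "_mcount"))
                    return false;

            /* gcc emits either 'mflr r0; std r0,16(r1)' or just 'mflr r0'
             * immediately before the branch to _mcount. */
            if (instruction[-1] == PPC_INST_STD_LR &&
                instruction[-2] == PPC_INST_MFLR)
                    return true;

            if (instruction[-1] == PPC_INST_MFLR)
                    return true;

            return false;
    }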
2018-05-03  powerpc64/kexec: Hard disable ftrace before switching to the new kernel  (Naveen N. Rao)
If function_graph tracer is enabled during kexec, we see the below exception in the simulator:
  root@(none):/# kexec -e
  kvm: exiting hardware virtualization
  kexec_core: Starting new kernel
  [   19.262020070,5] OPAL: Switch to big-endian OS
  kexec: Starting switchover sequence.
  Interrupt to 0xC000000000004380 from 0xC000000000004380
  ** Execution stopped: Continuous Interrupt, Instruction caused exception, **
Now that we have a more effective way to completely disable ftrace on ppc64, let's also use that before switching to a new kernel during kexec. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/ftrace: Disable ftrace during kvm entry/exit  (Naveen N. Rao)
During guest entry/exit, we switch over to/from the guest MMU context and we cannot take exceptions in the hypervisor code. Since ftrace may be enabled and since it can result in us taking a trap, disable ftrace by setting paca->ftrace_enabled to zero. There are two paths through which we enter/exit a guest:
  1. If we are the vcore runner, then we enter the guest via __kvmppc_vcore_entry() and we disable ftrace around this. This is always the case for Power9, and for the primary thread on Power8.
  2. If we are a secondary thread in Power8, then we would be in nap due to SMT being disabled. We are woken up by an IPI to enter the guest. In this scenario, we enter the guest through kvm_start_guest(). We disable ftrace at this point. In this scenario, ftrace would only get re-enabled on the secondary thread when SMT is re-enabled (via start_secondary()).
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
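A minimal sketch of path 1 above; __kvmppc_vcore_entry() and paca->ftrace_enabled come from the commit text, while the surrounding function is illustrative (the real code sits in the HV guest-entry path and may use the helpers from the "Add helpers to hard disable ftrace" patch further down this list):

    static void kvmppc_enter_guest_sketch(void)
    {
            /* About to switch to the guest MMU context: tracing must be off,
             * because an ftrace trap here would fire in the wrong context. */
            local_paca->ftrace_enabled = 0;

            __kvmppc_vcore_entry();

            /* Back in the host MMU context: safe to trace again. */
            local_paca->ftrace_enabled = 1;
    }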
2018-05-03  powerpc64/ftrace: Disable ftrace during hotplug  (Naveen N. Rao)
Disable ftrace when a cpu is about to go offline. When the cpu is woken up, ftrace will get enabled in start_secondary(). Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/ftrace: Delay enabling ftrace on secondary cpus  (Naveen N. Rao)
On the boot cpu, though we enable paca->ftrace_enabled in early_setup() (via cpu_ready_for_interrupts()), we don't start tracing until much later since ftrace is not initialized yet and since we only support DYNAMIC_FTRACE on powerpc. However, it is possible that ftrace has been initialized by the time some of the secondary cpus start up. In this case, we will try to trace some of the early boot code which can cause problems. To address this, move setting paca->ftrace_enabled from cpu_ready_for_interrupts() to early_setup() for the boot cpu, and towards the end of start_secondary() for secondary cpus. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/ftrace: Add helpers to hard disable ftrace  (Naveen N. Rao)
Add some helpers to enable/disable ftrace through paca->ftrace_enabled. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
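The helpers are presumably small wrappers over the paca field, something like the following sketch (names and placement are assumptions based on the description):

    static inline void this_cpu_disable_ftrace(void)
    {
            get_paca()->ftrace_enabled = 0;
    }

    static inline void this_cpu_enable_ftrace(void)
    {
            get_paca()->ftrace_enabled = 1;
    }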
2018-05-03  powerpc64/ftrace: Rearrange #ifdef sections in ftrace.h  (Naveen N. Rao)
Re-arrange the last #ifdef section in preparation for a subsequent change. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-03  powerpc64/ftrace: Add a field in paca to disable ftrace in unsafe code paths  (Naveen N. Rao)
We have some C code that we call into from real mode where we cannot take any exceptions. Though the C functions themselves are mostly safe, if these functions are traced, there is a possibility that we may take an exception. For instance, in certain conditions, the ftrace code uses WARN(), which uses a 'trap' to do its job. For such scenarios, introduce a new field in paca 'ftrace_enabled', which is checked on ftrace entry before continuing. This field can then be set to zero to disable/pause ftrace, and set to a non-zero value to resume ftrace. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-05-02  Merge branch 'timers/urgent' into timers/core  (Thomas Gleixner)
Pick up urgent fixes in order to apply a dependent cleanup patch.
2018-04-27  Merge tag 'v4.17-rc2' into docs-next  (Jonathan Corbet)
Merge -rc2 to pick up the changes to Documentation/core-api/kernel-api.rst that hit mainline via the networking tree. In their absence, subsequent patches cannot be applied.
2018-04-27  powerpc/kvm/booke: Fix altivec related build break  (Laurentiu Tudor)
Add missing "altivec unavailable" interrupt injection helper thus fixing the linker error below:
  arch/powerpc/kvm/emulate_loadstore.o: In function `kvmppc_check_altivec_disabled':
  arch/powerpc/kvm/emulate_loadstore.c: undefined reference to `.kvmppc_core_queue_vec_unavail'
Fixes: 09f984961c137c4b ("KVM: PPC: Book3S: Add MMIO emulation for VMX instructions") Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-04-27  powerpc: Fix deadlock with multiple calls to smp_send_stop  (Nicholas Piggin)
smp_send_stop can lock up the IPI path for any subsequent calls, because the receiving CPUs spin in their handler function. This started becoming a problem with the addition of an smp_send_stop call in the reboot path, because panics can reboot after doing their own smp_send_stop. The NMI IPI variant was fixed with ac61c11566 ("powerpc: Fix smp_send_stop NMI IPI handling"), which leaves the smp_call_function variant. This is fixed by having smp_send_stop only ever do the smp_call_function once. This is a bit less robust than the NMI IPI fix, because any other call to smp_call_function after smp_send_stop could deadlock, but that has always been the case, and it has not been a problem before. Fixes: f2748bdfe1573 ("powerpc/powernv: Always stop secondaries before reboot/shutdown") Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
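A sketch of the "only ever do it once" fix described above; the static guard variable is illustrative:

    void smp_send_stop(void)
    {
            static bool stopped;

            /* A second call (e.g. a reboot after a panic that already stopped
             * the secondaries) must not issue another smp_call_function(),
             * because the target CPUs are spinning in stop_this_cpu() and
             * will never acknowledge the new IPI. */
            if (stopped)
                    return;

            stopped = true;

            smp_call_function(stop_this_cpu, NULL, 0);
    }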