path: root/arch
Age | Commit message | Author
2019-02-05 | KVM: s390: use pending_irqs_no_gisa() where appropriate | Michael Mueller
Interruption types that are not represented in GISA shall use pending_irqs_no_gisa() to test pending interruptions. Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Message-Id: <20190131085247.13826-6-mimu@linux.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
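For illustration, the split between the two helpers can be sketched as follows; the struct members and gisa_pending_mask() below are assumptions made for the sake of the example, not the exact s390 KVM code.

  /* Illustrative sketch only. */
  static inline unsigned long pending_irqs_no_gisa(struct kvm_vcpu *vcpu)
  {
          /* interruptions tracked in software only, i.e. not in the GISA */
          return vcpu->kvm->arch.float_int.pending_irqs |
                 vcpu->arch.local_int.pending_irqs;
  }

  static inline unsigned long pending_irqs(struct kvm_vcpu *vcpu)
  {
          /* the full picture additionally folds in the GISA's pending bits */
          return pending_irqs_no_gisa(vcpu) |
                 gisa_pending_mask(vcpu->kvm->arch.gisa);   /* assumed helper */
  }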
2019-02-05 | KVM: s390: coding style kvm_s390_gisa_init/clear() | Michael Mueller
The change helps to reduce line length and increases code readability. Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Message-Id: <20190131085247.13826-5-mimu@linux.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-02-05 | KVM: s390: move bitmap idle_mask into arch struct top level | Michael Mueller
The vcpu idle_mask state is used by, but not specific to, the emulated floating interruptions. The state is relevant to gisa related interruptions as well. Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Acked-by: Halil Pasic <pasic@linux.ibm.com> Message-Id: <20190131085247.13826-4-mimu@linux.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-02-05 | KVM: s390: make bitmap declaration consistent | Michael Mueller
Use a consistent bitmap declaration throughout the code. Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Message-Id: <20190131085247.13826-3-mimu@linux.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-02-05 | KVM: s390: drop obsolete else path | Michael Mueller
The explicit else path in set_intercept_indicators_io() is not required, as the function returns when the first branch is taken anyway. Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Message-Id: <20190131085247.13826-2-mimu@linux.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
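The cleanup is the usual "no else after return" pattern; a generic before/after sketch (the helper names are hypothetical, not the actual set_intercept_indicators_io() code):

  /* Before: the else block is dead weight because the if branch returns. */
  static void set_indicators(struct ctx *c)
  {
          if (use_fast_path(c)) {
                  set_fast_indicator(c);
                  return;
          } else {
                  set_slow_indicator(c);
          }
  }

  /* After: same behaviour, one level less nesting. */
  static void set_indicators(struct ctx *c)
  {
          if (use_fast_path(c)) {
                  set_fast_indicator(c);
                  return;
          }
          set_slow_indicator(c);
  }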
2019-02-05 | KVM: s390: clarify kvm related kernel message | Michael Mueller
As suggested by our ID department, here are some kernel message updates. Signed-off-by: Michael Mueller <mimu@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2019-02-05 | arm64: kexec_file: handle empty command-line | Jean-Philippe Brucker
Calling strlen() on cmdline == NULL produces a kernel oops. Since having a NULL cmdline is valid, handle this case explicitly. Fixes: 52b2a8af7436 ("arm64: kexec_file: load initrd and device-tree") Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
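A minimal sketch of the kind of guard described (the function and helper names here are made up for illustration; the real code lives in arch/arm64/kernel/machine_kexec_file.c):

  static int setup_bootargs(void *dtb, const char *cmdline)
  {
          /* A NULL cmdline is legal: treat it as empty instead of handing
           * it to strlen(), which would dereference the NULL pointer. */
          size_t len = cmdline ? strlen(cmdline) : 0;

          if (!len)
                  return 0;               /* nothing to put into /chosen */

          return append_chosen_bootargs(dtb, cmdline, len);  /* assumed helper */
  }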
2019-02-05 | x86/resctrl: Remove duplicate MSR_MISC_FEATURE_CONTROL definition | Reinette Chatre
The definition of MSR_MISC_FEATURE_CONTROL was first introduced in 98af74599ea0 ("x86 msr_index.h: Define MSR_MISC_FEATURE_CONTROL") and present in Linux since v4.11. The Cache Pseudo-Locking code added this duplicate definition in more recent f2a177292bd0 ("x86/intel_rdt: Discover supported platforms via prefetch disable bits"), available since v4.19. Remove the duplicate definition from the resctrl subsystem and let that code obtain the needed definition from the core architecture msr-index.h instead. Fixes: f2a177292bd0 ("x86/intel_rdt: Discover supported platforms via prefetch disable bits") Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: gavin.hindman@intel.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: jithu.joseph@intel.com Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/ff6b95d9b6ef6f4ac96267f130719ba1af09614b.1549312475.git.reinette.chatre@intel.com
2019-02-05 | powerpc/32: Include .branch_lt in data section | Joel Stanley
When building a 32 bit powerpc kernel with Binutils 2.31.1 this warning is emitted: powerpc-linux-gnu-ld: warning: orphan section `.branch_lt' from `arch/powerpc/kernel/head_44x.o' being placed in section `.branch_lt' As of binutils commit 2d7ad24e8726 ("Support PLT16 relocs against local symbols")[1], 32 bit targets can produce .branch_lt sections in their output. Include these symbols in the .data section as the ppc64 kernel does. [1] https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commitdiff;h=2d7ad24e8726ba4c45c9e67be08223a146a837ce Signed-off-by: Joel Stanley <joel@jms.id.au> Reviewed-by: Alan Modra <amodra@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-04 | net: phy: fixed-phy: Drop GPIO from fixed_phy_add() | Linus Walleij
All users of fixed_phy_add() pass -1 as the GPIO number to the fixed phy driver, and all users of fixed_phy_register() pass -1 as the GPIO number as well, except for the device tree MDIO bus. Any new users should create a proper device and pass the GPIO as a descriptor associated with the device, so delete the GPIO argument from the calls and drop the code requesting a GPIO in fixed_phy_add(). In fixed_phy_register(), investigate the "fixed-link" node and pick the GPIO descriptor from "link-gpios" if this property exists. Move the corresponding code out of of_mdio.c, as the fixed phy code requires OF to be in use anyway. Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
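Roughly, the interface change looks like this (a sketch based on the description above; consult include/linux/phy_fixed.h for the final prototypes):

  /* Before: every caller passed -1 for link_gpio. */
  int fixed_phy_add(unsigned int irq, int phy_addr,
                    struct fixed_phy_status *status, int link_gpio);

  /* After: no GPIO number; a link GPIO, when present, is obtained as a
   * gpio_desc from the "link-gpios" property of the "fixed-link" DT node. */
  int fixed_phy_add(unsigned int irq, int phy_addr,
                    struct fixed_phy_status *status);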
2019-02-05 | powerpc/eeh: Correct retries in eeh_pe_reset_full() | Sam Bobroff
Currently, eeh_pe_reset_full() will only attempt to reset a PE more than once if activating the reset state and deactivating it both succeed, but later polling shows that it hasn't become active. Change this so that it will try up to three times for any reason other than an unrecoverable slot error, and adjust the message generation so that it's clear whether the reset has ultimately succeeded or failed. This allows the reset to succeed in some situations where it would currently fail. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
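The retry structure being introduced can be sketched as follows (illustrative only; do_one_full_reset() stands in for the activate/poll/deactivate sequence, and the messages are placeholders):

  int eeh_pe_reset_full(struct eeh_pe *pe)
  {
          int attempt, ret = -EIO;

          for (attempt = 1; attempt <= 3; attempt++) {
                  ret = do_one_full_reset(pe);    /* assumed helper */
                  if (!ret || ret == -ENXIO)      /* success, or dead slot */
                          break;
          }

          if (ret)
                  pr_warn("EEH: PE#%x reset failed\n", pe->addr);
          else
                  pr_info("EEH: PE#%x reset successful\n", pe->addr);
          return ret;
  }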
2019-02-05 | powerpc/eeh: Improve recovery of passed-through devices | Sam Bobroff
Currently, the EEH recovery process considers passed-through devices as if they were not EEH-aware, which can cause them to be removed as part of recovery. Because device removal requires cooperation from the guest, this may lead to the process stalling or deadlocking. Also, if devices are removed on the host side, they will be removed from their IOMMU group, making recovery in the guest impossible. Therefore, alter the recovery process so that passed-through devices are not removed but are instead left frozen (and marked isolated) until the guest performs its own recovery. If firmware thaws a passed-through PE because its parent PE has been thawed (because it was not passed through), re-freeze it. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-05 | powerpc/eeh: Add include_passed to eeh_clear_pe_frozen_state() | Sam Bobroff
Add a parameter to eeh_clear_pe_frozen_state() that allows passed-through PEs to be excluded. Update callers to always pass true so that there is no change in behaviour. This is to prepare for follow-up work for passed-through devices. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-05 | powerpc/eeh: Add include_passed to eeh_pe_state_clear() | Sam Bobroff
Add a parameter to eeh_pe_state_clear() that allows passed-through PEs to be excluded. Update callers to always pass true so that there is no change in behaviour. Also refactor to use direct traversal, to allow the removal of some boilerplate. This is to prepare for follow-up work for passed-through devices. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-05 | powerpc/eeh: remove sw_state from eeh_unfreeze_pe() | Sam Bobroff
eeh_unfreeze_pe() performs two operations: unfreezing a PE (which may cause firmware to unfreeze child PEs as well) and de-isolating the PE and its children. To simplify this and support future work, separate out the de-isolation and perform it at the call sites (when necessary). There should be no change in behaviour. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-05 | powerpc/eeh: Cleanup eeh_pe_clear_frozen_state() | Sam Bobroff
The 'clear_sw_state' parameter for eeh_pe_clear_frozen_state() is redundant because it has no effect (except in the rare case of a hardware error part way through unfreezing a tree of PEs, where it would dangerously allow partial de-isolation before returning failure). It is passed down to __eeh_pe_clear_frozen_state(), and from there to eeh_unfreeze_pe(), where it causes EEH_PE_ISOLATED to be removed from the state of each PE during the traversal. However, when the traversal finishes, EEH_PE_ISOLATED is unconditionally removed by a call to eeh_pe_state_clear() regardless of the parameter's value. So remove the flag and pass false to eeh_unfreeze_pe() (to avoid the rare case described above, as it was before the flag was introduced). Also, perform the recursion directly in the function and eliminate a bit of boilerplate. There should be no change in functionality, except as mentioned above. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-04 | MIPS: Remove function size check in get_frame_info() | Jun-Ru Chang
Commit b6c7a324df37b ("MIPS: Fix get_frame_info() handling of microMIPS function size.") introduced an additional function size check for microMIPS by only checking insns between ip and ip + func_size. However, func_size in get_frame_info() is always 0 if KALLSYMS is not enabled. This causes get_frame_info() to return immediately without calculating the correct frame_size, which in turn causes "Can't analyze schedule() prologue" warning messages at boot time. This patch removes the func_size check and lets the frame_size check run up to 128 insns for both MIPS and microMIPS. Signed-off-by: Jun-Ru Chang <jrjang@realtek.com> Signed-off-by: Tony Wu <tonywu@realtek.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: b6c7a324df37b ("MIPS: Fix get_frame_info() handling of microMIPS function size.") Cc: <ralf@linux-mips.org> Cc: <jhogan@kernel.org> Cc: <macro@mips.com> Cc: <yamada.masahiro@socionext.com> Cc: <peterz@infradead.org> Cc: <mingo@kernel.org> Cc: <linux-mips@vger.kernel.org> Cc: <linux-kernel@vger.kernel.org>
2019-02-04 | MIPS: Loongson32: Remove DMA & NAND devices from ls1b/board.c | Paul Burton
Commit 7b3415f581c7 ("MIPS: Loongson32: Remove unused platform devices") removed the definitions of platform devices which have no in tree drivers from common Loongson32 code, but missed their removal from Loongson 1B board code in arch/mips/loongson32/ls1b/board.c. This causes build failures due to the missing declarations of ls1x_dma_pdev, ls1x_nand_pdev & their associated *_set_platdata functions. Remove the dead code from arch/mips/loongson32/ls1b/board.c to fix the build. Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: 7b3415f581c7 ("MIPS: Loongson32: Remove unused platform devices")
2019-02-04 | MIPS: Loongson32: Fix config brokenness; select SYS_SUPPORTS_32BIT_KERNEL | Paul Burton
Commit a96d68ba3b41 ("MIPS: Loongson32: clarify we don't support MIPS16 and merge configs") attempted to reduce duplication in Kconfig by consolidating some selects common to the Loongson 1B & 1C CPUs under CPU_LOONGSON1. Unfortunately it clearly wasn't tested: by removing SYS_SUPPORTS_32BIT_KERNEL it prevented 32BIT from being enabled, leading to all sorts of strange build errors from a kernel configured to build as neither 32 nor 64 bit. Both loongson1b_defconfig & loongson1c_defconfig failed to build due to this problem. Revert the cleanup portions of commit a96d68ba3b41 ("MIPS: Loongson32: clarify we don't support MIPS16 and merge configs"), keeping only its removal of the selection of SYS_SUPPORTS_MIPS16. Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: a96d68ba3b41 ("MIPS: Loongson32: clarify we don't support MIPS16 and merge configs")
2019-02-04 | kexec, KEYS: Make use of platform keyring for signature verify | Kairui Song
This patch allows the kexec_file_load syscall to verify the PE signed kernel image signature based on the preboot keys stored in the .platform keyring, as a fallback, if the signature verification failed because the public key was not found in the secondary or builtin keyrings. This commit adds a VERIFY_USE_PLATFORM_KEYRING, similar to the previous VERIFY_USE_SECONDARY_KEYRING, indicating that verify_pkcs7_signature should verify the signature using the platform keyring. Also, decrease the error message log level when verification fails with -ENOKEY, so that if the caller tries multiple times with different keyrings it won't generate extra noise. Signed-off-by: Kairui Song <kasong@redhat.com> Cc: David Howells <dhowells@redhat.com> Acked-by: Dave Young <dyoung@redhat.com> (for kexec_file_load part) [zohar@linux.ibm.com: tweaked the first paragraph of the patch description, and fixed checkpatch warning.] Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
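The fallback order described above amounts to something like the following (a simplified sketch of the verification call chain, not the exact kernel code; the wrapper function name is invented for illustration):

  /* Try the secondary/builtin keyrings first, then fall back to the
   * .platform keyring that holds the preboot (firmware) keys. */
  static int kexec_image_verify_sig(const void *kernel, unsigned long kernel_len)
  {
          int ret;

          ret = verify_pefile_signature(kernel, kernel_len,
                                        VERIFY_USE_SECONDARY_KEYRING,
                                        VERIFYING_KEXEC_PE_SIGNATURE);
          if (ret == -ENOKEY) {
                  /* Key not in builtin/secondary keyrings: try .platform. */
                  ret = verify_pefile_signature(kernel, kernel_len,
                                                VERIFY_USE_PLATFORM_KEYRING,
                                                VERIFYING_KEXEC_PE_SIGNATURE);
          }
          return ret;
  }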
2019-02-04 | MIPS: Don't select ARCH_HAS_SYNC_DMA_FOR_CPU when DMA is coherent | Paul Burton
Commit f263f2a2c682 ("MIPS: Compile post DMA flush only when needed") pushed the selection of ARCH_HAS_SYNC_DMA_FOR_CPU down to various SYS_HAS_CPU_* Kconfig entries corresponding to CPUs for which cpu_needs_post_dma_flush() might return true, but unfortunately missed the fact that some of these CPUs can be used in configurations with DMA_NONCOHERENT=n. When this is the case the kernel build does not include our definition of arch_sync_dma_for_cpu() from arch/mips/mm/dma-noncoherent.c and the build fails with a link error. One example of this problem is ip27_defconfig:

  kernel/dma/direct.o: In function `dma_direct_sync_single_for_cpu':
  direct.c:(.text+0x6c): undefined reference to `arch_sync_dma_for_cpu'
  kernel/dma/direct.o: In function `dma_direct_sync_sg_for_cpu':
  direct.c:(.text+0x1f0): undefined reference to `arch_sync_dma_for_cpu'
  kernel/dma/direct.o: In function `dma_direct_alloc':
  direct.c:(.text+0xc20): undefined reference to `arch_dma_alloc'
  kernel/dma/direct.o: In function `dma_direct_free':
  direct.c:(.text+0xc3c): undefined reference to `arch_dma_free'
  make[1]: *** [Makefile:1021: vmlinux] Error 1
  make: *** [Makefile:152: sub-make] Error 2

Fix this by selecting ARCH_HAS_SYNC_DMA_FOR_CPU only when DMA_NONCOHERENT is also selected. The SYS_HAS_CPU_BMIPS5000 case is left as-is because systems with that CPU always select DMA_NONCOHERENT anyway. Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: f263f2a2c682 ("MIPS: Compile post DMA flush only when needed")
2019-02-04 | MIPS: Use lower case for addresses in nexys4ddr.dts | Paul Burton
DTC introduced an i2c_bus_reg check in v1.4.7, used since Linux v4.20, which complains about upper case addresses used in the unit name. nexys4ddr.dts names an I2C device node "ad7420@4B", leading to:

  arch/mips/boot/dts/xilfpga/nexys4ddr.dts:109.16-112.8: Warning (i2c_bus_reg): /i2c@10A00000/ad7420@4B: I2C bus unit address format error, expected "4b"

Fix this by switching to lower case addresses throughout the file, as is *mostly* the case in the file already & fairly standard throughout the tree. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: stable@vger.kernel.org # v4.20+ Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: Enable hugepage support for MIPS64r6 | Paul Burton
Our hugepage support already exists for MIPS64 CPUs, and is already enabled for older architecture revisions. There's nothing MIPSr6 specific involved, and our hugepage support already works fine for MIPS64r6 CPUs such as the I6500, so allow it to be selected in Kconfig. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: Remove open-coded cmpxchg() in set_pte() | Paul Burton
set_pte() contains an open coded version of cmpxchg() - it atomically replaces the buddy pte's value if it is currently zero. Simplify the code considerably by just using cmpxchg() instead of reinventing it. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
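The simplification boils down to replacing a hand-rolled LL/SC loop with cmpxchg(); a minimal sketch of the operation (not the exact set_pte() diff, and the wrapper name is invented):

  /* Store 'global' into the buddy entry only if that entry is still zero.
   * The open-coded version expressed exactly this as an ll/sc retry loop;
   * cmpxchg() says the same thing directly. */
  static inline void set_buddy_global(unsigned long *buddy_ptep,
                                      unsigned long global)
  {
          cmpxchg(buddy_ptep, 0UL, global);
  }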
2019-02-04 | MIPS: MemoryMapID (MMID) Support | Paul Burton
Introduce support for using MemoryMapIDs (MMIDs) as an alternative to Address Space IDs (ASIDs). The major difference between the two is that MMIDs are global - ie. an MMID uniquely identifies an address space across all coherent CPUs. In contrast ASIDs are non-global per-CPU IDs, wherein each address space is allocated a separate ASID for each CPU upon which it is used. This global namespace allows a new GINVT instruction to be used to globally invalidate TLB entries associated with a particular MMID across all coherent CPUs in the system, removing the need for IPIs to invalidate entries with separate ASIDs on each CPU. The allocation scheme used here is largely borrowed from arm64 (see arch/arm64/mm/context.c). In essence we maintain a bitmap to track available MMIDs, and MMIDs in active use at the time of a rollover to a new MMID version are preserved in the new version. The allocation scheme requires efficient 64 bit atomics in order to perform reasonably, so this support depends upon CONFIG_GENERIC_ATOMIC64=n (ie. currently it will only be included in MIPS64 kernels). The first, and currently only, available CPU with support for MMIDs is the MIPS I6500. This CPU supports 16 bit MMIDs, and so for now we cap our MMIDs to 16 bits wide in order to prevent the bitmap growing to absurd sizes if any future CPU does implement 32 bit MMIDs as the architecture manuals suggest is recommended. When MMIDs are in use we also make use of the GINVT instruction, which is available due to the global nature of MMIDs. By executing a sequence of GINVT & SYNC 0x14 instructions we can avoid the overhead of an IPI to each remote CPU in many cases. One complication is that GINVT will invalidate wired entries (in all cases apart from type 0, which targets the entire TLB). In order to avoid GINVT invalidating any wired TLB entries we set up, we make sure to create those entries using a reserved MMID (0) that we never associate with any address space. Also of note is that KVM will require further work in order to support MMIDs & GINVT, since KVM is involved in allocating IDs for guests & in configuring the MMU. That work is not part of this patch, so for now when MMIDs are in use KVM is disabled. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: Add GINVT instruction helpers | Paul Burton
Add a family of ginvt_* functions making it easy to emit a GINVT instruction to globally invalidate TLB entries. We make use of the _ASM_MACRO infrastructure to support emitting the instructions even if the assembler isn't new enough to support them natively. An associated STYPE_GINV definition & sync_ginv() function are added to emit a sync instruction of type 0x14, which operates as a completion barrier for these new GINVT (and GINVI) instructions. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Add set_cpu_context() for ASID assignments | Paul Burton
When we gain MMID support we'll be storing MMIDs as atomic64_t values and accessing them via atomic64_* functions. This necessitates that we don't use cpu_context() as the left hand side of an assignment, ie. as a modifiable lvalue. In preparation for this introduce a new set_cpu_context() function & replace all assignments with cpu_context() on their left hand side with an equivalent call to set_cpu_context(). To enforce that cpu_context() should not be used for assignments, we rewrite it as a static inline function. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
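In effect the accessor pair has this shape (a sketch of the idea; field names follow the existing MIPS mm_context_t layout, and with MMIDs the backing store becomes an atomic64_t accessed via atomic64_read()/atomic64_set()):

  /* Read accessor plus explicit setter, so cpu_context() is never used as a
   * modifiable lvalue. */
  static inline unsigned long cpu_context(unsigned int cpu,
                                          const struct mm_struct *mm)
  {
          return mm->context.asid[cpu];
  }

  static inline void set_cpu_context(unsigned int cpu,
                                     struct mm_struct *mm, unsigned long ctx)
  {
          mm->context.asid[cpu] = ctx;
  }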
2019-02-04 | MIPS: mm: Unify ASID version checks | Paul Burton
Introduce a new check_mmu_context() function to check an mm's ASID version & get a new one if it's outdated, and a check_switch_mmu_context() function which additionally sets up the new ASID & page directory. Simplify switch_mm() & various get_new_mmu_context() callsites in MIPS KVM by making use of the new functions, which will help reduce the amount of code that requires modification to gain MMID support. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Un-inline get_new_mmu_context | Paul Burton
In preparation for adding MMID support to get_new_mmu_context() which will increase the size of the function somewhat, move it from asm/mmu_context.h into a C file. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Split obj-y to a file per line | Paul Burton
Split always-included objects to one per line in order to make it easier to modify the list of included objects. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Remove local_flush_tlb_mm() | Paul Burton
All 3 variants of local_flush_tlb_mm() are now effectively simple calls to drop_mmu_context(). Remove them and use drop_mmu_context() directly. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Remove redundant preempt_disable in local_flush_tlb_mm() | Paul Burton
The r4k variant of local_flush_tlb_mm() wraps its call to drop_mmu_context() with a preempt_disable() & preempt_enable() pair, but this is redundant since drop_mmu_context() disables interrupts and from Documentation/preempt-locking.txt: Note that you do not need to explicitly prevent preemption if you are holding any locks or interrupts are disabled, since preemption is implicitly disabled in those cases. Remove the redundant preempt_disable() & preempt_enable() calls. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
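The redundancy can be seen in outline (a sketch of the shape of the r4k variant, not the verbatim code):

  /* Before: the preempt_disable()/preempt_enable() pair adds nothing, since
   * drop_mmu_context() runs with interrupts disabled, which already implies
   * that preemption cannot occur. */
  static void local_flush_tlb_mm(struct mm_struct *mm)
  {
          preempt_disable();
          drop_mmu_context(mm);
          preempt_enable();
  }

  /* After: */
  static void local_flush_tlb_mm(struct mm_struct *mm)
  {
          drop_mmu_context(mm);
  }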
2019-02-04 | MIPS: mm: Move drop_mmu_context() comment into appropriate block | Paul Burton
drop_mmu_context() is preceded by a comment indicating what happens if the mm provided is currently active on the local CPU. Move that comment into the block that executes in this case, adjusting slightly to reflect its new location. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Consolidate drop_mmu_context() has-ASID checks | Paul Burton
If an mm does not have an ASID on the local CPU then drop_mmu_context() is always redundant, since there's no context to "drop". Various callers of drop_mmu_context() check whether the mm has been allocated an ASID before making the call. Move that check into drop_mmu_context() and remove it from callers to simplify them. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
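A sketch of the consolidation (illustrative only; example_caller() is hypothetical and the body of drop_mmu_context() is elided):

  /* Before: every caller guarded the call itself. */
  static void example_caller(struct mm_struct *mm)
  {
          if (cpu_context(smp_processor_id(), mm))
                  drop_mmu_context(mm);
  }

  /* After: the guard lives in drop_mmu_context() and callers just call it. */
  void drop_mmu_context(struct mm_struct *mm)
  {
          unsigned int cpu = smp_processor_id();

          if (!cpu_context(cpu, mm))
                  return;         /* no ASID on this CPU - nothing to drop */

          /* ... allocate a fresh ASID / flush the relevant TLB entries ... */
  }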
2019-02-04 | MIPS: mm: Avoid HTW stop/start when dropping an inactive mm | Paul Burton
If drop_mmu_context() is called with an mm that is not currently active on the local CPU then there's no need for us to stop & start a hardware page table walker because it can't be fetching entries for the ASID corresponding to the mm we're operating on. Move the htw_stop() & htw_start() calls into the block which we run only if the mm is currently active, in order to avoid the redundant work. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Remove redundant get_new_mmu_context() cpu argument | Paul Burton
get_new_mmu_context() accepts a cpu argument, but implicitly assumes that this is always equal to smp_processor_id() by operating on the local CPU's TLB & icache. Remove the cpu argument and have get_new_mmu_context() call smp_processor_id() instead. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Remove redundant drop_mmu_context() cpu argument | Paul Burton
The drop_mmu_context() function accepts a cpu argument, but it implicitly expects that this is always equal to smp_processor_id() by allocating & configuring an ASID on the local CPU when the mm is active on the CPU indicated by the cpu argument. All callers do provide the value of smp_processor_id() to the cpu argument. Remove the redundant argument and have drop_mmu_context() call smp_processor_id() itself, making it clearer that the cpu variable always represents the local CPU. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
2019-02-04 | MIPS: mm: Define activate_mm() using switch_mm() | Paul Burton
MIPS has separate definitions of activate_mm() & switch_mm() which are identical apart from switch_mm() checking that the ASID is valid before acquiring a new one. We know that when activate_mm() is called cpu_context(X, mm) will be zero, and this will never be considered a valid ASID because we never allow the ASID version number to be zero, instead beginning with version 1 using asid_first_version(). Therefore switch_mm() will always allocate a new ASID when called for a new task, meaning that it will behave identically to activate_mm(). Take advantage of this to remove the duplication & define activate_mm() using switch_mm() just like many other architectures do. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
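The end result is essentially the following definition, as used by several other architectures (a sketch consistent with the reasoning above):

  /* cpu_context(cpu, mm) is 0 for a freshly created mm, and 0 is never a
   * valid ASID version (versions start at 1), so switch_mm() is guaranteed
   * to allocate a new ASID here - exactly what activate_mm() must do. */
  #define activate_mm(prev, next)         switch_mm(prev, next, current)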
2019-02-04 | MIPS: Loongson: Introduce and use loongson_llsc_mb() | Huacai Chen
On the Loongson-2G/2H/3A/3B there is a hardware flaw: ll/sc and lld/scd have very weak ordering. We should add sync instructions "before each ll/lld" and "at the branch-target between ll/sc" to work around it. Otherwise, this flaw will cause deadlock occasionally (e.g. when doing heavy load test with LTP). Below is the explanation from the CPU designer: "For the Loongson 3 family, when a memory access instruction (load, store, or prefetch) executes between the execution of LL and SC, the success or failure of SC is not predictable. Although a programmer would not insert memory access instructions between LL and SC, the memory instructions before LL in program order may be dynamically executed between the execution of LL/SC, so a memory fence (SYNC) is needed before LL/LLD to avoid this situation. Since Loongson-3A R2 (3A2000), we have improved our hardware design to handle this case. But we later deduced a rare circumstance in which some speculatively executed memory instructions due to branch misprediction between LL/SC still fall into the above case, so a memory fence (SYNC) at the branch-target (if its target is not between LL/SC) is needed for Loongson 3A1000, 3B1500, 3A2000 and 3A3000. Our processor is continually evolving and we aim to remove all these workaround SYNCs around LL/SC for newer processors." Here is an example: both cpu1 and cpu2 simultaneously run atomic_add by 1 on the same atomic variable. This bug causes the 'sc' run by the two cpus (in atomic_add) to succeed at the same time ('sc' returns 1), and the variable is only *added by 1*, which is wrong and unacceptable (it should be added by 2). Why disable fix-loongson3-llsc in the compiler? Because the compiler fix will cause problems in the kernel's __ex_table section. This patch fixes all the cases in the kernel, but: +. the fix at the end of futex_atomic_cmpxchg_inatomic is for the branch-target of 'bne'; there are other cases in which smp_mb__before_llsc() and smp_llsc_mb() fix the ll and the branch-target coincidentally, such as atomic_sub_if_positive/cmpxchg/xchg, just like this one. +. Loongson 3 does support CONFIG_EDAC_ATOMIC_SCRUB, so no need to touch edac.h. +. local_ops and cmpxchg_local should not be affected by this bug since only the owner can write. +. mips_atomic_set for syscall.c is deprecated and rarely used, so just let it go. Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Huang Pei <huangpei@loongson.cn> [paul.burton@mips.com: - Simplify the addition of -mno-fix-loongson3-llsc to cflags, and add a comment describing why it's there. - Make loongson_llsc_mb() a no-op when CONFIG_CPU_LOONGSON3_WORKAROUNDS=n, rather than a compiler memory barrier. - Add a comment describing the bug & how loongson_llsc_mb() helps in asm/barrier.h.] Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: ambrosehua@gmail.com Cc: Steven J . Hill <Steven.Hill@cavium.com> Cc: linux-mips@linux-mips.org Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: Li Xuefeng <lixuefeng@loongson.cn> Cc: Xu Chenghua <xuchenghua@loongson.cn>
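The barrier itself boils down to roughly the following (a sketch matching the description above; the upstream helper lives in arch/mips/include/asm/barrier.h):

  /* On affected Loongson 3 CPUs, emit a SYNC so that earlier (possibly
   * speculated) memory accesses cannot slip in between a subsequent LL and
   * its SC; on everything else this compiles away entirely - not even a
   * compiler barrier is needed. */
  #ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
  #define loongson_llsc_mb()      __asm__ __volatile__("sync" : : : "memory")
  #else
  #define loongson_llsc_mb()      do { } while (0)
  #endif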
2019-02-04 | s390: bpf: fix JMP32 code-gen | Heiko Carstens
Commit 626a5f66da0d19 ("s390: bpf: implement jitting of JMP32") added JMP32 code-gen support for s390. However, it triggers the warning below due to some unusual gotos in the original s390 bpf jit code. Add a couple of additional "is_jmp32" initializations to fix this. Also fix the wrong opcode for the "llilf" instruction that was introduced with the same commit.

  arch/s390/net/bpf_jit_comp.c: In function 'bpf_jit_insn':
  arch/s390/net/bpf_jit_comp.c:248:55: warning: 'is_jmp32' may be used uninitialized in this function [-Wmaybe-uninitialized]
    _EMIT6(op1 | reg(b1, b2) << 16 | (rel & 0xffff), op2 | mask); \
                                                         ^
  arch/s390/net/bpf_jit_comp.c:1211:8: note: 'is_jmp32' was declared here
    bool is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;

Fixes: 626a5f66da0d19 ("s390: bpf: implement jitting of JMP32") Cc: Jiong Wang <jiong.wang@netronome.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Jiong Wang <jiong.wang@netronome.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-02-04 | arm64: entry: Remove unneeded need_resched() loop | Valentin Schneider
Since the enabling and disabling of IRQs within preempt_schedule_irq() is contained in a need_resched() loop, we don't need the outer arch code loop. Reported-by: Julien Thierry <julien.thierry@arm.com> Reported-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Julien Grall <julien.grall@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-02-04 | ARM: socfpga_defconfig: enable BLK_DEV_LOOP config option | Dinh Nguyen
Add CONFIG_BLK_DEV_LOOP and clean up socfpga_defconfig by make savedefconfig. Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
2019-02-04 | arm64: ptdump: Don't iterate kernel page tables using PTRS_PER_PXX | Will Deacon
When 52-bit virtual addressing is enabled for userspace (CONFIG_ARM64_USER_VA_BITS_52=y), the kernel continues to utilise 48-bit virtual addressing in TTBR1. Consequently, PTRS_PER_PGD reflects the larger page table size for userspace and the pgd pointer for kernel page tables is offset before being written to TTBR1. This means that we can't use PTRS_PER_PGD to iterate over kernel page tables unless we apply the same offset, which is fiddly to get right and leads to some non-idiomatic walking code. Instead, just follow the usual pattern when walking page tables by using a while loop driven by pXd_offset() and pXd_addr_end(). Reported-by: Qian Cai <cai@lca.pw> Tested-by: Qian Cai <cai@lca.pw> Acked-by: Steve Capper <steve.capper@arm.com> Tested-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
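The walking pattern referred to looks like this (a generic sketch; walk_pud_range() stands in for the next level down and is assumed, not an existing helper):

  /* Walk [addr, end) one PGD entry at a time, bounded by addresses rather
   * than by PTRS_PER_PGD. */
  static void walk_pgd_range(struct mm_struct *mm, unsigned long addr,
                             unsigned long end)
  {
          pgd_t *pgdp = pgd_offset(mm, addr);
          unsigned long next;

          do {
                  next = pgd_addr_end(addr, end);
                  if (!pgd_none(*pgdp))
                          walk_pud_range(pgdp, addr, next);  /* assumed helper */
          } while (pgdp++, addr = next, addr != end);
  }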
2019-02-04 | powerpc: Drop page_is_ram() and walk_system_ram_range() | Christophe Leroy
Since commit c40dd2f76644 ("powerpc: Add System RAM to /proc/iomem") it is possible to use the generic walk_system_ram_range() and the generic page_is_ram(). To enable the use of walk_system_ram_range() by the IBM EHEA ethernet driver, we still need an export of the generic function. As powerpc was the only user of CONFIG_ARCH_HAS_WALK_MEMORY, the ifdef around the generic walk_system_ram_range() has become useless and can be dropped. Fixes: c40dd2f76644 ("powerpc: Add System RAM to /proc/iomem") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Keep the EXPORT_SYMBOL_GPL in powerpc code] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-04 | arm64: dts: hikey: Revert "Enable HS200 mode on eMMC" | Alistair Strachan
This reverts commit abd7d0972a192ee653efc7b151a6af69db58f2bb. This change was already partially reverted by John Stultz in commit 9c6d26df1fae ("arm64: dts: hikey: Fix eMMC corruption regression"). This change appears to cause controller resets and block read failures which prevents successful booting on some hikey boards. Cc: Ryan Grachek <ryan@edited.us> Cc: Wei Xu <xuwei5@hisilicon.com> Cc: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: devicetree@vger.kernel.org Cc: stable <stable@vger.kernel.org> #4.17+ Signed-off-by: Alistair Strachan <astrachan@google.com> Signed-off-by: John Stultz <john.stultz@linaro.org> Signed-off-by: Wei Xu <xuwei5@hisilicon.com>
2019-02-04 | arm64: dts: hikey: Give wifi some time after power-on | Jan Kiszka
Somewhere along recent changes to power control of the wl1835, power-on became very unreliable on the hikey, failing like this:

  wl1271_sdio: probe of mmc2:0001:1 failed with error -16
  wl1271_sdio: probe of mmc2:0001:2 failed with error -16

After playing with some dt parameters and comparing to other users of this chip, it turned out we need some power-on delay to make things stable again. In contrast to those other users which define 200 ms, the hikey would already be happy with 1 ms. Still, we use the safer 10 ms, like on the Ultra96. Fixes: ea452678734e ("arm64: dts: hikey: Fix WiFi support") Cc: <stable@vger.kernel.org> #4.12+ Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Wei Xu <xuwei5@hisilicon.com>
2019-02-04 | refcount_t: Add ACQUIRE ordering on success for dec(sub)_and_test() variants | Elena Reshetova
This adds an smp_acquire__after_ctrl_dep() barrier on successful decrease of refcounter value from 1 to 0 for refcount_dec(sub)_and_test variants and therefore gives stronger memory ordering guarantees than prior versions of these functions. Co-developed-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Elena Reshetova <elena.reshetova@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: dvyukov@google.com Cc: keescook@chromium.org Cc: stern@rowland.harvard.edu Link: https://lkml.kernel.org/r/1548847131-27854-2-git-send-email-elena.reshetova@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
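In terms of implementation, the successful 1->0 path gains an acquire barrier after the control dependency; a simplified sketch (not the exact refcount_t code, which additionally handles saturation and underflow):

  static inline bool sketch_refcount_dec_and_test(refcount_t *r)
  {
          int old = atomic_fetch_sub_release(1, &r->refs);

          if (old == 1) {
                  /* Successful 1 -> 0 transition: upgrade to ACQUIRE so the
                   * cleanup/free that follows cannot be reordered before
                   * the final decrement. */
                  smp_acquire__after_ctrl_dep();
                  return true;
          }
          return false;
  }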
2019-02-04 | Merge branch 'perf/urgent' into perf/core, to pick up fixes | Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 | perf/x86/intel: Delay memory deallocation until x86_pmu_dead_cpu() | Peter Zijlstra
intel_pmu_cpu_prepare() allocated memory for ->shared_regs among other members of struct cpu_hw_events. This memory is released in intel_pmu_cpu_dying() which is wrong. The counterpart of the intel_pmu_cpu_prepare() callback is x86_pmu_dead_cpu(). Otherwise if the CPU fails on the UP path between CPUHP_PERF_X86_PREPARE and CPUHP_AP_PERF_X86_STARTING then it won't release the memory but allocate new memory on the next attempt to online the CPU (leaking the old memory). Also, if the CPU down path fails between CPUHP_AP_PERF_X86_STARTING and CPUHP_PERF_X86_PREPARE then the CPU will go back online but never allocate the memory that was released in x86_pmu_dying_cpu(). Make the memory allocation/free symmetrical in regard to the CPU hotplug notifier by moving the deallocation to intel_pmu_cpu_dead(). This started in commit: a7e3ed1e47011 ("perf: Add support for supplementary event registers"). In principle the bug was introduced in v2.6.39 (!), but it will almost certainly not backport cleanly across the big CPU hotplug rewrite between v4.7-v4.15... [ bigeasy: Added patch description. ] [ mingo: Added backporting guidance. ] Reported-by: He Zhe <zhe.he@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> # With developer hat on Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> # With maintainer hat on Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@kernel.org Cc: bp@alien8.de Cc: hpa@zytor.com Cc: jolsa@kernel.org Cc: kan.liang@linux.intel.com Cc: namhyung@kernel.org Cc: <stable@vger.kernel.org> Fixes: a7e3ed1e47011 ("perf: Add support for supplementary event registers"). Link: https://lkml.kernel.org/r/20181219165350.6s3jvyxbibpvlhtq@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
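The pairing being restored can be illustrated by the hotplug state registration (a sketch only; the wrapper function is invented, while the state, callbacks and name string are assumed to match arch/x86/events/core.c):

  static int __init sketch_register_pmu_hotplug(void)
  {
          /* Whatever the PREPARE callback allocates must be freed by its
           * counterpart, the DEAD callback. Freeing it in the DYING callback
           * is wrong because DYING does not pair with PREPARE on the failed
           * online/offline paths described above. */
          return cpuhp_setup_state(CPUHP_PERF_X86_PREPARE, "perf/x86:prepare",
                                   x86_pmu_prepare_cpu,  /* allocate per-CPU state */
                                   x86_pmu_dead_cpu);    /* release it here */
  }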
2019-02-04 | perf/x86/intel/uncore: Add Node ID mask | Kan Liang
Some PCI uncore PMUs cannot be registered on an 8-socket system (HPE Superdome Flex). To determine which socket a PCI uncore PMU belongs to, perf retrieves the local Node ID of the uncore device from CPUNODEID(0xC0) of the PCI configuration space, and the mapping between Socket ID and Node ID from GIDNIDMAP(0xD4). The Socket ID can be calculated accordingly. The local Node ID is only available in bits 2:0, but the current code doesn't mask it. If a BIOS doesn't clear the rest of the bits, an incorrect Node ID will be fetched. Filter the Node ID by adding a mask. Reported-by: Song Liu <songliubraving@fb.com> Tested-by: Song Liu <songliubraving@fb.com> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> # v3.7+ Fixes: 7c94ee2e0917 ("perf/x86: Add Intel Nehalem and Sandy Bridge-EP uncore support") Link: https://lkml.kernel.org/r/1548600794-33162-1-git-send-email-kan.liang@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
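The fix amounts to masking the value read from config space down to bits 2:0 (a sketch using the offsets quoted above; the function name is invented for illustration):

  #define NODE_ID_MASK    0x7     /* local Node ID occupies bits 2:0 */

  static int sketch_read_node_id(struct pci_dev *ubox_dev, int *nodeid)
  {
          u32 config;
          int err;

          err = pci_read_config_dword(ubox_dev, 0xC0 /* CPUNODEID */, &config);
          if (err)
                  return err;

          *nodeid = config & NODE_ID_MASK;  /* ignore whatever BIOS left in bits 31:3 */
          return 0;
  }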