path: root/arch
2020-05-28  powerpc/40x: Remove WALNUT  (Christophe Leroy)
CONFIG_WALNUT is not selected by any config and is based on 405GP which is obsolete. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ab46013d8d33346af68faf30a719a586c3befad9.1590079968.git.christophe.leroy@csgroup.eu
2020-05-28  powerpc/40x: Remove STB03xxx  (Christophe Leroy)
CONFIG_STB03xxx is not user selectable and is not selected by any config. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d7d73f9a8ee3a890566abace568101e9b4836016.1590079968.git.christophe.leroy@csgroup.eu
2020-05-28  powerpc/40x: Remove support for IBM 403GCX  (Christophe Leroy)
CONFIG_403GCX is not user selectable and is not selected by any platform. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/635f8f5ce9d1f761b3bd8dc3e8ddad500cea26c4.1590079968.git.christophe.leroy@csgroup.eu
2020-05-28  powerpc/pgtable: Drop PTE_ATOMIC_UPDATES  (Christophe Leroy)
40x was the last user of PTE_ATOMIC_UPDATES. Drop everything related to PTE_ATOMIC_UPDATES. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/dbe8438fd1ed3e500132c8ab70269d4e6cc84531.1590079968.git.christophe.leroy@csgroup.eu
2020-05-28  powerpc/40x: Rework 40x PTE access and TLB miss  (Christophe Leroy)
Commit 1bc54c03117b ("powerpc: rework 4xx PTE access and TLB miss") reworked 44x PTE access to avoid atomic pte updates, and left 8xx, 40x and fsl booke with atomic pte updates. Commit 6cfd8990e27d ("powerpc: rework FSL Book-E PTE access and TLB miss") removed atomic pte updates on fsl booke. It went away on 8xx with commit ddfc20a3b9ae ("powerpc/8xx: Remove PTE_ATOMIC_UPDATES"). 40x is the last platform setting PTE_ATOMIC_UPDATES. Rework PTE access and TLB miss to remove PTE_ATOMIC_UPDATES for 40x: - Always handle DSI as a fault. - Bail out of TLB miss handler when CONFIG_SWAP is set and _PAGE_ACCESSED is not set. - Bail out of ITLB miss handler when _PAGE_EXEC is not set. - Only set WR bit when both _PAGE_RW and _PAGE_DIRTY are set. - Remove _PAGE_HWWRITE - Don't require PTE_ATOMIC_UPDATES anymore Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/99a0fcd337ef67088140d1647d75fea026a70413.1590079968.git.christophe.leroy@csgroup.eu
2020-05-28  powerpc: Remove Xilinx PPC405/PPC440 support  (Michal Simek)
The latest Xilinx design tools, called ISE and EDK, were released in October 2013. Newer tools don't support any new PPC405/PPC440 designs. These platforms are no longer supported or tested. The PowerPC 405/440 port has been orphaned since 2013, per commit cdeb89943bfc ("MAINTAINERS: Fix incorrect status tag") and commit 19624236cce1 ("MAINTAINERS: Update Grant's email address and maintainership"), which is why it is time to remove support for these platforms. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8c593895e2cb57d232d85ce4d8c3a1aa7f0869cc.1590079968.git.christophe.leroy@csgroup.eu
2020-05-28  powerpc/64: Refactor interrupt exit irq disabling sequence  (Nicholas Piggin)
The same complicated sequence for juggling EE, RI, soft mask, and irq tracing is repeated 3 times; tidy these up into one function. This differs quite a bit between sub-architectures, so this makes the ppc32 port cleaner as well. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200429062421.1675400-1-npiggin@gmail.com
2020-05-28  powerpc/64s/radix: Don't prefetch DAR in update_mmu_cache  (Nicholas Piggin)
The idea behind this prefetch was to kick off a page table walk before returning from the fault, getting some pipelining advantage. But this never showed any noticeable performance advantage, and in fact with KUAP the prefetches are actually blocked and cause some kind of micro-architectural fault. Removing this improves page fault microbenchmark performance by about 9%. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Keep the early return in update_mmu_cache()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200504122907.49304-1-npiggin@gmail.com
2020-05-28  powerpc/xive: Do not expose a debugfs file when XIVE is disabled  (Cédric Le Goater)
The XIVE interrupt mode can be disabled with the "xive=off" kernel parameter, in which case there is nothing to present to the user in the associated /sys/kernel/debug/powerpc/xive file. Fixes: 930914b7d528 ("powerpc/xive: Add a debugfs file to dump internal XIVE state") Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200429075122.1216388-4-clg@kaod.org
2020-05-28  KVM: arm64: Drop obsolete comment about sys_reg ordering  (Marc Zyngier)
The general comment about keeping the enum order in sync with the save/restore code has been obsolete for many years now. Just drop it. Note that there are other ordering requirements in the enum, such as the PtrAuth and PMU registers, which are still valid. Reported-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-05-28  KVM: arm64: Parametrize exception entry with a target EL  (Marc Zyngier)
We currently assume that an exception is delivered to EL1, always. Once we emulate EL2, this no longer will be the case. To prepare for this, add a target_mode parameter. While we're at it, merge the computing of the target PC and PSTATE in a single function that updates both PC and CPSR after saving their previous values in the corresponding ELR/SPSR. This ensures that they are updated in the correct order (a pretty common source of bugs...). Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
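A rough sketch of the resulting shape (the helper and accessor names below are illustrative, not necessarily the exact kernel API):

    /* Compute and apply the target PC and PSTATE together for an exception
     * delivered to target_mode, saving the previous values into the
     * corresponding ELR/SPSR first. */
    static void enter_exception(struct kvm_vcpu *vcpu, unsigned long target_mode,
                                enum exception_type type)
    {
            unsigned long old_pc = *vcpu_pc(vcpu);
            unsigned long old_pstate = *vcpu_cpsr(vcpu);

            /* Save the interrupted context... */
            vcpu_write_sys_reg(vcpu, old_pc, ELR_EL1);
            vcpu_write_spsr(vcpu, old_pstate);

            /* ...then update PC and PSTATE in one place, in the right order. */
            *vcpu_pc(vcpu) = get_except_vector(vcpu, type);
            *vcpu_cpsr(vcpu) = get_except_pstate(target_mode);
    }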
2020-05-28  MIPS: Loongson64: Remove not used pci.c  (Tiezhu Yang)
After commit 6423e59a64e7 ("MIPS: Loongson64: Switch to generic PCI driver"), arch/mips/loongson64/pci.c is not used any more, remove it. Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-05-28  Merge tag 'keystone_dts_for_5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into arm/dt  (Arnd Bergmann)
ARM: dts: Keystone update for v5.8 - Rename "msmram" node to "sram" * tag 'keystone_dts_for_5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone: ARM: dts: keystone: Rename "msmram" node to "sram" Link: https://lore.kernel.org/r/1590638489-12023-2-git-send-email-santosh.shilimkar@oracle.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-05-28  arm64/kernel: Fix return value when cpu_online() fails in __cpu_up()  (Nobuhiro Iwamatsu)
If boot_secondary() was successful but cpu_online() failed in __cpu_up(), -EIO used to be returned; since commit d22b115cbfbb7 ("arm64/kernel: Simplify __cpu_up() by bailing out early"), 0 is returned instead. As a result, bringup_wait_for_ap() causes the primary core to wait for a long time, which may cause boot failure. This commit sets the return code back to -EIO under the same conditions. Fixes: d22b115cbfbb ("arm64/kernel: Simplify __cpu_up() by bailing out early") Signed-off-by: Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp> Tested-by: Yuji Ishikawa <yuji2.ishikawa@toshiba.co.jp> Acked-by: Will Deacon <will@kernel.org> Cc: Gavin Shan <gshan@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200527233457.2531118-1-nobuhiro1.iwamatsu@toshiba.co.jp [catalin.marinas@arm.com: return -EIO at the end of the function] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
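A simplified sketch of the intended control flow (not the literal kernel code):

    static int __cpu_up(unsigned int cpu, struct task_struct *idle)
    {
            int ret = boot_secondary(cpu, idle);

            if (ret)
                    return ret;

            /* Give the secondary CPU some time to mark itself online. */
            wait_for_completion_timeout(&cpu_running, msecs_to_jiffies(1000));
            if (cpu_online(cpu))
                    return 0;

            /* boot_secondary() succeeded but the CPU never came online:
             * report -EIO at the end of the function instead of 0. */
            return -EIO;
    }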
2020-05-28  KVM: arm64: Don't use empty structures as CPU reset state  (Marc Zyngier)
Keeping an empty structure as the vcpu state initializer is slightly wasteful: we only want to set pstate, and zero everything else. Just do that. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-05-28  KVM: arm64: Move sysreg reset check to boot time  (Marc Zyngier)
Our sysreg reset check has become a bit silly, as it only checks whether a reset callback actually exists for a given sysreg entry, and applies the method if available. Doing the check at each vcpu reset is pretty dumb, as the tables never change. It is thus perfectly possible to do the same checks at boot time. This also allows us to introduce a sparse sys_regs[] array, something that will be required with ARMv8.4-NV. Signed-off-by: Marc Zyngier <maz@kernel.org>
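A minimal sketch of such a boot-time check, assuming a sys_reg_descs[] table whose entries carry a .reset callback (simplified):

    static int __init check_sysreg_tables(void)
    {
            unsigned int i;

            /* Complain once at boot instead of on every vcpu reset. */
            for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
                    if (!sys_reg_descs[i].reset)
                            kvm_err("sys_reg entry %u lacks a reset handler\n", i);
            }

            return 0;
    }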
2020-05-28  KVM: arm64: Add missing reset handlers for PMU emulation  (Marc Zyngier)
As we're about to become a bit more harsh when it comes to the lack of reset callbacks, let's add the missing PMU reset handlers. Note that these only cover *CLR registers that were always covered by their *SET counterpart, so there is no semantic change here. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-05-28  KVM: arm64: Refactor vcpu_{read,write}_sys_reg  (Marc Zyngier)
Extract the direct HW accessors for later reuse. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-05-28  KVM: arm64: vgic-v3: Take cpu_if pointer directly instead of vcpu  (Christoffer Dall)
If we move the used_lrs field to the version-specific cpu interface structure, the following functions only operate on the struct vgic_v3_cpu_if and not the full vcpu: __vgic_v3_save_state __vgic_v3_restore_state __vgic_v3_activate_traps __vgic_v3_deactivate_traps __vgic_v3_save_aprs __vgic_v3_restore_aprs This is going to be very useful for nested virt, so move the used_lrs field and change the prototypes and implementations of these functions to take the cpu_if parameter directly. No functional change. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
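Illustrative before/after of the prototype change (simplified):

    /* Before: the save/restore helpers took the whole vcpu. */
    void __vgic_v3_save_state(struct kvm_vcpu *vcpu);

    /* After: they operate only on the version-specific CPU interface,
     * which now also carries used_lrs. */
    void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);

    /* Callers pass the embedded interface directly, e.g.: */
    __vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3);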
2020-05-28  ARM: davinci: fix build failure without I2C  (Arnd Bergmann)
The two supplies are referenced outside of #ifdef CONFIG_I2C but defined inside, which breaks the build if that is not built-in: mach-davinci/board-dm644x-evm.c:861:21: error: use of undeclared identifier 'fixed_supplies_1_8v' ARRAY_SIZE(fixed_supplies_1_8v), 1800000); ^ mach-davinci/board-dm644x-evm.c:861:21: error: use of undeclared identifier 'fixed_supplies_1_8v' mach-davinci/board-dm644x-evm.c:861:21: error: use of undeclared identifier 'fixed_supplies_1_8v' mach-davinci/board-dm644x-evm.c:860:49: error: use of undeclared identifier 'fixed_supplies_1_8v' regulator_register_always_on(0, "fixed-dummy", fixed_supplies_1_8v, I don't know if the regulators are used anywhere without I2C, but always registering them seems to be the safe choice here. On a related note, it might be best to also deal with CONFIG_I2C=m across the file, unless this is going to be moved to DT and removed really soon anyway. Link: https://lore.kernel.org/r/20200527133746.643895-1-arnd@arndb.de Fixes: 5e06d19694a4 ("ARM: davinci: dm644x-evm: Add Fixed regulators needed for tlv320aic33") Signed-off-by: Arnd Bergmann <arnd@arndb.de>
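A condensed sketch of the pattern being fixed (board-file details elided):

    /* Before: the supply table was only defined when I2C support was built in... */
    #ifdef CONFIG_I2C
    static struct regulator_consumer_supply fixed_supplies_1_8v[] = { /* ... */ };
    #endif

    /* ...but referenced unconditionally, which breaks CONFIG_I2C=n builds: */
    regulator_register_always_on(0, "fixed-dummy", fixed_supplies_1_8v,
                                 ARRAY_SIZE(fixed_supplies_1_8v), 1800000);

    /* After: define and register the supplies outside the #ifdef so they are
     * always available. */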
2020-05-28Revert "ARM: vexpress: Don't select VEXPRESS_CONFIG"Anders Roxell
This reverts commit 848685c25da99d871bbd87369f3c3d6eead661ac. It is needed because I set 'depends on VEXPRESS_CONFIG=y' in 'config POWER_RESET_VEXPRESS' to get an allmodconfig build on arm64 to work, and the allmodconfig build on arm fails if this patch isn't reverted. Link: https://lore.kernel.org/r/20200527112608.3886105-4-anders.roxell@linaro.org Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-05-28  s390/pci: Log new handle in clp_disable_fh()  (Petr Tesarik)
After disabling a function, the original handle is logged instead of the disabled handle. Link: https://lkml.kernel.org/r/20200522183922.5253-1-ptesarik@suse.com Fixes: 17cdec960cf7 ("s390/pci: Recover handle in clp_set_pci_fn()") Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Petr Tesarik <ptesarik@suse.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-05-28  s390/cio, s390/qeth: cleanup PNSO CHSC  (Alexandra Winter)
CHSC3D (PNSO - perform network subchannel operation) is used for OC0 (Store-network-bridging-information) as well as for OC3 (Store-network-address-information). So common fields are renamed from *brinfo* to *pnso*. Also *_bridge_host_* is changed into *_addr_change_*, e.g. qeth_bridge_host_event to qeth_addr_change_event, for the same reasons. The keywords in the card traces are changed accordingly. Remove unused L3 types, as PNSO will only return Layer2 entries. Make PNSO CHSC implementation more consistent with existing API usage: Add new function ccw_device_pnso() to drivers/s390/cio/device_ops.c and the function declaration to arch/s390/include/asm/ccwdev.h, which takes a struct ccw_device * as parameter instead of schid and calls chsc_pnso(). PNSO CHSC has no strict relationship to qdio. So move the calling function from qdio to qeth_l2 and move the necessary structures to a new file arch/s390/include/asm/chsc.h. Do response code evaluation only in chsc_error_from_response() and use return code in all other places. qeth_anset_makerc() was meant to evaluate the PNSO response code, but never did, because pnso_rc was already non-zero. Indentation was corrected in some places. Signed-off-by: Alexandra Winter <wintera@linux.ibm.com> Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com> Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com> Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-05-28  s390: remove critical section cleanup from entry.S  (Sven Schnelle)
The current code is rather complex and has caused a lot of subtle and hard to debug bugs in the past. Simplify the code by calling the system_call handler with interrupts disabled, saving the machine state, and re-enabling interrupts later. This requires significant changes to the machine check handling code as well. When the machine check interrupt arrives while in kernel mode, the new code signals pending machine checks with a SIGP external call. When userspace was interrupted, the handler switches to the kernel stack and directly executes s390_handle_mcck(). Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-05-28  s390: add machine check SIGP  (Sven Schnelle)
This will be used with the upcoming entry.S changes to signal that there's a machine check pending that cannot be handled in the Machine check handler itself. Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2020-05-28  m68k: coldfire/clk.c: move m5441x specific code  (Angelo Dureghello)
Move m5441x-specific clk-related code to a more appropriate location, since it breaks compilation for other targets. Signed-off-by: Angelo Dureghello <angelo.dureghello@timesys.com> Link: https://lore.kernel.org/r/20200525102324.2723438-1-angelo.dureghello@timesys.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2020-05-28  m68k: mcf5441x: add support for esdhc mmc controller  (Angelo Dureghello)
Add support for the sdhci-esdhc mmc controller. Signed-off-by: Angelo Dureghello <angelo.dureghello@timesys.com> Acked-by: Greg Ungerer <gerg@linux-m68k.org> Link: https://lore.kernel.org/r/20200518191742.1251440-1-angelo.dureghello@timesys.com Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2020-05-28  Merge branch 'core/rcu' into sched/core, to pick up dependency  (Ingo Molnar)
We are going to rely on the loosening of RCU callback semantics, introduced by this commit: 806f04e9fd2c: ("rcu: Allow for smp_call_function() running callbacks from idle") Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-05-28  MIPS: Loongson64: Define PCI_IOBASE  (Jiaxun Yang)
PCI_IOBASE is used to create VM maps for PCI I/O ports; it is required by generic PCI drivers to make memory-mapped I/O ranges work. To deal with legacy drivers that have a fixed I/O port range, we reserve 0x10000 of the PCI_IOBASE space, which should be enough for the i8259 and i8042. Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
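A hedged sketch of what such a definition can look like; the macro names and values below are illustrative, not the actual Loongson64 headers:

    /* Virtual base where generic PCI code maps PCI port I/O. */
    #define PCI_IOBASE      ((void __iomem *)PCI_PORT_SPACE_BASE)

    /* Keep the first 0x10000 ports for legacy fixed-port devices
     * (i8259, i8042, ...); generic host drivers allocate above this. */
    #define ISA_IOSIZE      0x10000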
2020-05-28  MIPS: CPU_LOONGSON2EF need software to maintain cache consistency  (Lichao Liu)
CPU_LOONGSON2EF needs software to maintain cache consistency, so modify the 'cpu_needs_post_dma_flush' function to return true when the CPU type is CPU_LOONGSON2EF. Cc: stable@vger.kernel.org Signed-off-by: Lichao Liu <liulichao@loongson.cn> Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
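A simplified sketch of the described change (the full CPU-type list in the real function may differ):

    static inline bool cpu_needs_post_dma_flush(void)
    {
            switch (boot_cpu_type()) {
            case CPU_LOONGSON2EF:   /* cache coherency maintained by software */
            case CPU_R10000:
            case CPU_R12000:
                    return true;
            default:
                    return false;
            }
    }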
2020-05-28  MIPS: DTS: Fix build errors used with various configs  (Tiezhu Yang)
If CONFIG_MIPS_MALTA is not set but CONFIG_LEGACY_BOARD_SEAD3 is set, the subdir arch/mips/boot/dts/mti will not be built, so the sead3.dts which depends on CONFIG_LEGACY_BOARD_SEAD3 in this subdir is also not built, and then there exists the following build error, fix it. LD .tmp_vmlinux.kallsyms1 arch/mips/generic/board-sead3.o:(.mips.machines.init+0x4): undefined reference to `__dtb_sead3_begin' Makefile:1106: recipe for target 'vmlinux' failed make: *** [vmlinux] Error 1 Additionally, add CONFIG_FIT_IMAGE_FDT_BOSTON check for subdir img to fix the following build error when CONFIG_MACH_PISTACHIO is not set but CONFIG_FIT_IMAGE_FDT_BOSTON is set. FATAL ERROR: Couldn't open "boot/dts/img/boston.dtb": No such file or directory Reported-by: kbuild test robot <lkp@intel.com> Reported-by: Guenter Roeck <linux@roeck-us.net> Fixes: 41528ba6afe6 ("MIPS: DTS: Only build subdir of current platform") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-05-28  perf/x86/rapl: Add AMD Fam17h RAPL support  (Stephane Eranian)
This patch enables AMD Fam17h RAPL support for the Package level metric. The support is as per AMD Fam17h Model31h (Zen2) and model 00-ffh (Zen1) PPR. The same output is available via the energy-pkg pseudo event: $ perf stat -a -I 1000 --per-socket -e power/energy-pkg/ Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200527224659.206129-6-eranian@google.com
2020-05-28  perf/x86/rapl: Make perf_probe_msr() more robust and flexible  (Stephane Eranian)
This patch modifies perf_probe_msr() by allowing passing of struct perf_msr array where some entries are not populated, i.e., they have either an msr address of 0 or no attribute_group pointer. This helps with certain call paths, e.g., RAPL. In case the grp is NULL, the default sysfs visibility rule applies which is to make the group visible. Without the patch, you would get a kernel crash with a NULL group. Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200527224659.206129-5-eranian@google.com
2020-05-28  perf/x86/rapl: Flip logic on default events visibility  (Stephane Eranian)
This patch modifies the default visibility of the attribute_group for each RAPL event. By default, if the grp.is_visible field is NULL, sysfs considers that it must display the attribute group. If the field is not NULL (callback function), then the return value of the callback determines the visibility (0 = not visible). The RAPL attribute groups had the field set to NULL, meaning that unless they failed the probing from perf_msr_probe(), they would be visible. We want to avoid having to specify attribute groups that are not supported by the HW in the rapl_msrs[] array, as they don't have an MSR address to begin with. Therefore, we initialize the visible field of all RAPL attribute groups to a callback that returns 0. If the RAPL MSR goes through probing and succeeds, the is_visible field will be set back to NULL (visible). If the probing fails, the field is set to a callback that returns 0 (not visible). Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200527224659.206129-4-eranian@google.com
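A small sketch of the flipped default (simplified, group name illustrative):

    /* Returning 0 from is_visible() hides the attribute group in sysfs. */
    static umode_t rapl_not_visible(struct kobject *kobj,
                                    struct attribute *attr, int i)
    {
            return 0;
    }

    /* Default every RAPL event group to hidden... */
    rapl_events_pkg_group.is_visible = rapl_not_visible;

    /* ...and set is_visible back to NULL (visible) only after the
     * corresponding MSR has been probed successfully. */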
2020-05-28  perf/x86/rapl: Refactor to share the RAPL code between Intel and AMD CPUs  (Stephane Eranian)
This patch modifies the rapl_model struct to include architecture specific knowledge in this previously Intel specific structure, and in particular it adds the MSR for POWER_UNIT and the rapl_msrs array. No functional changes. Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200527224659.206129-3-eranian@google.com
2020-05-28  perf/x86/rapl: Move RAPL support to common x86 code  (Stephane Eranian)
To prepare for support of both Intel and AMD RAPL. As per the AMD PPR, Fam17h supports Package RAPL counters to monitor power usage. The RAPL counter operates as with Intel RAPL, and as such it is beneficial to share the code. No change in functionality. Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20200527224659.206129-2-eranian@google.com
2020-05-28  Merge tag 'v5.7-rc7' into perf/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-05-27  ARM: dts: keystone: Rename "msmram" node to "sram"  (Krzysztof Kozlowski)
The device node name should reflect generic class of a device so rename the "msmram" node and its children to "sram". This will be also in sync with upcoming DT schema. No functional change. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
2020-05-28  KVM: PPC: Book3S HV: Close race with page faults around memslot flushes  (Paul Mackerras)
There is a potential race condition between hypervisor page faults and flushing a memslot. It is possible for a page fault to read the memslot before a memslot is updated and then write a PTE to the partition-scoped page tables after kvmppc_radix_flush_memslot has completed. (Note that this race has never been explicitly observed.) To close this race, it is sufficient to increment the MMU sequence number while the kvm->mmu_lock is held. That will cause mmu_notifier_retry() to return true, and the page fault will then return to the guest without inserting a PTE. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
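A minimal sketch of the fix described above (simplified):

    spin_lock(&kvm->mmu_lock);
    /* Bump the MMU notifier sequence so concurrent page faults retry. */
    kvm->mmu_notifier_seq++;
    spin_unlock(&kvm->mmu_lock);

    /* mmu_notifier_retry() now returns true in the racing fault handler,
     * which returns to the guest without inserting a stale PTE. */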
2020-05-27  clk: mmp2: Add support for power islands  (Lubomir Rintel)
Apart from the clocks and resets, the PMU hardware also controls power to peripherals that are on separate power islands. On MMP2, that's the GC860 GPU and the SSPA audio interface, while on MMP3 also the camera interface is on a separate island, along with the pair of GC2000 and GC300 GPUs and the SSPA. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Link: https://lkml.kernel.org/r/20200519224151.2074597-12-lkundrak@v3.sk Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2020-05-28  KVM: PPC: Book3S HV: Remove user-triggerable WARN_ON  (Paul Mackerras)
Although in general we do not expect valid PTEs to be found in kvmppc_create_pte when we are inserting a large page mapping, there is one situation where this can occur. That is when dirty page logging is turned off for a memslot while the VM is running. Because the new memslots are installed before the old memslot is flushed in kvmppc_core_commit_memory_region_hv(), there is a window where a hypervisor page fault can try to install a 2MB (or 1GB) page where there are already small page mappings which were installed while dirty page logging was enabled and which have not yet been flushed. Since we have a situation where valid PTEs can legitimately be found by kvmppc_unmap_free_pte, and which can be triggered by userspace, just remove the WARN_ON_ONCE, since it is undesirable to have userspace able to trigger a kernel warning. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-05-28  csky: Fixup CONFIG_DEBUG_RSEQ  (Guo Ren)
Putting the rseq_syscall check point in the prologue of the syscall path breaks a0 ... a7. This causes a system call bug when DEBUG_RSEQ is enabled. So move it to the epilogue of the syscall path, but before syscall_trace. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-05-28  csky: Coding convention in entry.S  (Guo Ren)
There is no fixup or new feature in the patch; it is only a cleanup: - Remove unnecessary registers (r11, r12); just use r9, r10 and the syscallid regs as temporaries. - Add _TIF_SYSCALL_WORK and _TIF_WORK_MASK to gather macros. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-05-28  csky: Fixup abiv2 syscall_trace break a4 & a5  (Guo Ren)
The current implementation could destroy a4 & a5 under strace, so we need to get them from pt_regs via SAVE_ALL. Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
2020-05-28  csky: Fixup CONFIG_PREEMPT panic  (Guo Ren)
log: [    0.13373200] Calibrating delay loop... [    0.14077600] ------------[ cut here ]------------ [    0.14116700] WARNING: CPU: 0 PID: 0 at kernel/sched/core.c:3790 preempt_count_add+0xc8/0x11c [    0.14348000] DEBUG_LOCKS_WARN_ON((preempt_count() < 0))Modules linked in: [    0.14395100] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0 #7 [    0.14410800] [    0.14427400] Call Trace: [    0.14450700] [<807cd226>] dump_stack+0x8a/0xe4 [    0.14473500] [<80072792>] __warn+0x10e/0x15c [    0.14495900] [<80072852>] warn_slowpath_fmt+0x72/0xc0 [    0.14518600] [<800a5240>] preempt_count_add+0xc8/0x11c [    0.14544900] [<807ef918>] _raw_spin_lock+0x28/0x68 [    0.14572600] [<800e0eb8>] vprintk_emit+0x84/0x2d8 [    0.14599000] [<800e113a>] vprintk_default+0x2e/0x44 [    0.14625100] [<800e2042>] vprintk_func+0x12a/0x1d0 [    0.14651300] [<800e1804>] printk+0x30/0x48 [    0.14677600] [<80008052>] lockdep_init+0x12/0xb0 [    0.14703800] [<80002080>] start_kernel+0x558/0x7f8 [    0.14730000] [<800052bc>] csky_start+0x58/0x94 [    0.14756600] irq event stamp: 34 [    0.14775100] hardirqs last  enabled at (33): [<80067370>] ret_from_exception+0x2c/0x72 [    0.14793700] hardirqs last disabled at (34): [<800e0eae>] vprintk_emit+0x7a/0x2d8 [    0.14812300] softirqs last  enabled at (32): [<800655b0>] __do_softirq+0x578/0x6d8 [    0.14830800] softirqs last disabled at (25): [<8007b3b8>] irq_exit+0xec/0x128 The preempt_count of reg could be destroyed after csky_do_IRQ without reload from memory. After reference to other architectures (arm64, riscv), we move preempt entry into ret_from_exception and disable irq at the beginning of ret_from_exception instead of RESTORE_ALL. Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Reported-by: Lu Baoquan <lu.baoquan@intellif.com>
2020-05-27  x86/PCI: Drop unused xen_register_pirq() gsi_override parameter  (Wei Liu)
All callers of xen_register_pirq() pass -1 (no override) for the gsi_override parameter. Remove it and related code. Link: https://lore.kernel.org/r/20200428153640.76476-1-wei.liu@kernel.org Signed-off-by: Wei Liu <wei.liu@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
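Illustrative before/after of the simplification (signature approximate):

    /* Before: every caller passed -1, i.e. "no override". */
    static int xen_register_pirq(u32 gsi, int gsi_override, int triggering,
                                 bool set_pirq);

    /* After: the unused parameter and the code handling it are gone. */
    static int xen_register_pirq(u32 gsi, int triggering, bool set_pirq);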
2020-05-27  copy_xstate_to_kernel(): don't leave parts of destination uninitialized  (Al Viro)
copy the corresponding pieces of init_fpstate into the gaps instead. Cc: stable@kernel.org Tested-by: Alexander Potapenko <glider@google.com> Acked-by: Borislav Petkov <bp@suse.de> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-05-27  KVM: x86: track manually whether an event has been injected  (Paolo Bonzini)
Instead of calling kvm_event_needs_reinjection, track its future return value in a variable. This will be useful in the next patch. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
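A rough sketch of the approach (the injection helper name is illustrative):

    /* Snapshot the answer once instead of recomputing it later. */
    bool can_inject = !kvm_event_needs_reinjection(vcpu);

    if (vcpu->arch.exception.pending && can_inject) {
            inject_pending_exception(vcpu);   /* illustrative helper */
            can_inject = false;               /* remember we injected something */
    }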
2020-05-27  KVM: nSVM: Preserve registers modifications done before nested_svm_vmexit()  (Vitaly Kuznetsov)
L2 guest hang is observed after 'exit_required' was dropped and nSVM switched to check_nested_events() completely. The hang is a busy loop when e.g. KVM is emulating an instruction (e.g. L2 is accessing MMIO space and we drop to userspace). After nested_svm_vmexit() and when L1 is doing VMRUN nested guest's RIP is not advanced so KVM goes into emulating the same instruction which caused nested_svm_vmexit() and the loop continues. nested_svm_vmexit() is not new, however, with check_nested_events() we're now calling it later than before. In case by that time KVM has modified register state we may pick stale values from VMCB when trying to save nested guest state to nested VMCB. nVMX code handles this case correctly: sync_vmcs02_to_vmcs12() called from nested_vmx_vmexit() does e.g 'vmcs12->guest_rip = kvm_rip_read(vcpu)' and this ensures KVM-made modifications are preserved. Do the same for nSVM. Generally, nested_vmx_vmexit()/nested_svm_vmexit() need to pick up all nested guest state modifications done by KVM after vmexit. It would be great to find a way to express this in a way which would not require to manually track these changes, e.g. nested_{vmcb,vmcs}_get_field(). Co-debugged-with: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20200527090102.220647-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
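A hedged sketch of the nSVM side of the fix, mirroring what sync_vmcs02_to_vmcs12() already does for nVMX (simplified):

    /* In nested_svm_vmexit(): read registers through KVM's accessors so that
     * modifications made after the exit are captured, instead of copying
     * possibly stale values straight from the VMCB. */
    nested_vmcb->save.rip = kvm_rip_read(vcpu);
    nested_vmcb->save.rsp = kvm_rsp_read(vcpu);
    nested_vmcb->save.rax = kvm_rax_read(vcpu);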
2020-05-27  KVM: x86: Initialize tdp_level during vCPU creation  (Sean Christopherson)
Initialize vcpu->arch.tdp_level during vCPU creation to avoid consuming garbage if userspace calls KVM_RUN without first calling KVM_SET_CPUID. Fixes: e93fd3b3e89e9 ("KVM: x86/mmu: Capture TDP level when updating CPUID") Reported-by: syzbot+904752567107eefb728c@syzkaller.appspotmail.com Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200527085400.23759-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
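A minimal sketch of the idea (placement and accessor shown here are illustrative):

    /* During vCPU creation, before userspace has had a chance to call
     * KVM_SET_CPUID, give tdp_level a sane value instead of garbage. */
    vcpu->arch.tdp_level = kvm_x86_ops.get_tdp_level(vcpu);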