path: root/arch
2020-03-25  powerpc/cell: Use fallthrough;  (Joe Perches)
Convert the various uses of fallthrough comments to fallthrough; Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/03073a9a269010ca439e9e658629c44602b0cc9f.1583896348.git.joe@perches.com
2020-03-25  powerpc/sstep: Fix DS operand in ld encoding to appropriate value  (Balamuruhan S)
The ld instruction should have its 14-bit immediate field (DS) concatenated with 0b00 on the right; encode it accordingly. Introduce the macro `IMM_DS()` to encode DS-form instructions with a 14-bit immediate field. Fixes: 4ceae137bdab ("powerpc: emulate_step() tests for load/store instructions") Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200311102405.392263-1-bala24@linux.ibm.com
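A minimal sketch of the encoding idea, with macro shapes assumed for illustration (the kernel's test harness uses its own instruction-building macros): masking the displacement keeps the DS field's two implied zero bits in place.

    #include <stdint.h>

    /* D-form carries a full 16-bit immediate; DS-form implies the low
     * two displacement bits are zero, so only bits 2..15 are encoded. */
    #define IMM_L(i)  ((uintptr_t)(i) & 0xffff)  /* D-form immediate  */
    #define IMM_DS(i) ((uintptr_t)(i) & 0xfffc)  /* DS-form immediate */

    /* Example: building an ld (primary opcode 58) instruction word. */
    #define TEST_LD(r, base, i) \
            (0xe8000000U | ((r) << 21) | ((base) << 16) | IMM_DS(i))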
2020-03-25  powerpc/pseries: Fix of_read_drc_info_cell() to point at next record  (Tyrel Datwyler)
The expectation is that when calling of_read_drc_info_cell() repeatedly to parse multiple drc-info records, the in/out curval parameter points at the start of the next record on return. However, the current behavior leaves curval pointing at the final value of the record just parsed. The result is that if the ibm,drc-info property contains multiple records, the parsed drc_type of any record after the first has the power_domain value of the previous record prepended to the type string, e.g. the following 0xffffffff observed prepended to PHB:
drc-info: type: \xff\xff\xff\xffPHB, prefix: PHB , index_start: 0x20000001
drc-info: suffix_start: 1, sequential_elems: 3072, sequential_inc: 1
drc-info: power-domain: 0xffffffff, last_index: 0x20000c00
In practice PHBs are the only connector type in the ibm,drc-info property that has multiple records, so this breaks PHB hotplug, but by chance not PCI, CPU, slot, or memory, which happen to only ever be single records. Fix by incrementing curval past the power_domain value so that it points at the drc_type string of the next record. Fixes: e83636ac3334 ("pseries/drc-info: Search DRC properties for CPU indexes") Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Acked-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200307024547.5748-1-tyreld@linux.ibm.com
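A sketch of the fix under an assumed record layout (the real code reads big-endian cells via OF helpers); the point is the final increment:

    #include <stdint.h>

    /* Hypothetical parse of one drc-info record, for illustration. */
    static const uint32_t *read_drc_info_cell(const uint32_t *curval)
    {
            /* ... parse drc_type string, prefix, index_start, etc. ... */
            uint32_t power_domain = *curval; /* last value of the record */
            (void)power_domain;

            curval++; /* the fix: step past power_domain so the caller */
                      /* sees the drc_type string of the *next* record */
            return curval;
    }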
2020-03-24  x86/cpu: Cleanup the now unused CPU match macros  (Thomas Gleixner)
No more users. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131510.900226233@linutronix.de
2020-03-24  crypto: Convert to new CPU match macros  (Thomas Gleixner)
The new macro set has a consistent namespace and uses C99 initializers instead of the grufty C89 ones. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131510.700250889@linutronix.de
2020-03-24  x86/platform: Convert to new CPU match macros  (Thomas Gleixner)
The new macro set has a consistent namespace and uses C99 initializers instead of the grufty C89 ones. Get rid of the local macro wrappers for consistency. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131509.359448901@linutronix.de
2020-03-24  x86/kernel: Convert to new CPU match macros  (Thomas Gleixner)
The new macro set has a consistent namespace and uses C99 initializers instead of the grufty C89 ones. Get rid of the local macro wrappers for consistency. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131509.250559388@linutronix.de
2020-03-24  x86/kvm: Convert to new CPU match macros  (Thomas Gleixner)
The new macro set has a consistent namespace and uses C99 initializers instead of the grufty C89 ones. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131509.136884777@linutronix.de
2020-03-24  x86/perf/events: Convert to new CPU match macros  (Thomas Gleixner)
The new macro set has a consistent namespace and uses C99 initializers instead of the grufty C89 ones. Get rid of the local macro wrappers for consistency. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131509.029267418@linutronix.de
2020-03-24  x86/cpu/bugs: Convert to new matching macros  (Thomas Gleixner)
The new macro set has a consistent namespace and uses C99 initializers instead of the grufty C89 ones. The local wrappers have to stay as they are tailored to tame the hardware vulnerability mess. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131508.934926587@linutronix.de
2020-03-24  x86/cpu: Add consistent CPU match macros  (Thomas Gleixner)
Finding all places which build x86_cpu_id match tables is tedious, and the logic is hidden in lots of differently named macro wrappers. Most of these initializer macros use plain C89 initializers, which rely on the ordering of the struct members, so new members could only be added at the end of the struct. That's ugly as hell, and C99 initializers are really the right thing to use. Provide a set of macros which:
- have a proper naming scheme, starting with X86_MATCH_
- use C99 initializers
The provided macros are all subsets of the base macro X86_MATCH_VENDOR_FAM_MODEL_FEATURE(), which allows supplying all possible selection criteria: vendor, family, model, feature. The other macros shorten this to avoid typing all arguments when they are not needed and would require one of the _ANY constants. They have been created due to the requirements of the existing usage sites. Also add a few model constants for Centaur CPUs and QUARK. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131508.826011988@linutronix.de
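For illustration, the difference between the two initializer styles, using a stand-in struct rather than the real x86_cpu_id:

    /* Stand-in for the match-table entry type, illustration only. */
    struct cpu_match { unsigned vendor, family, model, feature; };

    /* C89 positional initializer: silently breaks if members are
     * added or reordered, since values bind by position. */
    static const struct cpu_match old_style = { 0, 6, 0x7e, 0 };

    /* C99 designated initializer: binds by name, so new members can
     * be added anywhere without touching existing tables. */
    static const struct cpu_match new_style = {
            .vendor  = 0,
            .family  = 6,
            .model   = 0x7e,   /* e.g. an Icelake model number */
            .feature = 0,
    };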
2020-03-24  x86/devicetable: Move x86 specific macro out of generic code  (Thomas Gleixner)
There is no reason that this gunk is in a generic header file. The wildcard defines need to stay as they are required by file2alias. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lkml.kernel.org/r/20200320131508.736205164@linutronix.de
2020-03-24  Merge tag 'kvm-arm-removal' into kvmarm-master/next  (Marc Zyngier)
Goodbye KVM/arm Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-03-24  arm64: Introduce get_cpu_ops() helper function  (Gavin Shan)
This introduces get_cpu_ops() to return the CPU operations for a given CPU index. For now, it simply returns @cpu_ops[cpu] as before. Also, the helper function __cpu_try_die() is introduced to be shared by cpu_die() and ipi_cpu_crash_stop(), so this shouldn't introduce any functional changes. Signed-off-by: Gavin Shan <gshan@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com>
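A sketch of the accessor with stand-in types (the real struct cpu_operations lives in arch/arm64); the value is having a single lookup point to change later:

    #define NR_CPUS 256

    struct cpu_operations;                          /* opaque stand-in */
    static const struct cpu_operations *cpu_ops[NR_CPUS];

    /* For now this just wraps the array access, exactly as described
     * above: no functional change, only an indirection point. */
    static inline const struct cpu_operations *get_cpu_ops(int cpu)
    {
            return cpu_ops[cpu];
    }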
2020-03-24  arm64: Rename cpu_read_ops() to init_cpu_ops()  (Gavin Shan)
This renames cpu_read_ops() to init_cpu_ops(), as the function is only called in the initialization phase. Also, we will introduce get_cpu_ops() in subsequent patches to retrieve the CPU operations for a given CPU index; cpu_read_ops() and get_cpu_ops() would be difficult to distinguish by name. Signed-off-by: Gavin Shan <gshan@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-24  arm64: Declare ACPI parking protocol CPU operation if needed  (Gavin Shan)
Obviously we needn't declare the corresponding CPU operation when CONFIG_ARM64_ACPI_PARKING_PROTOCOL is disabled, even though doing so doesn't cause any compile warnings. Signed-off-by: Gavin Shan <gshan@redhat.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-24  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fix from Ingo Molnar: "A build fix with certain Kconfig combinations" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/ioremap: Fix CONFIG_EFI=n build
2020-03-24  MIPS: Alchemy: remove no longer used au1xxx_ide.h header  (Bartlomiej Zolnierkiewicz)
Since the only user of this header (au1xxx-ide IDE host driver) is now gone it can also be removed. Acked-by: Paul Burton <paulburton@kernel.org> Acked-by: Manuel Lauss <manuel.lauss@gmail.com> Acked-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-03-24  KVM: arm64: GICv4.1: Allow non-trapping WFI when using HW SGIs  (Marc Zyngier)
Just like for VLPIs, it is beneficial to avoid trapping on WFI when the vcpu is using the GICv4.1 SGIs. Add such a check to vcpu_clear_wfx_traps(). Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200304203330.4967-23-maz@kernel.org
2020-03-24  KVM: arm64: GICv4.1: Reload VLPI configuration on distributor enable/disable  (Marc Zyngier)
Each time a Group-enable bit gets flipped, the state of these bits needs to be forwarded to the hardware. This is a pretty heavy-handed operation, requiring all vcpus to reload their GICv4 configuration; it is thus implemented as a new request type. These enable bits are programmed into the HW by setting the VGrp{0,1}En fields of GICR_VPENDBASER when the vPEs are made resident again. Of course, we only support Group-1 for now... Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20200304203330.4967-22-maz@kernel.org
2020-03-24  arm64: move kimage_vaddr to .rodata  (Remi Denis-Courmont)
This datum is not referenced from .idmap.text: it does not need to be mapped in the idmap. Let's move it to .rodata, as it is never written to after early boot of the primary CPU. (Maybe .data.ro_after_init would be cleaner, though?) Signed-off-by: Rémi Denis-Courmont <remi@remlab.net> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-24  arm64: use mov_q instead of literal ldr  (Remi Denis-Courmont)
In practice, this requires only 2 instructions, or even only 1 for the idmap_pg_dir size (with 4 or 64 KiB pages). Only the MAIR values needed more than 2 instructions, and they were already converted to mov_q by 95b3f74bec203804658e17f86fe20755bb8abcb9. Signed-off-by: Remi Denis-Courmont <remi.denis.courmont@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com>
2020-03-24  ARM: dts: tango4: Make /serial compatible with ns16550a  (Lubomir Rintel)
ralink,rt2880-uart is compatible with ns16550a and all other instances of RT2880 UART nodes include it in the compatible property. Add it also here, to make the binding schema simpler. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Acked-by: Mans Rullgard <mans@mansr.com> Link: https://lore.kernel.org/r/20200320174107.29406-8-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  ARM: dts: mmp*: Make the serial ports compatible with xscale-uart  (Lubomir Rintel)
The XScale serial port driver is perfectly capable of supporting this hardware. A separate compatible string is probably a historical mess. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200320174107.29406-7-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  ARM: dts: mmp*: Fix serial port names  (Lubomir Rintel)
A preferred node name for serial ports is "serial": mmp2-olpc-xo-1-75.dt.yaml: uart@d4030000: $nodename:0: 'uart@d4030000' does not match '^serial(@[0-9a-f,]+)*$' ... Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200320174107.29406-6-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  ARM: dts: mmp2-brownstone: Don't redeclare phandle references  (Lubomir Rintel)
Extend the nodes by their phandle references instead of recreating the tree and declaring references of the same names. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200320174107.29406-5-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  ARM: dts: pxa*: Make the serial ports compatible with xscale-uart  (Lubomir Rintel)
Some drivers that claim to support mrvl,mmp-uart default to a reg-shift of two, some don't; be explicit to be on the safe side. With that in place, an XScale serial port driver is perfectly capable of supporting the MMP serial port, so add a compatible string. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200320174107.29406-4-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  ARM: dts: pxa*: Fix serial port names  (Lubomir Rintel)
There's a preferred node name for serial ports, and it's not "uart": pxa910-dkb.dt.yaml: uart@d4017000: $nodename:0: 'uart@d4017000' does not match '^serial(@[0-9a-f,]+)*$' ... Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200320174107.29406-3-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  ARM: dts: pxa*: Don't redeclare phandle references  (Lubomir Rintel)
Extend the nodes by their phandle references instead of recreating the tree and declaring references of the same names. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20200320174107.29406-2-lkundrak@v3.sk Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-03-24  KVM: LAPIC: Also cancel preemption timer when disarm LAPIC timer  (Wanpeng Li)
The timer is disarmed when switching between TSC-deadline and other modes, and we should set everything to the disarmed state. However, the LAPIC timer can be emulated by the preemption timer, and it remains armed if vmx->hv_deadline_timer is not -1. This patch therefore also cancels the preemption timer when disarming the LAPIC timer. Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1585031530-19823-1-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
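A sketch of the disarm logic described above, with stand-in state and assumed names (not the KVM code):

    #include <stdint.h>

    struct lapic_timer {
            uint64_t tscdeadline;       /* software timer state       */
            int64_t  hv_deadline_timer; /* preemption-timer emulation, */
                                        /* -1 when not armed          */
    };

    /* Disarming must cover both representations: zero the software
     * state and cancel the hardware emulation, otherwise a stale
     * hv_deadline_timer != -1 keeps the timer effectively armed. */
    static void disarm_lapic_timer(struct lapic_timer *t)
    {
            t->tscdeadline = 0;
            t->hv_deadline_timer = -1;  /* the added cancellation */
    }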
2020-03-24  arm: Remove the ability to set HYP vectors outside of the decompressor  (Marc Zyngier)
Although we have to bounce between HYP and SVC to decompress and relocate the kernel, we don't need to be able to use it in the kernel itself. So let's drop the functionality. Since the vectors are never changed, there is no need to reset them either, and nobody calls that stub anyway. The last function (SOFT_RESTART) is still present in order to support kexec. Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-03-24  arm: Remove GICv3 vgic compatibility macros  (Marc Zyngier)
We used to use a set of macros to provide vgic-v3 support on 32bit without duplicating everything. We don't need them anymore, so drop them. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2020-03-24  arm: Remove HYP/Stage-2 page-table support  (Marc Zyngier)
Remove all traces of Stage-2 and HYP page table support. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2020-03-24  arm: Remove 32bit KVM host support  (Marc Zyngier)
That's it. Remove all references to KVM itself, and document that although it is no more, the ABI between SVC and HYP still exists. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2020-03-24  arm: Remove KVM from config files  (Marc Zyngier)
Only one platform is building KVM by default. How crazy! Remove it whilst nobody is watching. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2020-03-24  arm: Unplug KVM from the build system  (Marc Zyngier)
As we're about to drop KVM/arm on the floor, carefully unplug it from the build system. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Olof Johansson <olof@lixom.net> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Vladimir Murzin <vladimir.murzin@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2020-03-24  x86/vmware: Use bool type for vmw_sched_clock  (Alexey Makhalov)
To be aligned with other bool variables. Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-6-amakhalov@vmware.com
2020-03-24  x86/vmware: Enable steal time accounting  (Alexey Makhalov)
Set paravirt_steal_rq_enabled if a steal clock is present. paravirt_steal_rq_enabled is used in sched/core.c to adjust task progress by offsetting stolen time. Use the 'no-steal-acc' off switch (sharing the same name with KVM) to disable steal time accounting. Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-5-amakhalov@vmware.com
2020-03-24  x86/vmware: Add steal time clock support for VMware guests  (Alexey Makhalov)
Steal time is the amount of CPU time needed by a guest virtual machine that is not provided by the host. Steal time occurs when the host allocates this CPU time elsewhere, for example, to another guest. Steal time can be enabled by adding the VM configuration option stealclock.enable = "TRUE". It is supported by VMs that run hardware version 13 or newer. Introduce the VMware steal time infrastructure. The high level code (such as enabling, disabling and hot-plug routines) was derived from KVM. [ Tomer: use READ_ONCE macros and 32bit guests support. ] [ bp: Massage. ] Co-developed-by: Tomer Zeltzer <tomerr90@gmail.com> Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Tomer Zeltzer <tomerr90@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-4-amakhalov@vmware.com
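A sketch of a version-guarded steal-clock read, modeled on the KVM scheme the commit says this was derived from; the field layout here is assumed for illustration:

    #include <stdint.h>

    /* Assumed shared-page layout: the host bumps `version` to odd
     * before updating `clock` and back to even afterwards. */
    struct steal_time {
            volatile uint64_t clock;    /* stolen time, in ns */
            volatile uint64_t version;  /* even = stable      */
    };

    /* Retry until a stable, even version is observed on both sides
     * of the clock read; the usual seqcount-style guest read. */
    static uint64_t steal_clock_read(const struct steal_time *st)
    {
            uint64_t clock, v;

            do {
                    v = st->version;
                    clock = st->clock;
            } while ((v & 1) || v != st->version);

            return clock;
    }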
2020-03-24  x86/vmware: Remove vmware_sched_clock_setup()  (Alexey Makhalov)
Move the cyc2ns setup logic to a separate function. This separation allows using the cyc2ns mult/shift pair not only for sched_clock but also for other clocks such as steal_clock. Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-3-amakhalov@vmware.com
2020-03-24  x86/vmware: Make vmware_select_hypercall() __init  (Alexey Makhalov)
vmware_select_hypercall() is used only by the __init functions, and should be annotated with __init as well. Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-2-amakhalov@vmware.com
2020-03-24  KVM: PPC: Book3S HV: H_SVM_INIT_START must call UV_RETURN  (Laurent Dufour)
When the call to UV_REGISTER_MEM_SLOT fails, for instance because there is not enough free secure memory, the Hypervisor (HV) has to call UV_RETURN to report the error to the Ultravisor (UV). Then the UV will call H_SVM_INIT_ABORT to abort the securing phase and go back to the calling VM. If kvm->arch.secure_guest is not set, rfid is called in the return path, but there is no valid context to get back to in the SVM since the Hcall has been routed by the Ultravisor. Move the setting of kvm->arch.secure_guest earlier in kvmppc_h_svm_init_start() so that in the return path UV_RETURN is called instead of rfid. Cc: Bharata B Rao <bharata@linux.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Reviewed-by: Ram Pai <linuxram@us.ibm.com> Tested-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
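The reordering, sketched with stand-in types and a hypothetical helper for the memslot registration loop (constants are illustrative stand-ins for the kernel's):

    #define KVMPPC_SECURE_INIT_START 0x1   /* flag value assumed */
    #define H_SUCCESS   0
    #define H_PARAMETER (-4)

    struct kvm { struct { unsigned long secure_guest; } arch; };

    /* Hypothetical helper standing in for the UV_REGISTER_MEM_SLOT
     * loop; returns nonzero on failure. */
    static int register_memslots_with_uv(struct kvm *kvm)
    {
            (void)kvm;
            return 0;   /* stub */
    }

    /* The fix: flag the transition *before* anything can fail, so a
     * failing registration exits via UV_RETURN instead of rfid. */
    static long kvmppc_h_svm_init_start(struct kvm *kvm)
    {
            kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_START;

            if (register_memslots_with_uv(kvm))
                    return H_PARAMETER;

            return H_SUCCESS;
    }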
2020-03-24  KVM: PPC: Book3S HV: Check caller of H_SVM_* Hcalls  (Laurent Dufour)
The Hcalls named H_SVM_* are reserved to the Ultravisor. However, nothing prevents a malicious VM or SVM from calling them. This could lead to weird results and should be filtered out. Checking the Secure bit of the calling MSR ensures that the call is coming from either the Ultravisor or an SVM. But any system call made from an SVM goes through the Ultravisor, and the Ultravisor should filter out such malicious calls. This way, only the Ultravisor is able to make such an Hcall. Cc: Bharata B Rao <bharata@linux.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Reviewed-by: Ram Pai <linuxram@us.ibnm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
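The check, sketched; the MSR bit position is assumed here for illustration:

    #include <stdbool.h>

    /* The Secure (S) bit of the PowerPC MSR (bit position assumed). */
    #define MSR_S (1UL << 22)

    /* Reject H_SVM_* unless the caller was in secure mode: only the
     * Ultravisor, or an SVM whose calls are routed through it, runs
     * with MSR_S set. */
    static bool h_svm_caller_allowed(unsigned long msr)
    {
            return (msr & MSR_S) != 0;
    }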
2020-03-24  KVM: PPC: Book3S HV: Skip kvmppc_uvmem_free if Ultravisor is not supported  (Fabiano Rosas)
kvmppc_uvmem_init checks for Ultravisor support and returns early if it is not present. Calling kvmppc_uvmem_free at module exit will cause an Oops:

$ modprobe -r kvm-hv
Oops: Kernel access of bad area, sig: 11 [#1]
<snip>
NIP: c000000000789e90 LR: c000000000789e8c CTR: c000000000401030
REGS: c000003fa7bab9a0 TRAP: 0300 Not tainted (5.6.0-rc6-00033-g6c90b86a745a-dirty)
MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 24002282 XER: 00000000
CFAR: c000000000dae880 DAR: 0000000000000008 DSISR: 40000000 IRQMASK: 1
GPR00: c000000000789e8c c000003fa7babc30 c0000000016fe500 0000000000000000
GPR04: 0000000000000000 0000000000000006 0000000000000000 c000003faf205c00
GPR08: 0000000000000000 0000000000000001 000000008000002d c00800000ddde140
GPR12: c000000000401030 c000003ffffd9080 0000000000000001 0000000000000000
GPR16: 0000000000000000 0000000000000000 000000013aad0074 000000013aaac978
GPR20: 000000013aad0070 0000000000000000 00007fffd1b37158 0000000000000000
GPR24: 000000014fef0d58 0000000000000000 000000014fef0cf0 0000000000000001
GPR28: 0000000000000000 0000000000000000 c0000000018b2a60 0000000000000000
NIP [c000000000789e90] percpu_ref_kill_and_confirm+0x40/0x170
LR [c000000000789e8c] percpu_ref_kill_and_confirm+0x3c/0x170
Call Trace:
[c000003fa7babc30] [c000003faf2064d4] 0xc000003faf2064d4 (unreliable)
[c000003fa7babcb0] [c000000000400e8c] dev_pagemap_kill+0x6c/0x80
[c000003fa7babcd0] [c000000000401064] memunmap_pages+0x34/0x2f0
[c000003fa7babd50] [c00800000dddd548] kvmppc_uvmem_free+0x30/0x80 [kvm_hv]
[c000003fa7babd80] [c00800000ddcef18] kvmppc_book3s_exit_hv+0x20/0x78 [kvm_hv]
[c000003fa7babda0] [c0000000002084d0] sys_delete_module+0x1d0/0x2c0
[c000003fa7babe20] [c00000000000b9d0] system_call+0x5c/0x68
Instruction dump:
3fc2001b fb81ffe0 fba1ffe8 fbe1fff8 7c7f1b78 7c9c2378 3bde4560 7fc3f378
f8010010 f821ff81 486249a1 60000000 <e93f0008> 7c7d1b78 712a0002 40820084
---[ end trace 5774ef4dc2c98279 ]---

So this patch checks if kvmppc_uvmem_init actually allocated anything before running kvmppc_uvmem_free. Fixes: ca9f4942670c ("KVM: PPC: Book3S HV: Support for running secure guests") Cc: stable@vger.kernel.org # v5.5+ Reported-by: Greg Kurz <groug@kaod.org> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Tested-by: Greg Kurz <groug@kaod.org> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
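The guard, sketched with a stand-in for the allocation (the bitmap name is assumed from the backtrace context):

    /* Stand-in for the allocation that kvmppc_uvmem_init may skip;
     * stays NULL when Ultravisor support is absent. */
    static unsigned long *kvmppc_uvmem_bitmap;

    static void kvmppc_uvmem_free(void)
    {
            if (!kvmppc_uvmem_bitmap)   /* init bailed out early:  */
                    return;             /* nothing to unmap or free */

            /* ... memunmap_pages(), release region, free bitmap ... */
    }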
2020-03-23  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto fix from Herbert Xu: "This fixes a correctness bug in the ARM64 version of ChaCha for lib/crypto used by WireGuard" * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: arm64/chacha - correctly walk through blocks
2020-03-23  KVM: VMX: Gracefully handle faults on VMXON  (Sean Christopherson)
Gracefully handle faults on VMXON, e.g. #GP due to VMX being disabled by BIOS, instead of letting the fault crash the system. Now that KVM uses cpufeatures to query support instead of reading MSR_IA32_FEAT_CTL directly, it's possible for a bug in a different subsystem to cause KVM to incorrectly attempt VMXON[*]. Crashing the system is especially annoying if the system is configured such that hardware_enable() will be triggered during boot. Opportunistically rename @addr to @vmxon_pointer and use a named param to reference it in the inline assembly. Print 0xdeadbeef in the ultra-"rare" case that reading MSR_IA32_FEAT_CTL also faults. [*] https://lkml.kernel.org/r/20200226231615.13664-1-sean.j.christopherson@intel.com Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200321193751.24985-4-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
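A sketch of such a fault-tolerant VMXON, assuming the kernel's cr4_set_bits()/cr4_clear_bits(), asm_volatile_goto() and _ASM_EXTABLE helpers are available (not the exact patch):

    /* A faulting VMXON jumps to the `fault` label via the exception
     * table instead of crashing the machine. */
    static int kvm_cpu_vmxon(u64 vmxon_pointer)
    {
            cr4_set_bits(X86_CR4_VMXE);

            asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
                              _ASM_EXTABLE(1b, %l[fault])
                              : : [vmxon_pointer] "m"(vmxon_pointer)
                              : : fault);
            return 0;

    fault:
            cr4_clear_bits(X86_CR4_VMXE);
            return -EFAULT;
    }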
2020-03-23  KVM: VMX: Fold loaded_vmcs_init() into alloc_loaded_vmcs()  (Sean Christopherson)
Subsume loaded_vmcs_init() into alloc_loaded_vmcs(), its only remaining caller, and drop the VMCLEAR on the shadow VMCS, which is guaranteed to be NULL. loaded_vmcs_init() was previously used by loaded_vmcs_clear(), but loaded_vmcs_clear() also subsumed loaded_vmcs_init() to properly handle smp_wmb() with respect to VMCLEAR. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200321193751.24985-3-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23  KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec support  (Sean Christopherson)
VMCLEAR all in-use VMCSes during a crash, even if kdump's NMI shootdown interrupted a KVM update of the percpu in-use VMCS list. Because NMIs are not blocked by disabling IRQs, it's possible that crash_vmclear_local_loaded_vmcss() could be called while the percpu list of VMCSes is being modified, e.g. in the middle of list_add() in vmx_vcpu_load_vmcs(). This potential corner case was called out in the original commit[*], but the analysis of its impact was wrong. Skipping the VMCLEARs is wrong because it all but guarantees that a loaded, and therefore cached, VMCS will live across kexec and corrupt memory in the new kernel. Corruption will occur because the CPU's VMCS cache is non-coherent, i.e. not snooped, and so the writeback of VMCS memory on its eviction will overwrite random memory in the new kernel. The VMCS will live because the NMI shootdown also disables VMX, i.e. the in-progress VMCLEAR will #UD, and existing Intel CPUs do not flush the VMCS cache on VMXOFF. Furthermore, interrupting list_add() and list_del() is safe due to crash_vmclear_local_loaded_vmcss() using forward iteration. list_add() ensures the new entry is not visible to forward iteration unless the entire add completes, via WRITE_ONCE(prev->next, new). A bad "prev" pointer could be observed if the NMI shootdown interrupted list_del() or list_add(), but list_for_each_entry() does not consume ->prev. In addition to removing the temporary disabling of VMCLEAR, open code loaded_vmcs_init() in __loaded_vmcs_clear() and reorder VMCLEAR so that the VMCS is deleted from the list only after it's been VMCLEAR'd. Deleting the VMCS before VMCLEAR would allow a race where the NMI shootdown could arrive between list_del() and vmcs_clear() and thus neither flow would execute a successful VMCLEAR. Alternatively, more code could be moved into loaded_vmcs_init(), but that gets rather silly as the only other user, alloc_loaded_vmcs(), doesn't need the smp_wmb() and would need to work around the list_del(). Update the smp_*() comments related to the list manipulation, and opportunistically reword them to improve clarity. [*] https://patchwork.kernel.org/patch/1675731/#3720461 Fixes: 8f536b7697a0 ("KVM: VMX: provide the vmclear function and a bitmap to support VMCLEAR in kdump") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200321193751.24985-2-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
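The reordered clear path, sketched against the kernel types described above (not the exact patch): VMCLEAR before list_del(), so a crash-time forward iterator that still sees this entry finds an already-flushed VMCS.

    static void __loaded_vmcs_clear(void *arg)
    {
            struct loaded_vmcs *loaded_vmcs = arg;

            vmcs_clear(loaded_vmcs->vmcs);  /* flush CPU's VMCS cache */
            if (loaded_vmcs->shadow_vmcs)
                    vmcs_clear(loaded_vmcs->shadow_vmcs);

            /* unlink only after the clear is done */
            list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);
            loaded_vmcs->cpu = -1;
            loaded_vmcs->launched = 0;
    }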
2020-03-23  KVM: x86: Expose fast short REP MOV for supported cpuid  (Zhenyu Wang)
For CPUs supporting fast short REP MOV (X86_FEATURE_FSRM), e.g. Icelake and Tigerlake, expose it in the KVM supported CPUID as well. Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Message-Id: <20200323092236.3703-1-zhenyuw@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23  KVM: VMX: don't allow memory operands for inline asm that modifies SP  (Nick Desaulniers)
THUNK_TARGET defines [thunk_target] as having "rm" input constraints when CONFIG_RETPOLINE is not set, which isn't constrained enough for this specific case. For inline assembly that modifies the stack pointer before using this input, the underspecification of constraints is dangerous, and results in an indirect call to a previously pushed flags register. In this case `entry`'s stack slot is good enough to satisfy the "m" constraint in "rm", but the inline assembly in handle_external_interrupt_irqoff() modifies the stack pointer via push+pushf before using this input, which in this case results in calling what was the previous state of the flags register, rather than `entry`. Be more specific in the constraints by requiring `entry` be in a register, and not a memory operand. Reported-by: Dmitry Vyukov <dvyukov@google.com> Reported-by: syzbot+3f29ca2efb056a761e38@syzkaller.appspotmail.com Debugged-by: Alexander Potapenko <glider@google.com> Debugged-by: Paolo Bonzini <pbonzini@redhat.com> Debugged-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Message-Id: <20200323191243.30002-1-ndesaulniers@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
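A toy x86-64 illustration of why "r" is required here (not the kernel asm; SysV caller-saved registers are clobbered explicitly):

    /* With "rm" the compiler may pass `entry` as an %rsp-relative
     * memory operand; if the asm body pushes before the call, that
     * operand then addresses the wrong slot. "r" forces a register,
     * which is immune to stack-pointer changes inside the asm. */
    static void call_handler(void (*entry)(void))
    {
            asm volatile("call *%0"
                         : : "r"(entry)
                         : "rax", "rcx", "rdx", "rsi", "rdi",
                           "r8", "r9", "r10", "r11", "memory", "cc");
    }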