path: root/arch/x86
Age         Commit message                                              Author
2021-03-24  Revert "xen: fix p2m size in dom0 for disabled memory hotplug case"  (Roger Pau Monne)
This partially reverts commit 882213990d32 ("xen: fix p2m size in dom0 for disabled memory hotplug case") There's no need to special case XEN_UNPOPULATED_ALLOC anymore in order to correctly size the p2m. The generic memory hotplug option has already been tied together with the Xen hotplug limit, so enabling memory hotplug should already trigger a properly sized p2m on Xen PV. Note that XEN_UNPOPULATED_ALLOC depends on ZONE_DEVICE which pulls in MEMORY_HOTPLUG. Leave the check added to __set_phys_to_machine and the adjusted comment about EXTRA_MEM_RATIO. Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lore.kernel.org/r/20210324122424.58685-3-roger.pau@citrix.com [boris: fixed formatting issues] Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2021-03-24  xen/x86: make XEN_BALLOON_MEMORY_HOTPLUG_LIMIT depend on MEMORY_HOTPLUG  (Roger Pau Monne)
The Xen memory hotplug limit should depend on the memory hotplug generic option, rather than the Xen balloon configuration. It's possible to have a kernel with generic memory hotplug enabled, but without Xen balloon enabled, at which point memory hotplug won't work correctly due to the size limitation of the p2m. Rename the option to XEN_MEMORY_HOTPLUG_LIMIT since it's no longer tied to ballooning. Fixes: 9e2369c06c8a18 ("xen: add helpers to allocate unpopulated memory") Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lore.kernel.org/r/20210324122424.58685-2-roger.pau@citrix.com Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2021-03-24  x86/Hyper-V: Support for free page reporting  (Sunil Muthuswamy)
Linux now has support for free page reporting (36e66c554b5c) in virtualized environments. On Hyper-V, when virtually backed VMs are configured, Hyper-V will advertise the cold memory discard capability when supported. This patch adds the support to hook into the free page reporting infrastructure and leverage the Hyper-V cold memory discard hint hypercall to report/free these pages back to the host. Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com> Tested-by: Matheus Castello <matheus@castello.eng.br> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Tested-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/SN4PR2101MB0880121FA4E2FEC67F35C1DCC0649@SN4PR2101MB0880.namprd21.prod.outlook.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-24  x86/hyperv: Fix unused variable 'hi' warning in hv_apic_read  (Xu Yihang)
Fixes the following W=1 kernel build warning(s):

  arch/x86/hyperv/hv_apic.c:58:15: warning: variable ‘hi’ set but not used [-Wunused-but-set-variable]

Compiled with CONFIG_HYPERV enabled:

  make allmodconfig ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu-
  make W=1 arch/x86/hyperv/hv_apic.o ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu-

HV_X64_MSR_EOI occupies bits 31:0 and HV_X64_MSR_TPR occupies bits 7:0, which means the higher 32 bits are not really used. Cast the variable hi to void to silence this warning. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Xu Yihang <xuyihang@huawei.com> Link: https://lore.kernel.org/r/20210323025013.191533-1-xuyihang@huawei.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
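For illustration, the usual shape of such a fix is sketched below. This is not the literal hunk from the patch; the surrounding code is simplified to a bare read helper and the MSR shown stands in for whatever hv_apic_read() actually accesses.

  /*
   * Read an APIC register emulated via a synthetic MSR; only the low
   * 32 bits carry the register value.
   */
  static u32 hv_apic_read_sketch(void)
  {
          u32 lo, hi;

          rdmsr(HV_X64_MSR_EOI, lo, hi);
          (void)hi;       /* high half unused; silences -Wunused-but-set-variable */
          return lo;
  }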
2021-03-24  x86/hyperv: Fix unused variable 'msr_val' warning in hv_qlock_wait  (Xu Yihang)
Fixes the following W=1 kernel build warning(s):

  arch/x86/hyperv/hv_spinlock.c:28:16: warning: variable ‘msr_val’ set but not used [-Wunused-but-set-variable]
    unsigned long msr_val;

As the Hypervisor Top-Level Functional Specification states in chapter 7.5 Virtual Processor Idle Sleep State, "A partition which possesses the AccessGuestIdleMsr privilege (refer to section 4.2.2) may trigger entry into the virtual processor idle sleep state through a read to the hypervisor-defined MSR HV_X64_MSR_GUEST_IDLE". That means only a read of the MSR is necessary; the returned value msr_val is not used. Cast it to void to silence this warning. Reference: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Xu Yihang <xuyihang@huawei.com> Link: https://lore.kernel.org/r/20210323024302.174434-1-xuyihang@huawei.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-24  x86/mce/inject: Add IPID for injection too  (Borislav Petkov)
Add an injection file in order to specify the IPID too when injecting an error. One use case example is using the machinery to decode MCEs collected from other machines. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210314201806.12798-1-bp@alien8.de
2021-03-23  x86/setup: Merge several reservations of start of memory  (Mike Rapoport)
Currently, the first several pages are reserved both to avoid leaking their contents on systems with L1TF and to avoid corrupting BIOS memory. Merge the two memory reservations. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210302100406.22059-3-rppt@kernel.org
2021-03-23  x86/setup: Consolidate early memory reservations  (Mike Rapoport)
The early reservations of memory areas used by the firmware, bootloader, kernel text and data are spread over setup_arch(). Moreover, some of them happen *after* memblock allocations, e.g. trim_platform_memory_ranges() and trim_low_memory_range() are called after reserve_real_mode(), which allocates memory.

There was no corruption of these memory regions because memblock always allocates memory either from the end of memory (in top-down mode) or above the kernel image (in bottom-up mode). However, the bottom-up mode is going to be updated to span the entire memory [1] to avoid limitations caused by KASLR.

Consolidate early memory reservations in a dedicated function to improve robustness against future changes. Having the early reservations in one place also makes it clearer what memory must be reserved before memblock allocations are allowed. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Borislav Petkov <bp@suse.de> Acked-by: David Hildenbrand <david@redhat.com> Link: [1] https://lore.kernel.org/lkml/20201217201214.3414100-2-guro@fb.com Link: https://lkml.kernel.org/r/20210302100406.22059-2-rppt@kernel.org
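As a sketch of the idea only (the helper name and the exact set of calls are illustrative; the reservations shown are the ones named in the changelog plus the kernel image itself):

  /* Called from setup_arch() before the first memblock allocation. */
  static void __init early_reserve_memory(void)
  {
          /* Kernel text and data. */
          memblock_reserve(__pa_symbol(_text),
                           (unsigned long)__end_of_kernel_reserve - (unsigned long)_text);

          /* Firmware/BIOS-owned areas, low-memory quirks, real-mode trampoline. */
          reserve_real_mode();
          trim_platform_memory_ranges();
          trim_low_memory_range();
  }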
2021-03-23  x86/build: Turn off -fcf-protection for realmode targets  (Arnd Bergmann)
The new Ubuntu GCC packages turn on -fcf-protection globally, which causes a build failure in the x86 realmode code:

  cc1: error: ‘-fcf-protection’ is not compatible with this target

Turn it off explicitly on compilers that understand this option. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210323124846.1584944-1-arnd@kernel.org
2021-03-23  x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()  (Isaku Yamahata)
The pfn variable contains the page frame number as returned by the pXX_pfn() functions, shifted to the right by PAGE_SHIFT to remove the page bits. After page protection computations are done to it, it gets shifted back to the physical address using page_level_shift(). That is wrong, of course, because that function determines the shift length based on the level of the page in the page table but in all the cases, it was shifted by PAGE_SHIFT before. Therefore, shift it back using PAGE_SHIFT to get the correct physical address. [ bp: Rewrite commit message. ] Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot") Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/81abbae1657053eccc535c16151f63cd049dcb97.1616098294.git.isaku.yamahata@intel.com
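The fix reduces to a single shift; a simplified sketch follows (variable and function names shortened, surrounding page-protection logic elided):

  static unsigned long pte_phys_addr(unsigned long pfn, int level)
  {
          /*
           * pfn came from the pXX_pfn() helpers, i.e. it was shifted
           * right by PAGE_SHIFT.  Shifting it back by
           * page_level_shift(level) over-shifts for 2M/1G mappings:
           *
           *      return pfn << page_level_shift(level);  // wrong
           */
          return pfn << PAGE_SHIFT;                        /* correct */
  }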
2021-03-23  x86/boot/compressed: Avoid gcc-11 -Wstringop-overread warning  (Arnd Bergmann)
GCC gets confused by the comparison of a pointer to an integer literal, with the assumption that this is an offset from a NULL pointer and that dereferencing it is invalid:

  In file included from arch/x86/boot/compressed/misc.c:18:
  In function ‘parse_elf’, inlined from ‘extract_kernel’ at arch/x86/boot/compressed/misc.c:442:2:
  arch/x86/boot/compressed/../string.h:15:23: error: ‘__builtin_memcpy’ reading 64 bytes from a region of size 0 [-Werror=stringop-overread]
     15 | #define memcpy(d,s,l) __builtin_memcpy(d,s,l)
        |                       ^~~~~~~~~~~~~~~~~~~~~~~
  arch/x86/boot/compressed/misc.c:283:9: note: in expansion of macro ‘memcpy’
    283 |         memcpy(&ehdr, output, sizeof(ehdr));
        |         ^~~~~~

I could not find any good workaround for this, but as this is only a warning for a failure during early boot, removing the line entirely works around the warning. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Sebor <msebor@gmail.com> Link: https://lore.kernel.org/r/20210322160253.4032422-2-arnd@kernel.org
2021-03-23  x86/boot/tboot: Avoid Wstringop-overread-warning  (Arnd Bergmann)
gcc-11 warns about using string operations on pointers that are defined at compile time as offsets from a NULL pointer. Unfortunately that also happens on the result of fix_to_virt(), which is a compile-time constant for a constant input:

  arch/x86/kernel/tboot.c: In function 'tboot_probe':
  arch/x86/kernel/tboot.c:70:13: error: '__builtin_memcmp_eq' specified bound 16 exceeds source size 0 [-Werror=stringop-overread]
     70 |         if (memcmp(&tboot_uuid, &tboot->uuid, sizeof(tboot->uuid))) {
        |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I hope this can get addressed in gcc-11 before the release. As a workaround, split up the tboot_probe() function in two halves to separate the pointer generation from the usage. This is a bit ugly, and hopefully gcc understands that the code is actually correct before it learns to peek into the noinline function. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Sebor <msebor@gmail.com> Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99578 Link: https://lore.kernel.org/r/20210322160253.4032422-3-arnd@kernel.org
2021-03-23  x86/fpu/math-emu: Fix function cast warning  (Arnd Bergmann)
Building with 'make W=1', gcc points out that casting between incompatible function types can be dangerous:

  arch/x86/math-emu/fpu_trig.c:1638:60: error: cast between incompatible function types from ‘int (*)(FPU_REG *, u_char)’ {aka ‘int (*)(struct fpu__reg *, unsigned char)’} to ‘void (*)(FPU_REG *, u_char)’ {aka ‘void (*)(struct fpu__reg *, unsigned char)’} [-Werror=cast-function-type]
   1638 |   fprem, fyl2xp1, fsqrt_, fsincos, frndint_, fscale, (FUNC_ST0) fsin, fcos
        |                                                                 ^

This one seems harmless, but it is easy enough to work around it by adding an intermediate function that adjusts the return type. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210322214824.974323-1-arnd@kernel.org
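The workaround pattern looks roughly like this (function names are made up for the sketch; the real table entries are the FPU transcendental helpers):

  /* Existing helper returns int, but the dispatch table wants void (*)(...). */
  static int fsin_helper(FPU_REG *st0_ptr, u_char st0_tag);

  /* Thin wrapper with exactly the type the table expects, so no cast is needed. */
  static void fsin_wrapper(FPU_REG *st0_ptr, u_char st0_tag)
  {
          fsin_helper(st0_ptr, st0_tag);  /* return value intentionally ignored */
  }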
2021-03-22  x86/microcode: Check for offline CPUs before requesting new microcode  (Otavio Pontes)
Currently, the late microcode loading mechanism checks whether any CPUs are offlined, and, in such a case, aborts the load attempt. However, this must be done before the kernel caches new microcode from the filesystem. Otherwise, when offlined CPUs are onlined later, those cores are going to be updated through the CPU hotplug notifier callback with the new microcode, while CPUs previously online will continue to run with the older microcode.

For example, turn off one core (2 threads):

  echo 0 > /sys/devices/system/cpu/cpu3/online
  echo 0 > /sys/devices/system/cpu/cpu1/online

Installing the ucode fails because a primary SMT thread is offline:

  cp intel-ucode/06-8e-09 /lib/firmware/intel-ucode/
  echo 1 > /sys/devices/system/cpu/microcode/reload
  bash: echo: write error: Invalid argument

Turn the core back on:

  echo 1 > /sys/devices/system/cpu/cpu3/online
  echo 1 > /sys/devices/system/cpu/cpu1/online

  cat /proc/cpuinfo | grep microcode
  microcode : 0x30
  microcode : 0xde
  microcode : 0x30
  microcode : 0xde

The rationale for why the update is aborted when at least one primary thread is offline is that even if that thread is soft-offlined and idle, it will still have to participate in broadcasted MCE synchronization or enter SMM, and in both cases it will execute instructions, so it had better have the same microcode revision as the other cores. [ bp: Heavily edit and extend commit message with the reasoning behind all this. ] Fixes: 30ec26da9967 ("x86/microcode: Do not upload microcode if CPUs are offline") Signed-off-by: Otavio Pontes <otavio.pontes@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Acked-by: Ashok Raj <ashok.raj@intel.com> Link: https://lkml.kernel.org/r/20210319165515.9240-2-otavio.pontes@intel.com
2021-03-22  x86/msr: Fix wr/rdmsr_safe_regs_on_cpu() prototypes  (Arnd Bergmann)
gcc-11 warns about mismatched prototypes here:

  arch/x86/lib/msr-smp.c:255:51: error: argument 2 of type ‘u32 *’ {aka ‘unsigned int *’} declared as a pointer [-Werror=array-parameter=]
    255 | int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 *regs)
        |                                              ~~~~~^~~~
  arch/x86/include/asm/msr.h:347:50: note: previously declared as an array ‘u32[8]’ {aka ‘unsigned int[8]’}

GCC is right here - fix up the types. [ mingo: Twiddled the changelog. ] Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210322164541.912261-1-arnd@kernel.org
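One way to reconcile the two, sketched below; the declaration follows what the warning says msr.h contains, and whether the patch adjusts the header or the definition is an assumption here, not something the changelog spells out:

  /* asm/msr.h declares the parameter as an eight-element array: */
  int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);

  /* ...so the definition can use the same form instead of u32 *regs: */
  int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
  {
          /* body unchanged; stubbed out for the sketch */
          return 0;
  }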
2021-03-21  x86: Fix various typos in comments, take #2  (Ingo Molnar)
Fix another ~42 single-word typos in arch/x86/ code comments, missed a few in the first pass, in particular in .S files. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: linux-kernel@vger.kernel.org
2021-03-21  x86: Remove unusual Unicode characters from comments  (Ingo Molnar)
We've accumulated a few unusual Unicode characters in arch/x86/ over the years, substitute them with their proper ASCII equivalents. A few of them were a whitespace equivalent: ' ' - the use was harmless. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org
2021-03-21  Merge branch 'linus' into x86/cleanups, to resolve conflict  (Ingo Molnar)
Conflicts: arch/x86/kernel/kprobes/ftrace.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-21  Merge tag 'perf-urgent-2021-03-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull perf fixes from Ingo Molnar: "Boundary condition fixes for bugs unearthed by the perf fuzzer"

* tag 'perf-urgent-2021-03-21' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/intel: Fix unchecked MSR access error caused by VLBR_EVENT
  perf/x86/intel: Fix a crash caused by zero PEBS status
2021-03-21  Merge tag 'x86_urgent_for_v5.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fixes from Borislav Petkov: "The freshest pile of shiny x86 fixes for 5.12:

  - Add the arch-specific mapping between physical and logical CPUs to fix devicetree-node lookups
  - Restore the IRQ2 ignore logic
  - Fix get_nr_restart_syscall() to return the correct restart syscall number. Split in a 4-patches set to avoid kABI breakage when backporting to dead kernels"

* tag 'x86_urgent_for_v5.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/apic/of: Fix CPU devicetree-node lookups
  x86/ioapic: Ignore IRQ2 again
  x86: Introduce restart_block->arch_data to remove TS_COMPAT_RESTART
  x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall()
  x86: Move TS_COMPAT back to asm/thread_info.h
  kernel, fs: Introduce and use set_restart_fn() and arch_set_restart_data()
2021-03-20  Merge tag 'riscv-for-linus-5.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux  (Linus Torvalds)
Pull RISC-V fixes from Palmer Dabbelt: "A handful of fixes for 5.12:

  - fix the SBI remote fence numbers for hypervisor fences, which had been transcribed in the wrong order in Linux. These fences are only used with the KVM patches applied.
  - fix a whole host of build warnings, these should have no functional change.
  - fix init_resources() to prevent an off-by-one error from causing an out-of-bounds array reference. This was manifesting during boot on vexriscv.
  - ensure the KASAN mappings are visible before proceeding to use them"

* tag 'riscv-for-linus-5.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  riscv: Correct SPARSEMEM configuration
  RISC-V: kasan: Declare kasan_shallow_populate() static
  riscv: Ensure page table writes are flushed when initializing KASAN vmalloc
  RISC-V: Fix out-of-bounds accesses in init_resources()
  riscv: Fix compilation error with Canaan SoC
  ftrace: Fix spelling mistake "disabed" -> "disabled"
  riscv: fix bugon.cocci warnings
  riscv: process: Fix no prototype for arch_dup_task_struct
  riscv: ftrace: Use ftrace_get_regs helper
  riscv: process: Fix no prototype for show_regs
  riscv: syscall_table: Reduce W=1 compilation warnings noise
  riscv: time: Fix no prototype for time_init
  riscv: ptrace: Fix no prototype warnings
  riscv: sbi: Fix comment of __sbi_set_timer_v01
  riscv: irq: Fix no prototype warning
  riscv: traps: Fix no prototype warnings
  RISC-V: correct enum sbi_ext_rfence_fid
2021-03-19  bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG  (Stanislav Fomichev)
__bpf_arch_text_poke() does the rewrite only for an atomic nop5; emit_nops(xxx, 5) emits a non-atomic one, which breaks fentry/fexit with k8 atomics:

  P6_NOP5 == P6_NOP5_ATOMIC (0f1f440000 == 0f1f440000)
  K8_NOP5 != K8_NOP5_ATOMIC (6666906690 != 6666666690)

Can be reproduced by doing "ideal_nops = k8_nops" in arch_init_ideal_nops() and running the fexit_bpf2bpf selftest. Fixes: e21aa341785c ("bpf: Fix fexit trampoline.") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210320000001.915366-1-sdf@google.com
2021-03-19  x86/apic/of: Fix CPU devicetree-node lookups  (Johan Hovold)
Architectures that describe the CPU topology in devicetree and do not have an identity mapping between physical and logical CPU ids must override the default implementation of arch_match_cpu_phys_id(). Failing to do so breaks CPU devicetree-node lookups using of_get_cpu_node() and of_cpu_device_node_get() which several drivers rely on. It also causes the CPU struct devices exported through sysfs to point to the wrong devicetree nodes. On x86, CPUs are described in devicetree using their APIC ids and those do not generally coincide with the logical ids, even if CPU0 typically uses APIC id 0. Add the missing implementation of arch_match_cpu_phys_id() so that CPU-node lookups work also with SMP. Apart from fixing the broken sysfs devicetree-node links this likely does not affect current users of mainline kernels on x86. Fixes: 4e07db9c8db8 ("x86/devicetree: Use CPU description from Device Tree") Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210312092033.26317-1-johan@kernel.org
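On x86 the override can be as small as comparing the devicetree physical id against the CPU's APIC id; a sketch of that shape (not necessarily the exact code that was merged):

  #include <linux/cpu.h>
  #include <asm/smp.h>

  bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
  {
          /* Devicetree identifies x86 CPUs by APIC id, not logical id. */
          return phys_id == (u64)cpu_physical_id(cpu);
  }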
2021-03-19  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "Fixes for kvm on x86:

  - new selftests
  - fixes for migration with HyperV re-enlightenment enabled
  - fix RCU/SRCU usage
  - fixes for local_irq_restore misuse false positive"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  documentation/kvm: additional explanations on KVM_SET_BOOT_CPU_ID
  x86/kvm: Fix broken irq restoration in kvm_wait
  KVM: X86: Fix missing local pCPU when executing wbinvd on all dirty pCPUs
  KVM: x86: Protect userspace MSR filter with SRCU, and set atomically-ish
  selftests: kvm: add set_boot_cpu_id test
  selftests: kvm: add _vm_ioctl
  selftests: kvm: add get_msr_index_features
  selftests: kvm: Add basic Hyper-V clocksources tests
  KVM: x86: hyper-v: Don't touch TSC page values when guest opted for re-enlightenment
  KVM: x86: hyper-v: Track Hyper-V TSC page status
  KVM: x86: hyper-v: Prevent using not-yet-updated TSC page by secondary CPUs
  KVM: x86: hyper-v: Limit guest to writing zero to HV_X64_MSR_TSC_EMULATION_STATUS
  KVM: x86/mmu: Store the address space ID in the TDP iterator
  KVM: x86/mmu: Factor out tdp_iter_return_to_root
  KVM: x86/mmu: Fix RCU usage when atomically zapping SPTEs
  KVM: x86/mmu: Fix RCU usage in handle_removed_tdp_mmu_page
2021-03-19  x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()  (Jarkko Sakkinen)
Background
==========

SGX enclave memory is enumerated by the processor in contiguous physical ranges called Enclave Page Cache (EPC) sections. Currently, there is a free list per section, but allocations simply target the lowest-numbered sections. This is functional, but has no NUMA awareness. Fortunately, EPC sections are covered by entries in the ACPI SRAT table. These entries allow each EPC section to be associated with a NUMA node, just like normal RAM.

Solution
========

Implement a NUMA-aware enclave page allocator. Mirror the buddy allocator and maintain a list of enclave pages for each NUMA node. Attempt to allocate enclave memory first from local nodes, then fall back to other nodes. Note that the fallback is not as sophisticated as the buddy allocator and is itself not aware of NUMA distances. When a node's free list is empty, it searches for the next-highest node with enclave pages (and will wrap if necessary). This could be improved in the future.

Other
=====

NUMA_KEEP_MEMINFO dependency is required for phys_to_target_node(). [ Kai Huang: Do not return NULL from __sgx_alloc_epc_page() because callers do not expect that and that leads to a NULL ptr deref. ] [ dhansen: Fix an uninitialized 'nid' variable in __sgx_alloc_epc_page() as Reported-by: kernel test robot <lkp@intel.com> to avoid any potential allocations from the wrong NUMA node or even premature allocation failures. ] Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Kai Huang <kai.huang@intel.com> Signed-off-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/lkml/158188326978.894464.217282995221175417.stgit@dwillia2-desk3.amr.corp.intel.com/ Link: https://lkml.kernel.org/r/20210319040602.178558-1-kai.huang@intel.com Link: https://lkml.kernel.org/r/20210318214933.29341-1-dave.hansen@intel.com Link: https://lkml.kernel.org/r/20210317235332.362001-2-jarkko.sakkinen@intel.com
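A very rough sketch of the per-node free-list idea follows; structure and field names are invented for illustration and do not claim to match the patch:

  struct sgx_numa_node_demo {
          struct list_head free_page_list;        /* free EPC pages on this node */
          spinlock_t lock;
  };

  static struct sgx_numa_node_demo *sgx_numa_nodes;       /* one entry per NUMA node */

  static struct sgx_epc_page *sgx_alloc_from_node(int nid)
  {
          struct sgx_numa_node_demo *node = &sgx_numa_nodes[nid];
          struct sgx_epc_page *page;

          spin_lock(&node->lock);
          page = list_first_entry_or_null(&node->free_page_list,
                                          struct sgx_epc_page, list);
          if (page)
                  list_del_init(&page->list);
          spin_unlock(&node->lock);

          return page;
  }

  /* Try the local node first, then the others, wrapping around if needed. */
  struct sgx_epc_page *__sgx_alloc_epc_page(void)
  {
          int start = numa_node_id();
          int nid = start;
          struct sgx_epc_page *page;

          do {
                  page = sgx_alloc_from_node(nid);
                  if (page)
                          return page;
                  nid = (nid + 1) % num_possible_nodes();
          } while (nid != start);

          /* Callers do not expect NULL (see the note above). */
          return ERR_PTR(-ENOMEM);
  }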
2021-03-19  x86/sev-es: Optimize __sev_es_ist_enter() for better readability  (Joerg Roedel)
Reorganize the code and improve the comments to make the function more readable and easier to understand. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210303141716.29223-4-joro@8bytes.org
2021-03-19  x86/kaslr: Return boolean values from a function returning bool  (Jiapeng Chong)
Fix the following coccicheck warnings:

  ./arch/x86/boot/compressed/kaslr.c:642:10-11: WARNING: return of 0/1 in function 'process_mem_region' with return type bool.

Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/1615283963-67277-1-git-send-email-jiapeng.chong@linux.alibaba.com
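The class of fix is mechanical; a generic illustration with an invented function name:

  static bool region_is_usable(unsigned long size)
  {
          if (!size)
                  return false;   /* was: return 0; */

          return true;            /* was: return 1; */
  }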
2021-03-19  x86/ioapic: Ignore IRQ2 again  (Thomas Gleixner)
Vitaly ran into an issue with hotplugging CPU0 on an Amazon instance where the matrix allocator claimed to be out of vectors. He analyzed it down to the point that IRQ2, the PIC cascade interrupt, which is supposed to never be routed to the IO/APIC, ended up having an interrupt vector assigned which got moved during unplug of CPU0.

The underlying issue is that IRQ2 for various reasons (see commit af174783b925 ("x86: I/O APIC: Never configure IRQ2") for details) is treated as a reserved system vector by the vector core code and is not accounted as a regular vector. The Amazon BIOS has a routing entry of pin2 to IRQ2 which causes the IO/APIC setup to claim that interrupt, which is granted by the vector domain because there is no sanity check. As a consequence the allocation counter of CPU0 underflows, which causes a subsequent unplug to fail with:

  [ ... ] CPU 0 has 4294967295 vectors, 589 available. Cannot disable CPU

There is another sanity check missing in the matrix allocator, but the underlying root cause is that the IO/APIC code lost the IRQ2 ignore logic during the conversion to irqdomains.

For almost 6 years nobody complained about this wreckage, which might indicate that this requirement could be lifted, but for any system which actually has a PIC, IRQ2 is unusable by design, so any routing entry has no effect and the interrupt cannot be connected to a device anyway. Due to that and due to history-biased paranoia reasons, restore the IRQ2 ignore logic and treat it as nonexistent despite a routing entry claiming otherwise. Fixes: d32932d02e18 ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces") Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210318192819.636943062@linutronix.de
2021-03-18  x86/sev-es: Replace open-coded hlt-loops with sev_es_terminate()  (Joerg Roedel)
There are a few places left in the SEV-ES C code where hlt loops and/or terminate requests are implemented. Replace them all with calls to sev_es_terminate(). Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-9-joro@8bytes.org
2021-03-18  x86/boot/compressed/64: Check SEV encryption in the 32-bit boot-path  (Joerg Roedel)
Check whether the hypervisor reported the correct C-bit when running as an SEV guest. Using a wrong C-bit position could be used to leak sensitive data from the guest to the hypervisor. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-8-joro@8bytes.org
2021-03-18  x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path  (Joerg Roedel)
The 32-bit #VC handler has no GHCB and can only handle CPUID exit codes. It is needed by the early boot code to handle #VC exceptions raised in verify_cpu() and to get the position of the C-bit. But the CPUID information comes from the hypervisor which is untrusted and might return results which trick the guest into the no-SEV boot path with no C-bit set in the page-tables. All data written to memory would then be unencrypted and could leak sensitive data to the hypervisor. Add sanity checks to the 32-bit boot #VC handler to make sure the hypervisor does not pretend that SEV is not enabled. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-7-joro@8bytes.org
2021-03-18  x86/boot/compressed/64: Add 32-bit boot #VC handler  (Joerg Roedel)
Add a #VC exception handler which is used when the kernel still executes in protected mode. This boot-path already uses CPUID, which will cause #VC exceptions in an SEV-ES guest. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-6-joro@8bytes.org
2021-03-18  x86/kvm: Fix broken irq restoration in kvm_wait  (Wanpeng Li)
After commit 997acaf6b4b59c ("lockdep: report broken irq restoration"), the guest produces the splat below during boot:

  raw_local_irq_restore() called with IRQs enabled
  WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x26/0x30
  Modules linked in: hid_generic usbhid hid
  CPU: 1 PID: 169 Comm: systemd-udevd Not tainted 5.11.0+ #25
  RIP: 0010:warn_bogus_irq_restore+0x26/0x30
  Call Trace:
   kvm_wait+0x76/0x90
   __pv_queued_spin_lock_slowpath+0x285/0x2e0
   do_raw_spin_lock+0xc9/0xd0
   _raw_spin_lock+0x59/0x70
   lockref_get_not_dead+0xf/0x50
   __legitimize_path+0x31/0x60
   legitimize_root+0x37/0x50
   try_to_unlazy_next+0x7f/0x1d0
   lookup_fast+0xb0/0x170
   path_openat+0x165/0x9b0
   do_filp_open+0x99/0x110
   do_sys_openat2+0x1f1/0x2e0
   do_sys_open+0x5c/0x80
   __x64_sys_open+0x21/0x30
   do_syscall_64+0x32/0x50
   entry_SYSCALL_64_after_hwframe+0x44/0xae

The new consistency checking expects local_irq_save() and local_irq_restore() to be paired and sanely nested, and therefore expects local_irq_restore() to be called with irqs disabled. The irqflags handling in kvm_wait(), which ends up doing:

  local_irq_save(flags);
  safe_halt();
  local_irq_restore(flags);

instead triggers it. This patch fixes it by using local_irq_disable()/enable() directly. Cc: Thomas Gleixner <tglx@linutronix.de> Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1615791328-2735-1-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
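A simplified sketch of the corrected shape of kvm_wait(); the real function also handles the NMI and already-disabled-IRQs cases, which are elided here:

  static void kvm_wait_sketch(u8 *ptr, u8 val)
  {
          /*
           * Disable interrupts, re-check the wait byte and halt only if it
           * still holds the expected value.  safe_halt() re-enables
           * interrupts while halted; the explicit enable afterwards keeps
           * the disable/enable calls paired without saving or restoring
           * flags, so the lockdep check stays quiet.
           */
          local_irq_disable();
          if (READ_ONCE(*ptr) == val)
                  safe_halt();
          local_irq_enable();
  }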
2021-03-18  KVM: X86: Fix missing local pCPU when executing wbinvd on all dirty pCPUs  (Wanpeng Li)
In order to deal with noncoherent DMA, we should execute wbinvd on all dirty pCPUs when guest wbinvd exits to maintain data consistency. smp_call_function_many() does not execute the provided function on the local core, therefore replace it by on_each_cpu_mask(). Reported-by: Nadav Amit <namit@vmware.com> Cc: Nadav Amit <namit@vmware.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1615517151-7465-1-git-send-email-wanpengli@tencent.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
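The difference in a nutshell (the helper names below are illustrative; KVM already has a wbinvd IPI callback of this shape):

  static void wbinvd_ipi(void *garbage)
  {
          wbinvd();       /* write back and invalidate caches on this CPU */
  }

  static void flush_dirty_pcpus(const struct cpumask *dirty_mask)
  {
          /*
           * smp_call_function_many() skips the calling CPU.
           * on_each_cpu_mask() also runs the function locally when the
           * local CPU is part of the mask, which is what is needed here.
           */
          on_each_cpu_mask(dirty_mask, wbinvd_ipi, NULL, 1);
  }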
2021-03-18  KVM: x86: Protect userspace MSR filter with SRCU, and set atomically-ish  (Sean Christopherson)
Fix a plethora of issues with MSR filtering by installing the resulting filter as an atomic bundle instead of updating the live filter one range at a time. The KVM_X86_SET_MSR_FILTER ioctl() isn't truly atomic, as the hardware MSR bitmaps won't be updated until the next VM-Enter, but the relevant software struct is atomically updated, which is what KVM really needs.

Similar to the approach used for modifying memslots, make arch.msr_filter a SRCU-protected pointer, do all the work configuring the new filter outside of kvm->lock, and then acquire kvm->lock only when the new filter has been vetted and created. That way vCPU readers either see the old filter or the new filter in their entirety, not some half-baked state.

Yuan Yao pointed out a use-after-free in kvm_msr_allowed() due to a TOCTOU bug, but that's just the tip of the iceberg...

  - Nothing is __rcu annotated, making it nigh impossible to audit the code for correctness.
  - kvm_add_msr_filter() has an unpaired smp_wmb(). Violation of kernel coding style aside, the lack of an smp_rmb() anywhere casts all code into doubt.
  - kvm_clear_msr_filter() has a double free TOCTOU bug, as it grabs count before taking the lock.
  - kvm_clear_msr_filter() also has a memory leak due to the same TOCTOU bug.

The entire approach of updating the live filter is also flawed. While installing a new filter is inherently racy if vCPUs are running, fixing the above issues also makes it trivial to ensure certain behavior is deterministic, e.g. KVM can provide deterministic behavior for MSRs with identical settings in the old and new filters. An atomic update of the filter also prevents KVM from getting into a half-baked state, e.g. if installing a filter fails, the existing approach would leave the filter in a half-baked state, having already committed whatever bits of the filter were already processed.

[*] https://lkml.kernel.org/r/20210312083157.25403-1-yaoyuan0329os@gmail.com Fixes: 1a155254ff93 ("KVM: x86: Introduce MSR filtering") Cc: stable@vger.kernel.org Cc: Alexander Graf <graf@amazon.com> Reported-by: Yuan Yao <yaoyuan0329os@gmail.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210316184436.2544875-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
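The publish-atomically pattern, heavily simplified; all type and field names below are stand-ins rather than KVM's actual ones:

  #include <linux/mutex.h>
  #include <linux/rcupdate.h>
  #include <linux/slab.h>
  #include <linux/srcu.h>

  struct demo_filter {
          u32 count;
          /* ... ranges ... */
  };

  struct demo_vm {
          struct mutex lock;
          struct srcu_struct srcu;
          struct demo_filter __rcu *filter;
  };

  /* 'new' is built and vetted with no locks held, then published in one go. */
  static void demo_install_filter(struct demo_vm *vm, struct demo_filter *new)
  {
          struct demo_filter *old;

          mutex_lock(&vm->lock);
          old = rcu_replace_pointer(vm->filter, new, lockdep_is_held(&vm->lock));
          mutex_unlock(&vm->lock);

          /* Readers dereference vm->filter under srcu_read_lock(&vm->srcu). */
          synchronize_srcu(&vm->srcu);
          kfree(old);
  }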
2021-03-18  x86/boot/compressed/64: Setup IDT in startup_32 boot path  (Joerg Roedel)
This boot path needs exception handling when it is used with SEV-ES. Setup an IDT and provide a helper function to write IDT entries for use in 32-bit protected mode. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-5-joro@8bytes.org
2021-03-18  x86/boot/compressed/64: Reload CS in startup_32  (Joerg Roedel)
Exception handling in the startup_32 boot path requires the CS selector to be correctly set up. Reload it from the current GDT. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-4-joro@8bytes.org
2021-03-18  x86/sev: Do not require Hypervisor CPUID bit for SEV guests  (Joerg Roedel)
A malicious hypervisor could disable the CPUID intercept for an SEV or SEV-ES guest and trick it into the no-SEV boot path, where it could potentially reveal secrets. This is not an issue for SEV-SNP guests, as the CPUID intercept can't be disabled for those.

Remove the Hypervisor CPUID bit check from the SEV detection code to protect against this kind of attack and add a Hypervisor-bit-equals-zero check to the SME detection path to prevent non-encrypted guests from trying to enable SME. This handles the following cases:

  1) SEV(-ES) guest where the CPUID intercept is disabled. The guest will still see leaf 0x8000001f and the SEV bit. It can retrieve the C-bit and boot normally.
  2) Non-encrypted guests with intercepted CPUID will check the SEV_STATUS MSR, find it 0, and will try to enable SME. This will fail when the guest finds MSR_K8_SYSCFG to be zero, as it is emulated by KVM. But we can't rely on that, as there might be other hypervisors which return this MSR with bit 23 set. The Hypervisor bit check will prevent the guest from trying to enable SME in this case.
  3) Non-encrypted guests on SEV-capable hosts with the CPUID intercept disabled (by a malicious hypervisor) will try to boot into the SME path. This will fail, but it is also not considered a problem because non-encrypted guests have no protection against the hypervisor anyway.

[ bp: s/non-SEV/non-encrypted/g ] Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lkml.kernel.org/r/20210312123824.306-3-joro@8bytes.org
2021-03-18  x86/boot/compressed/64: Cleanup exception handling before booting kernel  (Joerg Roedel)
Disable the exception handling before booting the kernel to make sure any exceptions that happen during early kernel boot are not directed to the pre-decompression code. Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312123824.306-2-joro@8bytes.org
2021-03-18  Merge tag 'v5.12-rc3' into x86/seves  (Borislav Petkov)
Pick up dependent SEV-ES urgent changes which went into -rc3 to base new work on top. Signed-off-by: Borislav Petkov <bp@suse.de>
2021-03-18  x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list  (Jarkko Sakkinen)
During normal runtime, the "ksgxd" daemon behaves like a version of kswapd just for SGX. But, before it starts acting like kswapd, its first job is to initialize enclave memory.

Currently, the SGX boot code places each enclave page on an epc_section->init_laundry_list. Once it starts up, the ksgxd code walks over that list and populates the actual SGX page allocator.

However, the per-section structures are going away to make way for the SGX NUMA allocator. There's also little need to have a per-section structure; the enclave pages are all treated identically, and they can be placed on the correct allocator list from metadata stored in the enclave page (struct sgx_epc_page) itself.

Modify sgx_sanitize_section() to take a single page list instead of taking a section and deriving the list from there. Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lkml.kernel.org/r/20210317235332.362001-1-jarkko.sakkinen@intel.com
2021-03-18  x86: Fix various typos in comments  (Ingo Molnar)
Fix ~144 single-word typos in arch/x86/ code comments. Doing this in a single commit should reduce the churn. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: linux-kernel@vger.kernel.org
2021-03-18  Merge tag 'v5.12-rc3' into x86/cleanups, to refresh the tree  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-18  KVM: x86: hyper-v: Don't touch TSC page values when guest opted for re-enlightenment  (Vitaly Kuznetsov)
When guest opts for re-enlightenment notifications upon migration, it is in its right to assume that TSC page values never change (as they're only supposed to change upon migration and the host has to keep things as they are before it receives confirmation from the guest). This is mostly true until the guest is migrated somewhere. KVM userspace (e.g. QEMU) will trigger masterclock update by writing to HV_X64_MSR_REFERENCE_TSC, by calling KVM_SET_CLOCK,... and as TSC value and kvmclock reading drift apart (even slightly), the update causes TSC page values to change.

The issue at hand is that when Hyper-V is migrated, it uses stale (cached) TSC page values to compute the difference between its own clocksource (provided by KVM) and its guests' TSC pages to program synthetic timers and in some cases, when TSC page is updated, this puts all stimer expirations in the past. This, in its turn, causes an interrupt storm and L2 guests not making much forward progress.

Note, KVM doesn't fully implement re-enlightenment notification. Basically, the support for reenlightenment MSRs is just a stub and userspace is only expected to expose the feature when TSC scaling on the expected destination hosts is available. With TSC scaling, no real re-enlightenment is needed as TSC frequency doesn't change. With TSC scaling becoming ubiquitous, it likely makes little sense to fully implement re-enlightenment in KVM.

Prevent TSC page from being updated after migration. In case it's not the guest who's initiating the change and when TSC page is already enabled, just keep it as it is: TSC value is supposed to be preserved across migration and TSC frequency can't change with re-enlightenment enabled. The guest is doomed anyway if any of this is not true. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210316143736.964151-5-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-18  KVM: x86: hyper-v: Track Hyper-V TSC page status  (Vitaly Kuznetsov)
Create an infrastructure for tracking Hyper-V TSC page status, i.e. if it was updated from guest/host side or if we've failed to set it up (because e.g. guest wrote some garbage to HV_X64_MSR_REFERENCE_TSC) and there's no need to retry. Also, in a hypothetical situation when we are in 'always catchup' mode for TSC we can now avoid contending 'hv->hv_lock' on every guest enter by setting the state to HV_TSC_PAGE_BROKEN after compute_tsc_page_parameters() returns false. Check for HV_TSC_PAGE_SET state instead of '!hv->tsc_ref.tsc_sequence' in get_time_ref_counter() to properly handle the situation when we failed to write the updated TSC page values to the guest. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210316143736.964151-4-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-18  bpf: Fix fexit trampoline.  (Alexei Starovoitov)
The fexit/fmod_ret programs can be attached to kernel functions that can sleep. The synchronize_rcu_tasks() will not wait for such tasks to complete. In such a case the trampoline image will be freed and when the task wakes up the return IP will point to freed memory, causing the crash.

Solve this by adding percpu_ref_get/put for the duration of trampoline and separate trampoline vs its image life times. The "half page" optimization has to be removed, since first_half->second_half->first_half transition cannot be guaranteed to complete in deterministic time. Every trampoline update becomes a new image. The image with fmod_ret or fexit progs will be freed via percpu_ref_kill and call_rcu_tasks. Together they will wait for the original function and trampoline asm to complete. The trampoline is patched from nop to jmp to skip fexit progs. They are freed independently from the trampoline. The image with fentry progs only will be freed via call_rcu_tasks_trace+call_rcu_tasks which will wait for both sleepable and non-sleepable progs to complete. Fixes: fec56f5890d9 ("bpf: Introduce BPF trampoline") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Paul E. McKenney <paulmck@kernel.org> # for RCU Link: https://lore.kernel.org/bpf/20210316210007.38949-1-alexei.starovoitov@gmail.com
2021-03-17  module: remove never implemented MODULE_SUPPORTED_DEVICE  (Leon Romanovsky)
MODULE_SUPPORTED_DEVICE was added in pre-git era and never was implemented. We can safely remove it, because the kernel has grown to have many more reliable mechanisms to determine if device is supported or not. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-03-17  KVM: x86: hyper-v: Prevent using not-yet-updated TSC page by secondary CPUs  (Vitaly Kuznetsov)
When KVM_REQ_MASTERCLOCK_UPDATE request is issued (e.g. after migration) we need to make sure no vCPU sees stale values in PV clock structures and thus all vCPUs are kicked with KVM_REQ_CLOCK_UPDATE. Hyper-V TSC page clocksource is global and kvm_guest_time_update() only updates it on vCPU0, but this is not entirely correct: nothing blocks some other vCPU from entering the guest before we finish the update on CPU0 and it can read stale values from the page. Invalidate TSC page in kvm_gen_update_masterclock() to switch all vCPUs to using MSR based clocksource (HV_X64_MSR_TIME_REF_COUNT). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210316143736.964151-3-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-17  KVM: x86: hyper-v: Limit guest to writing zero to HV_X64_MSR_TSC_EMULATION_STATUS  (Vitaly Kuznetsov)
HV_X64_MSR_TSC_EMULATION_STATUS indicates whether TSC accesses are emulated after migration (to accommodate for a different host TSC frequency when TSC scaling is not supported; we don't implement this in KVM). Guest can use the same MSR to stop TSC access emulation by writing zero. Writing anything else is forbidden. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210316143736.964151-2-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-16  ftrace: Fix spelling mistake "disabed" -> "disabled"  (Colin Ian King)
There is a spelling mistake in a comment, fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>