path: root/arch
2021-03-10  bpf, x86: Use kvmalloc_array instead of kmalloc_array in bpf_jit_comp  (Yonghong Song)
x86 bpf_jit_comp.c used kmalloc_array to store jited addresses for each bpf insn. With a large bpf program, we have seen the following allocation failures in our production server:

  page allocation failure: order:5, mode:0x40cc0(GFP_KERNEL|__GFP_COMP), nodemask=(null),cpuset=/,mems_allowed=0
  Call Trace:
   dump_stack+0x50/0x70
   warn_alloc.cold.120+0x72/0xd2
   ? __alloc_pages_direct_compact+0x157/0x160
   __alloc_pages_slowpath+0xcdb/0xd00
   ? get_page_from_freelist+0xe44/0x1600
   ? vunmap_page_range+0x1ba/0x340
   __alloc_pages_nodemask+0x2c9/0x320
   kmalloc_order+0x18/0x80
   kmalloc_order_trace+0x1d/0xa0
   bpf_int_jit_compile+0x1e2/0x484
   ? kmalloc_order_trace+0x1d/0xa0
   bpf_prog_select_runtime+0xc3/0x150
   bpf_prog_load+0x480/0x720
   ? __mod_memcg_lruvec_state+0x21/0x100
   __do_sys_bpf+0xc31/0x2040
   ? close_pdeo+0x86/0xe0
   do_syscall_64+0x42/0x110
   entry_SYSCALL_64_after_hwframe+0x44/0xa9
  RIP: 0033:0x7f2f300f7fa9
  Code: Bad RIP value.

Dumped assembly:

  ffffffff810b6d70 <bpf_int_jit_compile>:
  ; {
  ffffffff810b6d70: e8 eb a5 b4 00  callq 0xffffffff81c01360 <__fentry__>
  ffffffff810b6d75: 41 57           pushq %r15
  ...
  ffffffff810b6f39: e9 72 fe ff ff  jmp 0xffffffff810b6db0 <bpf_int_jit_compile+0x40>
  ; addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
  ffffffff810b6f3e: 8b 45 0c        movl 12(%rbp), %eax
  ; return __kmalloc(bytes, flags);
  ffffffff810b6f41: be c0 0c 00 00  movl $3264, %esi
  ; addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
  ffffffff810b6f46: 8d 78 01        leal 1(%rax), %edi
  ; if (unlikely(check_mul_overflow(n, size, &bytes)))
  ffffffff810b6f49: 48 c1 e7 02     shlq $2, %rdi
  ; return __kmalloc(bytes, flags);
  ffffffff810b6f4d: e8 8e 0c 1d 00  callq 0xffffffff81287be0 <__kmalloc>
  ; if (!addrs) {
  ffffffff810b6f52: 48 85 c0        testq %rax, %rax

Change kmalloc_array() to kvmalloc_array() to avoid potential allocation errors for big bpf programs.

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210309015647.3657852-1-yhs@fb.com
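For reference, the allocation pattern the patch moves to can be sketched as below (a minimal illustration rather than the actual bpf_jit_comp.c code; the helper names are made up for the example). kvmalloc_array() falls back to vmalloc() when a large physically contiguous chunk is unavailable, and memory obtained this way must be released with kvfree():

    #include <linux/mm.h>   /* kvmalloc_array(), kvfree() */

    /* Sketch: per-insn address table for a JIT, sized by program length. */
    static u32 *alloc_jit_addrs(u32 prog_len)
    {
            /* Falls back to vmalloc() instead of failing an order-5 kmalloc. */
            return kvmalloc_array(prog_len + 1, sizeof(u32), GFP_KERNEL);
    }

    static void free_jit_addrs(u32 *addrs)
    {
            kvfree(addrs);  /* correct for both kmalloc- and vmalloc-backed memory */
    }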
2021-03-10  powerpc/64s/exception: Clean up a missed SRR specifier  (Daniel Axtens)
Nick's patch cleaning up the SRR specifiers in exception-64s.S missed a single instance of EXC_HV_OR_STD. Clean that up. Caught by clang's integrated assembler. Fixes: 3f7fbd97d07d ("powerpc/64s/exception: Clean up SRR specifiers") Signed-off-by: Daniel Axtens <dja@axtens.net> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210225031006.1204774-2-dja@axtens.net
2021-03-10  powerpc: Fix inverted SET_FULL_REGS bitop  (Nicholas Piggin)
This bit operation was inverted and set the low bit rather than cleared it, breaking the ability to ptrace non-volatile GPRs after exec. Fix. Only affects 64e and 32-bit. Fixes: feb9df3462e6 ("powerpc/64s: Always has full regs, so remove remnant checks") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210308085530.3191843-1-npiggin@gmail.com
2021-03-10  powerpc/64s: Use symbolic macros for function entry encoding  (Michael Ellerman)
In ppc_function_entry() we look for a specific set of instructions by masking the instructions and comparing with a known value. Currently those known values are just literal hex values, and we recently discovered one of them was wrong. Instead construct the values using the existing constants we have for defining various fields of instructions. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Link: https://lore.kernel.org/r/20210309071544.515303-1-mpe@ellerman.id.au
2021-03-09  Merge tag 'mips-fixes_5.12_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux  (Linus Torvalds)
Pull MIPS fixes from Thomas Bogendoerfer:
 - fixes for boot breakage because of misaligned FDTs
 - fix for overwritten exception handlers
 - enable MIPS optimized crypto for all MIPS CPUs to improve wireguard performance

* tag 'mips-fixes_5.12_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: kernel: Reserve exception base early to prevent corruption
  MIPS: vmlinux.lds.S: align raw appended dtb to 8 bytes
  crypto: mips/poly1305 - enable for all MIPS processors
  MIPS: boot/compressed: Copy DTB to aligned address
2021-03-09  x86/alternative: Drop unused feature parameter from ALTINSTR_REPLACEMENT()  (Juergen Gross)
The macro ALTINSTR_REPLACEMENT() doesn't make use of the feature parameter, so drop it. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210309134813.23912-4-jgross@suse.com
2021-03-09  KVM: arm64: Ensure I-cache isolation between vcpus of the same VM  (Marc Zyngier)
It recently became apparent that the ARMv8 architecture has interesting rules regarding attributes being used when fetching instructions if the MMU is off at Stage-1. In this situation, the CPU is allowed to fetch from the PoC and allocate into the I-cache (unless the memory is mapped with the XN attribute at Stage-2). If we transpose this to vcpus sharing a single physical CPU, it is possible for a vcpu running with its MMU off to influence another vcpu running with its MMU on, as the latter is expected to fetch from the PoU (and self-patching code doesn't flush below that level). In order to solve this, reuse the vcpu-private TLB invalidation code to apply the same policy to the I-cache, nuking it every time the vcpu runs on a physical CPU that ran another vcpu of the same VM in the past. This involves renaming __kvm_tlb_flush_local_vmid() to __kvm_flush_cpu_context(), and inserting a local i-cache invalidation there. Cc: stable@vger.kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210303164505.68492-1-maz@kernel.org
2021-03-09  arm64: kasan: fix page_alloc tagging with DEBUG_VIRTUAL  (Andrey Konovalov)
When CONFIG_DEBUG_VIRTUAL is enabled, the default page_to_virt() macro implementation from include/linux/mm.h is used. That definition doesn't account for KASAN tags, which leads to no tags on page_alloc allocations. Provide an arm64-specific definition for page_to_virt() when CONFIG_DEBUG_VIRTUAL is enabled that takes care of KASAN tags. Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc") Cc: <stable@vger.kernel.org> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/4b55b35202706223d3118230701c6a59749d9b72.1615219501.git.andreyknvl@google.com Signed-off-by: Will Deacon <will@kernel.org>
2021-03-09  x86/sev-es: Use __copy_from_user_inatomic()  (Joerg Roedel)
The #VC handler must run in atomic context and cannot sleep. This is a problem when it tries to fetch instruction bytes from user-space via copy_from_user(). Introduce an insn_fetch_from_user_inatomic() helper which uses __copy_from_user_inatomic() to safely copy the instruction bytes to kernel memory in the #VC handler. Fixes: 5e3427a7bc432 ("x86/sev-es: Handle instruction fetches from user-space") Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # v5.10+ Link: https://lkml.kernel.org/r/20210303141716.29223-6-joro@8bytes.org
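The underlying pattern, sketched below under the assumption that the helper simply wraps __copy_from_user_inatomic() with page faults disabled (the body here is illustrative, not the actual patch):

    #include <linux/uaccess.h>
    #include <linux/errno.h>

    /* Sketch: copy instruction bytes in atomic context; never sleeps. */
    static int fetch_insn_bytes_inatomic(void *dst, const void __user *src, size_t len)
    {
            unsigned long not_copied;

            pagefault_disable();
            not_copied = __copy_from_user_inatomic(dst, src, len);
            pagefault_enable();

            return not_copied ? -EFAULT : 0;
    }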
2021-03-09  x86/sev-es: Correctly track IRQ states in runtime #VC handler  (Joerg Roedel)
Call irqentry_nmi_enter()/irqentry_nmi_exit() in the #VC handler to correctly track the IRQ state during its execution. Fixes: 0786138c78e79 ("x86/sev-es: Add a Runtime #VC Exception Handler") Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # v5.10+ Link: https://lkml.kernel.org/r/20210303141716.29223-5-joro@8bytes.org
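The bracketing pattern referred to here is roughly the following (a sketch only; the real code lives in the #VC idtentry handler):

    #include <linux/entry-common.h>

    static void vc_handler_sketch(struct pt_regs *regs)
    {
            irqentry_state_t irq_state = irqentry_nmi_enter(regs);

            /* ... decode and emulate the instruction that raised the #VC ... */

            irqentry_nmi_exit(regs, irq_state);
    }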
2021-03-09  x86/sev-es: Check regs->sp is trusted before adjusting #VC IST stack  (Joerg Roedel)
The code in the NMI handler to adjust the #VC handler IST stack is needed in case an NMI hits when the #VC handler is still using its IST stack. But the check for this condition also needs to look if the regs->sp value is trusted, meaning it was not set by user-space. Extend the check to not use regs->sp when the NMI interrupted user-space code or the SYSCALL gap. Fixes: 315562c9af3d5 ("x86/sev-es: Adjust #VC IST Stack on entering NMI handler") Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # 5.10+ Link: https://lkml.kernel.org/r/20210303141716.29223-3-joro@8bytes.org
2021-03-09  powerpc/64s: Fix instruction encoding for lis in ppc_function_entry()  (Naveen N. Rao)
'lis r2,N' is 'addis r2,0,N' and the instruction encoding in the macro LIS_R2 is incorrect (it currently maps to 'addis r0,r2,N'). Fix the same. Fixes: c71b7eff426f ("powerpc: Add ABIv2 support to ppc_function_entry") Cc: stable@vger.kernel.org # v3.16+ Reported-by: Jiri Olsa <jolsa@redhat.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210304020411.16796-1-naveen.n.rao@linux.vnet.ibm.com
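The encoding arithmetic can be checked with a few lines of standalone C (an illustration, not kernel code): addis is primary opcode 15, with rD shifted left by 21 bits and rA by 16, so 'lis r2,N' (rD=2, rA=0) and 'addis r0,r2,N' (rD=0, rA=2) differ exactly in which register field holds the 2:

    #include <stdint.h>
    #include <stdio.h>

    /* addis rD,rA,SIMM: primary opcode 15, rD at bit 21, rA at bit 16. */
    static uint32_t addis(unsigned int rd, unsigned int ra, uint16_t simm)
    {
            return (15u << 26) | (rd << 21) | (ra << 16) | simm;
    }

    int main(void)
    {
            printf("lis r2,0 (addis r2,0,0): 0x%08x\n", addis(2, 0, 0)); /* 0x3c400000 */
            printf("addis r0,r2,0 (wrong):   0x%08x\n", addis(0, 2, 0)); /* 0x3c020000 */
            return 0;
    }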
2021-03-09  MIPS: kernel: Reserve exception base early to prevent corruption  (Thomas Bogendoerfer)
BMIPS is one of the few platforms that do change the exception base. After commit 2dcb39645441 ("memblock: do not start bottom-up allocations with kernel_end") we started seeing BMIPS boards fail to boot with the built-in FDT being corrupted. Before the cited commit, early allocations would be in the [kernel_end, RAM_END] range, but after the commit they would be within [RAM_START + PAGE_SIZE, RAM_END]. The custom exception base handler that is installed by bmips_ebase_setup() for BMIPS5000 CPUs ends up trampling on the memory region allocated by unflatten_and_copy_device_tree(), thus corrupting the FDT used by the kernel. To fix this, we need to perform an early reservation of the custom exception space. Additionally, we reserve the first 4k (1k for R3k) for either normal exception vector space (legacy CPUs) or special vectors like cache exceptions. Huge thanks to Serge for analysing and proposing a solution to this issue. Fixes: 2dcb39645441 ("memblock: do not start bottom-up allocations with kernel_end") Reported-by: Kamal Dasu <kdasu.kdev@gmail.com> Debugged-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Tested-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Serge Semin <fancer.lancer@gmail.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
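Mechanically, such an early reservation is a memblock_reserve() call issued before unflatten_and_copy_device_tree() and other early allocations can hand the region out; a simplified sketch (the addresses below are placeholders, not the real BMIPS values):

    #include <linux/memblock.h>

    static void __init reserve_exception_space_sketch(void)
    {
            /* Keep early allocations away from the first 4k of vector space. */
            memblock_reserve(0, 0x1000);

            /* And away from a custom ebase region a platform may install later. */
            memblock_reserve(0x80000000, 0x1000);   /* placeholder address/size */
    }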
2021-03-09  KVM: arm64: Don't use cbz/adr with external symbols  (Sami Tolvanen)
allmodconfig + CONFIG_LTO_CLANG_THIN=y fails to build due to the following linker errors:

  ld.lld: error: irqbypass.c:(function __guest_enter: .text+0x21CC):
  relocation R_AARCH64_CONDBR19 out of range: 2031220 is not in
  [-1048576, 1048575]; references hyp_panic
  >>> defined in vmlinux.o

  ld.lld: error: irqbypass.c:(function __guest_enter: .text+0x21E0):
  relocation R_AARCH64_ADR_PREL_LO21 out of range: 2031200 is not in
  [-1048576, 1048575]; references hyp_panic
  >>> defined in vmlinux.o

This is because with LTO, the compiler ends up placing hyp_panic() more than 1MB away from __guest_enter(). Use an unconditional branch and adr_l instead to fix the issue.

Link: https://github.com/ClangBuiltLinux/linux/issues/1317
Reported-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210305202124.3768527-1-samitolvanen@google.com
2021-03-08  Merge git://git.kernel.org:/pub/scm/linux/kernel/git/davem/sparc  (Linus Torvalds)
Pull sparc updates from David Miller:
 "Just some more random bits from Al, including a conversion over to generic extables"

* git://git.kernel.org:/pub/scm/linux/kernel/git/davem/sparc:
  sparc32: take ->thread.flags out
  sparc32: get rid of fake_swapper_regs
  sparc64: get rid of fake_swapper_regs
  sparc32: switch to generic extables
  sparc32: switch copy_user.S away from range exception table entries
  sparc32: get rid of range exception table entries in checksum_32.S
  sparc32: switch __bzero() away from range exception table entries
  sparc32: kill lookup_fault()
  sparc32: don't bother with lookup_fault() in __bzero()
2021-03-08  xtensa: move coprocessor_flush to the .text section  (Max Filippov)
coprocessor_flush is not a part of the fast exception handlers, but it uses parts of the fast coprocessor handling code, which is why it is in the same source file. It uses the call0 opcode to invoke those parts, so there are no limitations on their relative location, but the rest of the code calls coprocessor_flush with call8, and that doesn't work when vectors are placed in a different gigabyte-aligned area than the rest of the kernel. Move coprocessor_flush from the .exception.text section to .text so that it's reachable from the rest of the kernel with call8. Cc: stable@vger.kernel.org Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2021-03-08  ARM: ep93xx: Select GENERIC_IRQ_MULTI_HANDLER directly  (Marc Zyngier)
ep93xx currently relies on CONFIG_ARM_VIC to select GENERIC_IRQ_MULTI_HANDLER. Given that this is logically a platform architecture property, add the selection of GENERIC_IRQ_MULTI_HANDLER at the platform level. Further patches will remove the selection from the irqchip side. Reported-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hector Martin <marcan@marcan.st> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-03-08  x86/virtio: Have SEV guests enforce restricted virtio memory access  (Tom Lendacky)
An SEV guest requires that virtio devices use the DMA API to allow the hypervisor to successfully access guest memory as needed. The VIRTIO_F_VERSION_1 and VIRTIO_F_ACCESS_PLATFORM features tell virtio to use the DMA API. Add arch_has_restricted_virtio_memory_access() for x86, to fail the device probe if these features have not been set for the device when running as an SEV guest. [ bp: Fix -Wmissing-prototypes warning Reported-by: kernel test robot <lkp@intel.com> ] Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/b46e0211f77ca1831f11132f969d470a6ffc9267.1614897610.git.thomas.lendacky@amd.com
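A minimal sketch of what such a hook can look like (assuming the SEV state is queried with sev_active(); whether the actual patch uses that helper or a different one is an assumption here):

    #include <linux/mem_encrypt.h>
    #include <linux/export.h>

    /* Sketch: let the virtio core refuse devices that bypass the DMA API. */
    int arch_has_restricted_virtio_memory_access(void)
    {
            return sev_active();
    }
    EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);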
2021-03-08  arm64/mm: Reorganize pfn_valid()  (Anshuman Khandual)
There are multiple instances of pfn_to_section_nr() and __pfn_to_section() when CONFIG_SPARSEMEM is enabled. This can be optimized if the memory section is fetched earlier. This replaces the open coded PFN and ADDR conversion with the PFN_PHYS() and PHYS_PFN() helpers. While there, also add a comment. This does not cause any functional change. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1614921898-4099-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-03-08  arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory  (Anshuman Khandual)
pfn_valid() validates a pfn, but basically it checks for a valid struct page backing for that pfn. It should always return positive for memory ranges backed with struct page mapping. But currently pfn_valid() fails for all ZONE_DEVICE based memory types even though they have struct page mapping. pfn_valid() asserts that there is a memblock entry for a given pfn without the MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE based memory is that they do not have memblock entries. Hence memblock_is_map_memory() will invariably fail via memblock_search() for a ZONE_DEVICE based address. This eventually fails pfn_valid(), which is wrong. memblock_is_map_memory() needs to be skipped for such memory ranges. As ZONE_DEVICE memory gets hotplugged into the system via memremap_pages() called from a driver, their respective memory sections will not have SECTION_IS_EARLY set. Normal hotplug memory will never have MEMBLOCK_NOMAP set in their memblock regions, because the MEMBLOCK_NOMAP flag was specifically designed and set for firmware reserved memory regions; memblock_is_map_memory() can just be skipped, as it's always going to be positive, and that will be an optimization for the normal hotplug memory. Like ZONE_DEVICE based memory, all normal hotplugged memory too will not have SECTION_IS_EARLY set for their sections. Skipping memblock_is_map_memory() for all non early memory sections would fix the pfn_valid() problem for ZONE_DEVICE based memory and also improve its performance for normal hotplug memory as well. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Acked-by: David Hildenbrand <david@redhat.com> Fixes: 73b20c84d42d ("arm64: mm: implement pte_devmap support") Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1614921898-4099-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
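The resulting check can be pictured roughly as follows (a sketch of the idea rather than the exact arm64 pfn_valid(); early_section() and pfn_section_valid() are the generic sparsemem helpers):

    #include <linux/mmzone.h>
    #include <linux/memblock.h>
    #include <linux/pfn.h>

    int pfn_valid_sketch(unsigned long pfn)
    {
            phys_addr_t addr = PFN_PHYS(pfn);

    #ifdef CONFIG_SPARSEMEM
            struct mem_section *ms;

            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;

            ms = __pfn_to_section(pfn);
            if (!valid_section(ms))
                    return 0;

            /* ZONE_DEVICE/hotplugged sections have no memblock entry to consult. */
            if (!early_section(ms))
                    return pfn_section_valid(ms, pfn);
    #endif
            return memblock_is_map_memory(addr);
    }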
2021-03-08  MIPS: vmlinux.lds.S: align raw appended dtb to 8 bytes  (Bjørn Mork)
The devicetree specification requires 8-byte alignment in memory. This is now enforced by libfdt since commit 79edff12060f ("scripts/dtc: Update to upstream version v1.6.0-51-g183df9e9c2b9") which included the upstream commit 5e735860c478 ("libfdt: Check for 8-byte address alignment in fdt_ro_probe_()"). This broke the MIPS raw appended DTBs, which would be appended to the image immediately following the initramfs section. That section ends with a 32-bit size, resulting in a 4-byte alignment of the DTB. Fix by padding with zeroes to an 8-byte boundary when MIPS_RAW_APPENDED_DTB is defined. Fixes: 79edff12060f ("scripts/dtc: Update to upstream version v1.6.0-51-g183df9e9c2b9") Cc: Rob Herring <robh+dt@kernel.org> Cc: Frank Rowand <frowand.list@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-03-08  clocksource/drivers/hyper-v: Move handling of STIMER0 interrupts  (Michael Kelley)
STIMER0 interrupts are most naturally modeled as per-cpu IRQs. But because x86/x64 doesn't have per-cpu IRQs, the core STIMER0 interrupt handling machinery is done in code under arch/x86 and Linux IRQs are not used. Adding support for ARM64 means adding equivalent code using per-cpu IRQs under arch/arm64. A better model is to treat per-cpu IRQs as the normal path (which it is for modern architectures), and the x86/x64 path as the exception. Do this by incorporating standard Linux per-cpu IRQ allocation into the main STIMER0 driver code, and bypass it in the x86/x64 exception case. For x86/x64, special case code is retained under arch/x86, but no STIMER0 interrupt handling code is needed under arch/arm64. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://lore.kernel.org/r/1614721102-2241-11-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  clocksource/drivers/hyper-v: Handle sched_clock differences inline  (Michael Kelley)
While the Hyper-V Reference TSC code is architecture neutral, the pv_ops.time.sched_clock() function is implemented for x86/x64, but not for ARM64. Current code calls a utility function under arch/x86 (and coming, under arch/arm64) to handle the difference. Change this approach to handle the difference inline based on whether GENERIC_SCHED_CLOCK is present. The new approach removes code under arch/* since the difference is tied more to the specifics of the Linux implementation than to the architecture. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://lore.kernel.org/r/1614721102-2241-9-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  clocksource/drivers/hyper-v: Handle vDSO differences inline  (Michael Kelley)
While the driver for the Hyper-V Reference TSC and STIMERs is architecture neutral, vDSO is implemented for x86/x64, but not for ARM64. Current code calls into utility functions under arch/x86 (and coming, under arch/arm64) to handle the difference. Change this approach to handle the difference inline based on whether VDSO_CLOCK_MODE_HVCLOCK is present. The new approach removes code under arch/* since the difference is tied more to the specifics of the Linux implementation than to the architecture. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Link: https://lore.kernel.org/r/1614721102-2241-8-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  Drivers: hv: vmbus: Move handling of VMbus interrupts  (Michael Kelley)
VMbus interrupts are most naturally modelled as per-cpu IRQs. But because x86/x64 doesn't have per-cpu IRQs, the core VMbus interrupt handling machinery is done in code under arch/x86 and Linux IRQs are not used. Adding support for ARM64 means adding equivalent code using per-cpu IRQs under arch/arm64. A better model is to treat per-cpu IRQs as the normal path (which it is for modern architectures), and the x86/x64 path as the exception. Do this by incorporating standard Linux per-cpu IRQ allocation into the main VMbus driver, and bypassing it in the x86/x64 exception case. For x86/x64, special case code is retained under arch/x86, but no VMbus interrupt handling code is needed under arch/arm64. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/1614721102-2241-7-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  Drivers: hv: vmbus: Handle auto EOI quirk inline  (Michael Kelley)
On x86/x64, Hyper-V provides a flag to indicate auto EOI functionality, but it doesn't on ARM64. Handle this quirk inline instead of calling into code under arch/x86 (and coming, under arch/arm64). No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/1614721102-2241-6-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  Drivers: hv: vmbus: Move hyperv_report_panic_msg to arch neutral code  (Michael Kelley)
With the new Hyper-V MSR set function, hyperv_report_panic_msg() can be architecture neutral, so move it out from under arch/x86 and merge into hv_kmsg_dump(). This move also avoids needing a separate implementation under arch/arm64. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/1614721102-2241-5-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  Drivers: hv: Redo Hyper-V synthetic MSR get/set functions  (Michael Kelley)
Current code defines a separate get and set macro for each Hyper-V synthetic MSR used by the VMbus driver. Furthermore, the get macro can't be converted to a standard function because the second argument is modified in place, which is somewhat bad form. Redo this by providing a single get and a single set function that take a parameter specifying the MSR to be operated on. Fixup usage of the get function. Calling locations are no more complex than before, but the code under arch/x86 and the upcoming code under arch/arm64 is significantly simplified. Also standardize the names of Hyper-V synthetic MSRs that are architecture neutral. But keep the old x86-specific names as aliases that can be removed later when all references (particularly in KVM code) have been cleaned up in a separate patch series. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/1614721102-2241-4-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
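The shape of the resulting interface on the x86/x64 side is roughly the sketch below (the hv_get_register()/hv_set_register() names and the HV_REGISTER_SIMP constant are assumptions about the series, not verified here):

    #include <asm/msr.h>

    /* Sketch: one get/set pair parameterized by the synthetic MSR. */
    static inline void hv_set_register(unsigned int reg, u64 value)
    {
            wrmsrl(reg, value);
    }

    static inline u64 hv_get_register(unsigned int reg)
    {
            u64 value;

            rdmsrl(reg, value);
            return value;
    }

    /* old: hv_get_simp(simp.as_uint64);  new: simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP); */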
2021-03-08  x86/hyper-v: Move hv_message_type to architecture neutral module  (Michael Kelley)
The definition of enum hv_message_type includes arch neutral and x86/x64-specific values. Ideally there would be a way to put the arch neutral values in an arch neutral module, and the arch specific values in an arch specific module. But C doesn't provide a way to extend enum types. As a compromise, move the entire definition into an arch neutral module, to avoid duplicating the arch neutral values for x86/x64 and for ARM64. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/1614721102-2241-3-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  Drivers: hv: vmbus: Move Hyper-V page allocator to arch neutral code  (Michael Kelley)
The Hyper-V page allocator functions are implemented in an architecture neutral way. Move them into the architecture neutral VMbus module so a separate implementation for ARM64 is not needed. No functional change. Signed-off-by: Michael Kelley <mikelley@microsoft.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/r/1614721102-2241-2-git-send-email-mikelley@microsoft.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2021-03-08  x86/sev-es: Introduce ip_within_syscall_gap() helper  (Joerg Roedel)
Introduce a helper to check whether an exception came from the syscall gap and use it in the SEV-ES code. Extend the check to also cover the compatibility SYSCALL entry path. Fixes: 315562c9af3d5 ("x86/sev-es: Adjust #VC IST Stack on entering NMI handler") Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: stable@vger.kernel.org # 5.10+ Link: https://lkml.kernel.org/r/20210303141716.29223-2-joro@8bytes.org
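Conceptually the helper compares regs->ip against the boundaries of the SYSCALL entry trampolines; a hedged sketch (the entry symbol names below are assumptions, not taken from the patch):

    static bool ip_within_syscall_gap_sketch(struct pt_regs *regs)
    {
            bool ret = regs->ip >= (unsigned long)entry_SYSCALL_64 &&
                       regs->ip <  (unsigned long)entry_SYSCALL_64_safe_stack;

    #ifdef CONFIG_IA32_EMULATION
            /* Compatibility SYSCALL entry path, per the commit message. */
            ret = ret || (regs->ip >= (unsigned long)entry_SYSCALL_compat &&
                          regs->ip <  (unsigned long)entry_SYSCALL_compat_safe_stack);
    #endif
            return ret;
    }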
2021-03-08  arm64/mm: Drop THP conditionality from FORCE_MAX_ZONEORDER  (Anshuman Khandual)
Currently without THP being enabled, MAX_ORDER via FORCE_MAX_ZONEORDER gets reduced to 11, which falls below HUGETLB_PAGE_ORDER for certain 16K and 64K page size configurations. This is problematic and throws up the following warning during boot, as pageblock_order, via HUGETLB_PAGE_ORDER, exceeds MAX_ORDER:

  WARNING: CPU: 7 PID: 127 at mm/vmstat.c:1092 __fragmentation_index+0x58/0x70
  Modules linked in:
  CPU: 7 PID: 127 Comm: kswapd0 Not tainted 5.12.0-rc1-00005-g0221e3101a1 #237
  Hardware name: linux,dummy-virt (DT)
  pstate: 20400005 (nzCv daif +PAN -UAO -TCO BTYPE=--)
  pc : __fragmentation_index+0x58/0x70
  lr : fragmentation_index+0x88/0xa8
  sp : ffff800016ccfc00
  x29: ffff800016ccfc00 x28: 0000000000000000
  x27: ffff800011fd4000 x26: 0000000000000002
  x25: ffff800016ccfda0 x24: 0000000000000002
  x23: 0000000000000640 x22: ffff0005ffcb5b18
  x21: 0000000000000002 x20: 000000000000000d
  x19: ffff0005ffcb3980 x18: 0000000000000004
  x17: 0000000000000001 x16: 0000000000000019
  x15: ffff800011ca7fb8 x14: 00000000000002b3
  x13: 0000000000000000 x12: 00000000000005e0
  x11: 0000000000000003 x10: 0000000000000080
  x9 : ffff800011c93948 x8 : 0000000000000000
  x7 : 0000000000000000 x6 : 0000000000007000
  x5 : 0000000000007944 x4 : 0000000000000032
  x3 : 000000000000001c x2 : 000000000000000b
  x1 : ffff800016ccfc10 x0 : 000000000000000d
  Call trace:
   __fragmentation_index+0x58/0x70
   compaction_suitable+0x58/0x78
   wakeup_kcompactd+0x8c/0xd8
   balance_pgdat+0x570/0x5d0
   kswapd+0x1e0/0x388
   kthread+0x154/0x158
   ret_from_fork+0x10/0x30

This solves the problem by keeping FORCE_MAX_ZONEORDER unchanged with or without THP on 16K and 64K page size configurations, making sure that HUGETLB_PAGE_ORDER (and pageblock_order) never exceeds MAX_ORDER.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1614597914-28565-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
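To make the arithmetic concrete for a 64K-page kernel (values assume the usual arm64 layout): PAGE_SHIFT is 16 and PMD_SHIFT is 29, so HUGETLB_PAGE_ORDER = PMD_SHIFT - PAGE_SHIFT = 13, and pageblock_order follows it. With THP disabled, FORCE_MAX_ZONEORDER previously dropped to 11, so pageblock_order (13) exceeded MAX_ORDER (11) and __fragmentation_index() warned. Keeping FORCE_MAX_ZONEORDER at 14 for 64K pages and 12 for 16K pages regardless of THP keeps HUGETLB_PAGE_ORDER strictly below MAX_ORDER, as this standalone illustration shows:

    /* Worked numbers for a 64K-page configuration; illustration only. */
    #define SKETCH_PAGE_SHIFT       16
    #define SKETCH_PMD_SHIFT        29                                       /* 16 + (16 - 3) */
    #define SKETCH_HUGETLB_ORDER    (SKETCH_PMD_SHIFT - SKETCH_PAGE_SHIFT)   /* 13 */
    #define SKETCH_MAX_ORDER        14                                       /* FORCE_MAX_ZONEORDER */

    _Static_assert(SKETCH_HUGETLB_ORDER < SKETCH_MAX_ORDER,
                   "pageblock_order must stay below MAX_ORDER");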
2021-03-08  arm64/mm: Drop redundant ARCH_WANT_HUGE_PMD_SHARE  (Anshuman Khandual)
There is already an ARCH_WANT_HUGE_PMD_SHARE which is being selected for applicable configurations. Hence just drop the other redundant entry. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1614575192-21307-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-03-08  arm64: Drop support for CMDLINE_EXTEND  (Will Deacon)
The documented behaviour for CMDLINE_EXTEND is that the arguments from the bootloader are appended to the built-in kernel command line. This also matches the option parsing behaviour for the EFI stub and early ID register overrides. Bizarrely, the fdt behaviour is the other way around: appending the built-in command line to the bootloader arguments, resulting in a command-line that doesn't necessarily line-up with the parsing order and definitely doesn't line-up with the documented behaviour. As it turns out, there is a proposal [1] to replace CMDLINE_EXTEND with CMDLINE_PREPEND and CMDLINE_APPEND options which should hopefully make the intended behaviour much clearer. While we wait for those to land, drop CMDLINE_EXTEND for now as there appears to be little enthusiasm for changing the current FDT behaviour. [1] https://lore.kernel.org/lkml/20190319232448.45964-2-danielwa@cisco.com/ Cc: Max Uvarov <muvarov@gmail.com> Cc: Rob Herring <robh@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Doug Anderson <dianders@chromium.org> Cc: Tyler Hicks <tyhicks@linux.microsoft.com> Cc: Frank Rowand <frowand.list@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/CAL_JsqJX=TCCs7=gg486r9TN4NYscMTCLNfqJF9crskKPq-bTg@mail.gmail.com Link: https://lore.kernel.org/r/20210303134927.18975-3-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-03-08  arm64: cpufeatures: Fix handling of CONFIG_CMDLINE for idreg overrides  (Will Deacon)
The built-in kernel commandline (CONFIG_CMDLINE) can be configured in three different ways: 1. CMDLINE_FORCE: Use CONFIG_CMDLINE instead of any bootloader args 2. CMDLINE_EXTEND: Append the bootloader args to CONFIG_CMDLINE 3. CMDLINE_FROM_BOOTLOADER: Only use CONFIG_CMDLINE if there aren't any bootloader args. The early cmdline parsing to detect idreg overrides gets (2) and (3) slightly wrong: in the case of (2) the bootloader args are parsed first and in the case of (3) the CMDLINE is always parsed. Fix these issues by moving the bootargs parsing out into a helper function and following the same logic as that used by the EFI stub. Reviewed-by: Marc Zyngier <maz@kernel.org> Fixes: 33200303553d ("arm64: cpufeature: Add an early command-line cpufeature override facility") Link: https://lore.kernel.org/r/20210303134927.18975-2-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-03-08  x86/platform/uv: Set section block size for hubless architectures  (Mike Travis)
Commit bbbd2b51a2aa ("x86/platform/UV: Use new set memory block size function") added a call to set the block size value that is needed by the kernel to set the boundaries in the section list. This was done for UV Hubbed systems but missed in the UV Hubless setup. Fix that mistake by adding the same set call for hubless systems, which support the same NVRAMs and Intel BIOS and thus have the same problem. [ bp: Massage commit message. ] Fixes: bbbd2b51a2aa ("x86/platform/UV: Use new set memory block size function") Signed-off-by: Mike Travis <mike.travis@hpe.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Steve Wahl <steve.wahl@hpe.com> Reviewed-by: Russ Anderson <rja@hpe.com> Link: https://lkml.kernel.org/r/20210305162853.299892-1-mike.travis@hpe.com
2021-03-08  crypto: mips/poly1305 - enable for all MIPS processors  (Maciej W. Rozycki)
The MIPS Poly1305 implementation is generic MIPS code written so as to support down to the original MIPS I and MIPS III ISA for the 32-bit and 64-bit variant respectively. Lift the current restriction, which enabled the code only for MIPSr1-or-newer ISA processors, and make it available for all MIPS processors. Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk> Fixes: a11d055e7a64 ("crypto: mips/poly1305 - incorporate OpenSSL/CRYPTOGAMS optimized implementation") Cc: stable@vger.kernel.org # v5.5+ Acked-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-03-08  MIPS: boot/compressed: Copy DTB to aligned address  (Paul Cercueil)
Since 5.12-rc1, the Device Tree blob must now be properly aligned. Therefore, the decompress routine must be careful to copy the blob at the next aligned address after the kernel image. This commit fixes the kernel sometimes not booting with a Device Tree blob appended to it. Fixes: 79edff12060f ("scripts/dtc: Update to upstream version v1.6.0-51-g183df9e9c2b9") Signed-off-by: Paul Cercueil <paul@crapouillou.net> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
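The fix boils down to rounding the destination up to the next 8-byte boundary before copying; roughly (a sketch, not the actual decompressor code):

    #include <string.h>

    #define DTB_ALIGN_UP(x, a)      (((x) + ((a) - 1)) & ~((unsigned long)(a) - 1))

    /* Sketch: relocate the appended DTB so libfdt's 8-byte alignment check passes. */
    void *copy_appended_dtb(unsigned long image_end, const void *dtb, unsigned long dtb_size)
    {
            void *fdt = (void *)DTB_ALIGN_UP(image_end, 8);

            memmove(fdt, dtb, dtb_size);
            return fdt;     /* hand this pointer on as the device tree */
    }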
2021-03-08  x86: Remove duplicate TSC DEADLINE MSR definitions  (Dave Hansen)
There are two definitions for the TSC deadline MSR in msr-index.h, one with an underscore and one without. Axe one of them and move all the references over to the other one. [ bp: Fixup the MSR define in handle_fastpath_set_msr_irqoff() too. ] Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200305174706.0D6B8EE4@viggo.jf.intel.com
2021-03-08  s390: remove IBM_PARTITION and CONFIGFS_FS from zfcpdump defconfig  (Alexander Egorenkov)
Remove CONFIG_IBM_PARTITION and CONFIG_CONFIGFS_FS, which are not used by zfcpdump. Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Steffen Maier <maier@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  s390: update defconfigs  (Heiko Carstens)
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  s390/cpumf: remove unneeded semicolon  (Jiapeng Chong)
Fix the following coccicheck warnings: ./arch/s390/kernel/perf_cpum_cf.c:272:2-3: Unneeded semicolon. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Link: https://lore.kernel.org/r/1614233736-87331-1-git-send-email-jiapeng.chong@linux.alibaba.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  s390/cpumf: rename header file to hwctrset.h  (Thomas Richter)
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Suggested-by: Hendrick Brueckner <brueckner@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  s390/cpumf: remove 60 seconds read limit  (Thomas Richter)
Remove the 60 seconds read interval limit. Do not impose any limit at all and allow read of complete counter sets. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  s390/topology: remove always false if check  (Heiko Carstens)
The cpumask being checked in cpu_group_map() must have at least one cpu set; therefore remove the check. Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  s390/time,idle: get rid of unsigned long long  (Heiko Carstens)
Get rid of unsigned long long, and use unsigned long instead everywhere. The usage of unsigned long long is a leftover from 31 bit kernel support. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2021-03-08  Merge branch 'fixes-rc2' into fixes  (Tony Lindgren)
2021-03-08  ARM: dts: Fix moving mmc devices with aliases for omap4 & 5  (Tony Lindgren)
Fix moving mmc devices with dts aliases as discussed on the lists. Without this we now have internal eMMC mmc1 show up as mmc2 compared to the earlier order of devices. Signed-off-by: Tony Lindgren <tony@atomide.com>
2021-03-08  ARM: dts: Drop duplicate sha2md5_fck to fix clk_disable race  (Tony Lindgren)
We have a duplicate legacy clock defined for sha2md5_fck that can sometimes race with clk_disable() with the dts configured clock for OMAP4_SHA2MD5_CLKCTRL when unused clocks are disabled during boot causing an "Unhandled fault: imprecise external abort". Signed-off-by: Tony Lindgren <tony@atomide.com>
2021-03-07  Merge branch 'parisc-5.12-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux  (Linus Torvalds)
Pull parisc fixes from Helge Deller:
 "Two small parisc architecture fixes: fix a linking failure reported by the kernel test robot and remove a duplicate include"

* 'parisc-5.12-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  arch/parisc/kernel: remove duplicate include in ptrace
  parisc: Enable -mlong-calls gcc option with CONFIG_COMPILE_TEST