path: root/arch
Age | Commit message | Author
2020-05-26 | Merge branches 'fixes' and 'misc' into for-next | Russell King
2020-05-26 | ARM: 8980/1: Allow either FLATMEM or SPARSEMEM on the multiplatform build | Gregory Fong
ARMv7 chips with LPAE can often benefit from SPARSEMEM, as portions of system memory can be located deep in the 36-bit address space. Allow FLATMEM or SPARSEMEM to be selectable at compile time; FLATMEM remains the default. This is based on Kevin's "[PATCH 3/3] ARM: Allow either FLATMEM or SPARSEMEM on the multi-v7 build" from [1] and shamelessly rips off his commit message text above. As Arnd pointed out at [2] there doesn't seem to be any reason to tie this specifically to ARMv7, so this has been changed to apply to all multiplatform kernels. The addition of this option does not change the defaults and a build with any defconfig will behave the same way as previously. The only effect this change has is to enable user to change "Memory model" selection in interactive kernel configuration (menuconfig, xconfig etc). [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-September/286837.html [2] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-October/298950.html [ rppt: added ARCH_SELECT_MEMORY_MODEL and updated the changelog ] Cc: Kevin Cernekee <cernekee@gmail.com> Tested-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com> Signed-off-by: Doug Berger <opendmb@gmail.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Mike Rapoport <mike.rapoport@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-26 | ARM: 8979/1: Remove redundant ARCH_SPARSEMEM_DEFAULT setting | Kevin Cernekee
If ARCH_SPARSEMEM_ENABLE=y and ARCH_{FLATMEM,DISCONTIGMEM}_ENABLE=n, then the logic in mm/Kconfig already makes CONFIG_SPARSEMEM the only choice. This is true for all of the existing ARM users of ARCH_SPARSEMEM_ENABLE. Forcing ARCH_SPARSEMEM_DEFAULT=y if ARCH_SPARSEMEM_ENABLE=y prevents us from ever defaulting to FLATMEM, so we should remove this setting. Link: https://lkml.org/lkml/2015/6/4/757 Signed-off-by: Kevin Cernekee <cernekee@gmail.com> Tested-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com> Signed-off-by: Doug Berger <opendmb@gmail.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Mike Rapoport <mike.rapoport@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-26 | ARM: 8978/1: mm: make act_mm() respect THREAD_SIZE | Linus Walleij
Recent work with KASan exposed the following hard-coded bitmask in arch/arm/mm/proc-macros.S:

    bic rd, sp, #8128
    bic rd, rd, #63

Together these form the bitmask 0x1FFF, which coincides with (PAGE_SIZE << THREAD_SIZE_ORDER) - 1; the code was assuming that THREAD_SIZE is always 8K (8192). As KASan increases THREAD_SIZE_ORDER to 2, I ran into this bug. Fix it with this little one-liner suggested by Ard:

    bic rd, sp, #(THREAD_SIZE - 1) & ~63

where THREAD_SIZE is defined using THREAD_SIZE_ORDER. We also have to include <linux/const.h> since THREAD_SIZE expands to use the _AC() macro. Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Suggested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
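As a quick sanity check of the arithmetic (not part of the patch; plain C with illustrative values), the THREAD_SIZE-based immediate reproduces the old literal 8128 only for 8 KiB stacks:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long thread_size_8k  = 8192UL;   /* THREAD_SIZE_ORDER = 1 */
        unsigned long thread_size_16k = 16384UL;  /* THREAD_SIZE_ORDER = 2, as with KASan */

        /* the first bic immediate in the old macro was the literal 8128 */
        assert(((thread_size_8k - 1) & ~63UL) == 8128UL);
        /* with 16K stacks the old literal no longer clears enough bits */
        assert(((thread_size_16k - 1) & ~63UL) == 16320UL);
        printf("the hard-coded 8128 only matches the 8K-stack case\n");
        return 0;
    }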
2020-05-26 | Merge tag 'efi-arm-no-relocate-for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux into misc | Russell King
Simplify EFI handover to decompressor

The EFI stub in the ARM kernel runs in the context of the firmware, which means it usually runs with the caches and MMU on. Currently, we relocate the zImage so it appears in the first 128 MiB, disable the MMU and caches and invoke the decompressor via its ordinary entry point. However, since we can pass the base of DRAM directly, there is no need to relocate the zImage, which also means there is no need to disable and re-enable the caches and create new page tables etc. This also allows systems whose DRAM start address is not a round multiple of 128 MB to decompress the kernel proper to the base of memory, ensuring that all memory is usable at runtime.
2020-05-19 | ARM: decompressor: run decompressor in place if loaded via UEFI | Ard Biesheuvel
The decompressor can load from anywhere in memory, and the only reason the EFI stub code relocates it is to ensure it appears within the first 128 MiB of memory, so that the uncompressed kernel ends up at the right offset in memory. We can short circuit this, and simply jump into the decompressor startup code at the point where it knows where the base of memory lives. This also means there is no need to disable the MMU and caches, create new page tables and re-enable them. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19 | ARM: decompressor: move GOT into .data for EFI enabled builds | Ard Biesheuvel
We will be running the decompressor in place after a future patch, instead of copying it around first. This means we no longer have to disable and re-enable the MMU and caches either. However, this means we will be loaded with the restricted permissions set by the UEFI firmware, which means that we have to move the GOT table into the data section in order for the contents to be writable by the code itself. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19 | ARM: decompressor: defer loading of the contents of the LC0 structure | Ard Biesheuvel
The remaining contents of LC0 are only used after the point in the decompressor startup code where we enter via 'wont_overwrite'. So move the loading of the LC0 structure after it. This will allow us to jump to wont_overwrite directly from the EFI stub, and execute the decompressor in place at the offset it was loaded by the UEFI firmware. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19 | ARM: decompressor: split off _edata and stack base into separate object | Ard Biesheuvel
In preparation of moving the handling of the LC0 object to a later stage in the decompressor startup code, move out _edata and the initial value of the stack pointer, which are needed earlier than the remaining contents of LC0. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19 | ARM: decompressor: move headroom variable out of LC0 | Ard Biesheuvel
Before breaking up LC0 into different pieces, move out the variable that is already place-relative (given that it subtracts 'restart' in the expression) and so its value does not need to be added to the runtime address of the LC0 symbol itself. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19 | ARM: 8975/1: module: fix handling of unwind init sections | Vincent Whitchurch
Unwind information for init sections is placed in .ARM.exidx.init.text and .ARM.extab.init.text. The module core doesn't know that these are init sections so they are allocated along with the core sections, and if the core and init sections get allocated in different memory regions (which is possible with CONFIG_ARM_MODULE_PLTS=y) and they can't reach each other, relocation fails:

    final section addresses:
        ...
        0x7f800000 .init.text
        ..
        0xcbb54078 .ARM.exidx.init.text
        ..
    section 16 reloc 0 sym '': relocation 42 out of range (0xcbb54078 -> 0x7f800000)

Fix this by informing the module core that these sections are init sections, and by removing the init unwind tables before the module core frees the init sections. Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-19 | ARM: 8977/1: ptrace: Fix mask for thumb breakpoint hook | Fredrik Strupe
call_undef_hook() in traps.c applies the same instr_mask for both 16-bit and 32-bit thumb instructions. If instr_mask then is only 16 bits wide (0xffff as opposed to 0xffffffff), the first half-word of 32-bit thumb instructions will be masked out. This makes the function match 32-bit thumb instructions where the second half-word is equal to instr_val, regardless of the first half-word. The result in this case is that all undefined 32-bit thumb instructions with the second half-word equal to 0xde01 (udf #1) work as breakpoints and will raise a SIGTRAP instead of a SIGILL, instead of just the one intended 16-bit instruction. An example of such an instruction is 0xeaa0de01, which is unallocated according to Arm ARM and should raise a SIGILL, but instead raises a SIGTRAP. This patch fixes the issue by setting all the bits in instr_mask, which will still match the intended 16-bit thumb instruction (where the upper half is always 0), but not any 32-bit thumb instructions. Cc: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Fredrik Strupe <fredrik@strupe.net> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
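As an illustration of the masking problem (a self-contained sketch, not kernel code; the encodings are taken from the commit text above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t instr    = 0xeaa0de01;  /* undefined 32-bit Thumb encoding from the commit */
        uint32_t hook_val = 0x0000de01;  /* the 16-bit "udf #1" breakpoint the hook registered */
        uint32_t mask16   = 0x0000ffff;  /* old mask: ignores the first half-word */
        uint32_t mask32   = 0xffffffff;  /* fixed mask: compares the whole instruction */

        printf("16-bit mask matches: %d (bogus SIGTRAP)\n", (instr & mask16) == hook_val);
        printf("32-bit mask matches: %d (falls through to SIGILL)\n", (instr & mask32) == hook_val);
        return 0;
    }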
2020-05-19 | ARM: 8974/1: use SPARSMEM_STATIC when SPARSEMEM is enabled | Mike Rapoport
Commit 3e347261a80b5 ("[PATCH] sparsemem extreme implementation") made SPARSEMEM_EXTREME the default option for configurations that enable SPARSEMEM. For ARM systems with a handful of memory banks SPARSEMEM_EXTREME is overkill. Ensure that SPARSEMEM_STATIC is enabled in the configurations that use SPARSEMEM. Fixes: 3e347261a80b5 ("[PATCH] sparsemem extreme implementation") Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-07 | Merge branch 'uaccess' into fixes | Russell King
2020-05-07 | ARM: 8973/1: Add missing newline terminator to kernel message | Geert Uytterhoeven
Before commit 874f9c7da9a4acbc ("printk: create pr_<level> functions"), pr_*() calls without a trailing newline character would be printed with a newline character appended, both on the console and in the output of the dmesg command. After that commit, no newline character is appended, and the output of the next pr_*() call of the same type may be appended:

    -No ATAGs?
    -hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers.
    +No ATAGs?hw-breakpoint: found 5 (+1 reserved) breakpoint and 4 watchpoint registers.

While that commit has been reverted in commit a0cba2179ea4c182 ("Revert "printk: create pr_<level> functions""), it's still good practice to terminate kernel messages with newlines. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-03 | ARM: uaccess: fix DACR mismatch with nested exceptions [uaccess] | Russell King
Tomas Paukrt reports that his SAM9X60 based system (ARM926, ARMv5TJ) fails to fix up alignment faults, eventually resulting in a kernel oops. The problem occurs when using CONFIG_CPU_USE_DOMAINS with commit e6978e4bf181 ("ARM: save and reset the address limit when entering an exception"). This is because the address limit is set back to TASK_SIZE on exception entry, and, although it is restored on exception exit, the domain register is not. Hence, this sequence can occur:

    interrupt
      pt_regs->addr_limit = addr_limit // USER_DS
      addr_limit = USER_DS
    alignment exception
      __probe_kernel_read()
        old_fs = get_fs() // USER_DS
        set_fs(KERNEL_DS)
          addr_limit = KERNEL_DS
          dacr.kernel = DOMAIN_MANAGER
        interrupt
          pt_regs->addr_limit = addr_limit // KERNEL_DS
          addr_limit = USER_DS
        alignment exception
          __probe_kernel_read()
            old_fs = get_fs() // USER_DS
            set_fs(KERNEL_DS)
              addr_limit = KERNEL_DS
              dacr.kernel = DOMAIN_MANAGER
            ...
            set_fs(old_fs)
              addr_limit = USER_DS
              dacr.kernel = DOMAIN_CLIENT
          ...
          addr_limit = pt_regs->addr_limit // KERNEL_DS
        interrupt returns

At this point, addr_limit is correctly restored to KERNEL_DS for __probe_kernel_read() to continue execution, but dacr.kernel is not, it has been reset by the set_fs(old_fs) to DOMAIN_CLIENT. This would not have happened prior to the mentioned commit, because addr_limit would remain KERNEL_DS, so get_fs() would have returned KERNEL_DS, and so would correctly nest. This commit fixes the problem by also saving the DACR on exception entry if either CONFIG_CPU_SW_DOMAIN_PAN or CONFIG_CPU_USE_DOMAINS are enabled, and resetting the DACR appropriately on exception entry to match addr_limit and PAN settings. Fixes: e6978e4bf181 ("ARM: save and reset the address limit when entering an exception") Reported-by: Tomas Paukrt <tomas.paukrt@advantech.cz> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-03 | ARM: uaccess: integrate uaccess_save and uaccess_restore | Russell King
Integrate uaccess_save / uaccess_restore macros into the new uaccess_entry / uaccess_exit macros respectively. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-03 | ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h | Russell King
Consolidate the user access assembly code to asm/uaccess-asm.h. This moves the csdb, check_uaccess, uaccess_mask_range_ptr, uaccess_enable, uaccess_disable, uaccess_save, uaccess_restore macros, and creates two new ones for exception entry and exit - uaccess_entry and uaccess_exit. This makes the uaccess_save and uaccess_restore macros private to asm/uaccess-asm.h. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-04-29 | ARM: 8970/1: decompressor: increase tag size | Łukasz Stelmach
The size field of the tag header structure is supposed to be set to the size of a tag structure including the header. Fixes: c772568788b5f0 ("ARM: add additional table to compressed kernel") Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-04-29 | ARM: 8971/1: replace the sole use of a symbol with its definition | Jian Cai
The ALT_UP_B macro sets the symbol up_b_offset via .equ to an expression involving another symbol. The macro gets expanded twice when arch/arm/kernel/sleep.S is assembled, creating a scenario where up_b_offset is set to another expression involving symbols while its current value is based on symbols. The LLVM integrated assembler does not allow such cases, and based on the documentation of binutils, "Values that are based on expressions involving other symbols are allowed, but some targets may restrict this to only being done once per assembly", so it may be better to avoid such cases as it is not clearly stated which targets should support or disallow them. The fix in this case is simple, as up_b_offset has only one use, so we can replace the use with the definition and get rid of up_b_offset. Link: https://github.com/ClangBuiltLinux/linux/issues/920 Reviewed-by: Stefan Agner <stefan@agner.ch> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Jian Cai <caij2003@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-04-29 | ARM: 8969/1: decompressor: simplify libfdt builds | Masahiro Yamada
Copying source files during the build may not end up with code as clean as expected. lib/fdt*.c simply wrap scripts/dtc/libfdt/fdt*.c, and that works nicely. Let's follow this approach for the ARM decompressor, too: add four wrappers, arch/arm/boot/compressed/fdt*.c, and remove the Makefile messes. Another nice thing is we no longer need to maintain our own libfdt_env.h because the decompressor can include <linux/libfdt_env.h>. There is a subtle problem when generated files are turned into checked-in files. When you are doing a rebuild of an existing object tree with the O= option, there exist stale "shipped" copies that the old Makefile implementation created. The build system ends up compiling the stale generated files because Make searches for prerequisites in the current directory, i.e. $(objtree), first, and then the directory listed in VPATH, i.e. $(srctree). To mend this issue, I added the following code:

    ifdef building_out_of_srctree
    $(shell rm -f $(addprefix $(obj)/, fdt_rw.c fdt_ro.c fdt_wip.c fdt.c))
    endif

This will need to stay for a while because a "git bisect" crossing this commit would otherwise result in a build error. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
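For illustration only, one of the four wrappers could be as small as a single include, mirroring how the existing lib/fdt*.c files wrap the dtc sources; the exact relative path shown here is an assumption, not quoted from the commit:

    /* arch/arm/boot/compressed/fdt_rw.c (hypothetical wrapper sketch) */
    #include "../../../../lib/fdt_rw.c"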
2020-04-21 | ARM: compat: remove KERNEL_DS usage in sys_oabi_epoll_ctl() | Russell King
We no longer need to switch to KERNEL_DS mode in sys_oabi_epoll_ctl() as we can use do_epoll_ctl() to avoid the additional copy. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-04-12 | Merge tag 'x86-urgent-2020-04-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds
Pull x86 fixes from Thomas Gleixner:
 "A set of three patches to fix the fallout of the newly added split lock detection feature. It addressed the case where a KVM guest triggers a split lock #AC and KVM reinjects it into the guest which is not prepared to handle it. Add proper sanity checks which prevent the unconditional injection into the guest and handles the #AC on the host side in the same way as user space detections are handled. Depending on the detection mode it either warns and disables detection for the task or kills the task if the mode is set to fatal"

* tag 'x86-urgent-2020-04-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest
  KVM: x86: Emulate split-lock access as a write in emulator
  x86/split_lock: Provide handle_guest_split_lock()
2020-04-12 | Merge tag 'perf-urgent-2020-04-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds
Pull perf fixes from Thomas Gleixner:
 "Three fixes/updates for perf:
  - Fix the perf event cgroup tracking which tries to track the cgroup even for disabled events.
  - Add Ice Lake server support for uncore events
  - Disable pagefaults when retrieving the physical address in the sampling code"

* tag 'perf-urgent-2020-04-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/core: Disable page faults when getting phys address
  perf/x86/intel/uncore: Add Ice Lake server uncore support
  perf/cgroup: Correct indirection in perf_less_group_idx()
  perf/core: Fix event cgroup tracking
2020-04-11 | Merge tag 'nios2-v5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2 | Linus Torvalds
Pull nios2 updates from Ley Foon Tan:
 - Remove nios2-dev@lists.rocketboards.org from MAINTAINERS
 - remove 'resetvalue' property
 - rename 'altr,gpio-bank-width' -> 'altr,ngpio'
 - enable the common clk subsystem on Nios2

* tag 'nios2-v5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2:
  MAINTAINERS: Remove nios2-dev@lists.rocketboards.org
  arch: nios2: remove 'resetvalue' property
  arch: nios2: rename 'altr,gpio-bank-width' -> 'altr,ngpio'
  arch: nios2: Enable the common clk subsystem on Nios2
2020-04-11 | Merge tag 'kbuild-v5.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild | Linus Torvalds
Pull more Kbuild updates from Masahiro Yamada:
 - raise minimum supported binutils version to 2.23
 - remove old CONFIG_AS_* macros that we know binutils >= 2.23 supports
 - move remaining CONFIG_AS_* tests to Kconfig from Makefile
 - enable -Wtautological-compare warnings to catch more issues
 - do not support GCC plugins for GCC <= 4.7
 - fix various breakages of 'make xconfig'
 - include the linker version used for linking the kernel into LINUX_COMPILER, which is used for the banner, and also exposed to /proc/version
 - link lib-y objects to vmlinux forcibly when CONFIG_MODULES=y, which allows us to remove the lib-ksyms.o workaround, and to solve the last known issue of the LLVM linker
 - add dummy tools in scripts/dummy-tools/ to enable all compiler tests in Kconfig, which will be useful for distro maintainers
 - support the single switch, LLVM=1 to use Clang and all LLVM utilities instead of GCC and Binutils.
 - support LLVM_IAS=1 to enable the integrated assembler, which is still experimental

* tag 'kbuild-v5.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (36 commits)
  kbuild: fix comment about missing include guard detection
  kbuild: support LLVM=1 to switch the default tools to Clang/LLVM
  kbuild: replace AS=clang with LLVM_IAS=1
  kbuild: add dummy toolchains to enable all cc-option etc. in Kconfig
  kbuild: link lib-y objects to vmlinux forcibly when CONFIG_MODULES=y
  MIPS: fw: arc: add __weak to prom_meminit and prom_free_prom_memory
  kbuild: remove -I$(srctree)/tools/include from scripts/Makefile
  kbuild: do not pass $(KBUILD_CFLAGS) to scripts/mkcompile_h
  Documentation/llvm: fix the name of llvm-size
  kbuild: mkcompile_h: Include $LD version in /proc/version
  kconfig: qconf: Fix a few alignment issues
  kconfig: qconf: remove some old bogus TODOs
  kconfig: qconf: fix support for the split view mode
  kconfig: qconf: fix the content of the main widget
  kconfig: qconf: Change title for the item window
  kconfig: qconf: clean deprecated warnings
  gcc-plugins: drop support for GCC <= 4.7
  kbuild: Enable -Wtautological-compare
  x86: update AS_* macros to binutils >=2.23, supporting ADX and AVX2
  crypto: x86 - clean up poly1305-x86_64-cryptogams.S by 'make clean'
  ...
2020-04-11 | KVM: VMX: Extend VMXs #AC interceptor to handle split lock #AC in guest | Xiaoyao Li
Two types of #AC can be generated in Intel CPUs:
 1. legacy alignment check #AC
 2. split lock #AC

Reflect #AC back into the guest if the guest has legacy alignment checks enabled or if split lock detection is disabled. If the #AC is not a legacy one and split lock detection is enabled, then invoke handle_guest_split_lock() which will either warn and disable split lock detection for this task or force SIGBUS on it. [ tglx: Switch it to handle_guest_split_lock() and rename the misnamed helper function. ] Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lkml.kernel.org/r/20200410115517.176308876@linutronix.de
2020-04-11 | KVM: x86: Emulate split-lock access as a write in emulator | Xiaoyao Li
Emulate split-lock accesses as writes if split lock detection is on to avoid #AC during emulation, which will result in a panic(). This should never occur for a well-behaved guest, but a malicious guest can manipulate the TLB to trigger emulation of a locked instruction[1]. More discussion can be found at [2][3]. [1] https://lkml.kernel.org/r/8c5b11c9-58df-38e7-a514-dc12d687b198@redhat.com [2] https://lkml.kernel.org/r/20200131200134.GD18946@linux.intel.com [3] https://lkml.kernel.org/r/20200227001117.GX9940@linux.intel.com Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lkml.kernel.org/r/20200410115517.084300242@linutronix.de
2020-04-11 | x86/split_lock: Provide handle_guest_split_lock() | Thomas Gleixner
Without at least minimal handling for split lock detection induced #AC, VMX will just run into the same problem as the VMWare hypervisor, which was reported by Kenneth. It will inject the #AC blindly into the guest whether the guest is prepared or not. Provide a function for guest mode which acts depending on the host SLD mode. If mode == sld_warn, treat it like user space, i.e. emit a warning, disable SLD and mark the task accordingly. Otherwise force SIGBUS. [ bp: Add a !CPU_SUP_INTEL stub for handle_guest_split_lock(). ] Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lkml.kernel.org/r/20200410115516.978037132@linutronix.de Link: https://lkml.kernel.org/r/20200402123258.895628824@linutronix.de
2020-04-10 | Merge branch 'akpm' (patches from Andrew) | Linus Torvalds
Merge yet more updates from Andrew Morton:
 - Almost all of the rest of MM (memcg, slab-generic, slab, pagealloc, gup, hugetlb, pagemap, memremap)
 - Various other things (hfs, ocfs2, kmod, misc, seqfile)

* akpm: (34 commits)
  ipc/util.c: sysvipc_find_ipc() should increase position index
  kernel/gcov/fs.c: gcov_seq_next() should increase position index
  fs/seq_file.c: seq_read(): add info message about buggy .next functions
  drivers/dma/tegra20-apb-dma.c: fix platform_get_irq.cocci warnings
  change email address for Pali Rohár
  selftests: kmod: test disabling module autoloading
  selftests: kmod: fix handling test numbers above 9
  docs: admin-guide: document the kernel.modprobe sysctl
  fs/filesystems.c: downgrade user-reachable WARN_ONCE() to pr_warn_once()
  kmod: make request_module() return an error when autoloading is disabled
  mm/memremap: set caching mode for PCI P2PDMA memory to WC
  mm/memory_hotplug: add pgprot_t to mhp_params
  powerpc/mm: thread pgprot_t through create_section_mapping()
  x86/mm: introduce __set_memory_prot()
  x86/mm: thread pgprot_t through init_memory_mapping()
  mm/memory_hotplug: rename mhp_restrictions to mhp_params
  mm/memory_hotplug: drop the flags field from struct mhp_restrictions
  mm/special: create generic fallbacks for pte_special() and pte_mkspecial()
  mm/vma: introduce VM_ACCESS_FLAGS
  mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS
  ...
2020-04-10 | Merge tag 'xtensa-20200410' of git://github.com/jcmvbkbc/linux-xtensa | Linus Torvalds
Pull xtensa updates from Max Filippov:
 - replace setup_irq() by request_irq()
 - cosmetic fixes in xtensa Kconfig and boot/Makefile

* tag 'xtensa-20200410' of git://github.com/jcmvbkbc/linux-xtensa:
  arch/xtensa: fix grammar in Kconfig help text
  xtensa: remove meaningless export ccflags-y
  xtensa: replace setup_irq() by request_irq()
2020-04-10 | Merge tag 'for-linus-5.7-rc1b-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip | Linus Torvalds
Pull more xen updates from Juergen Gross:
 - two cleanups
 - fix a boot regression introduced in this merge window
 - fix wrong use of memory allocation flags

* tag 'for-linus-5.7-rc1b-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  x86/xen: fix booting 32-bit pv guest
  x86/xen: make xen_pvmmu_arch_setup() static
  xen/blkfront: fix memory allocation flags in blkfront_setup_indirect()
  xen: Use evtchn_type_t as a type for event channels
2020-04-10 | change email address for Pali Rohár | Pali Rohár
For security reasons I stopped using gmail account and kernel address is now up-to-date alias to my personal address. People periodically send me emails to address which they found in source code of drivers, so this change reflects state where people can contact me. [ Added .mailmap entry as per Joe Perches - Linus ] Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Joe Perches <joe@perches.com> Link: http://lkml.kernel.org/r/20200307104237.8199-1-pali@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | mm/memory_hotplug: add pgprot_t to mhp_params | Logan Gunthorpe
devm_memremap_pages() is currently used by the PCI P2PDMA code to create struct page mappings for IO memory. At present, these mappings are created with PAGE_KERNEL which implies setting the PAT bits to be WB. However, on x86, an mtrr register will typically override this and force the cache type to be UC-. In the case firmware doesn't set this register it is effectively WB and will typically result in a machine check exception when it's accessed. Other arches are not currently likely to function correctly seeing they don't have any MTRR registers to fall back on. To solve this, provide a way to specify the pgprot value explicitly to arch_add_memory(). Of the arches that support MEMORY_HOTPLUG: x86_64, and arm64 need a simple change to pass the pgprot_t down to their respective functions which set up the page tables. For x86_32, set the page tables explicitly using _set_memory_prot() (seeing they are already mapped). For ia64, s390 and sh, reject anything but PAGE_KERNEL settings -- this should be fine, for now, seeing these architectures don't support ZONE_DEVICE. A check in __add_pages() is also added to ensure the pgprot parameter was set for all arches. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-7-logang@deltatee.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | powerpc/mm: thread pgprot_t through create_section_mapping() | Logan Gunthorpe
In preparation for supporting a pgprot_t argument for arch_add_memory(). Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michal Hocko <mhocko@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-6-logang@deltatee.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | x86/mm: introduce __set_memory_prot() | Logan Gunthorpe
For use in the 32bit arch_add_memory() to set the pgprot type of the memory to add. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: David Hildenbrand <david@redhat.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@suse.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-5-logang@deltatee.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | x86/mm: thread pgprot_t through init_memory_mapping() | Logan Gunthorpe
In preparation to support a pgprot_t argument for arch_add_memory(). It's required to move the prototype of init_memory_mapping() seeing the original location came before the definition of pgprot_t. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: David Hildenbrand <david@redhat.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-4-logang@deltatee.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | mm/memory_hotplug: rename mhp_restrictions to mhp_params | Logan Gunthorpe
The mhp_restrictions struct really doesn't specify anything resembling a restriction anymore so rename it to be mhp_params as it is a list of extended parameters. Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Eric Badger <ebadger@gigaio.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200306170846.9333-3-logang@deltatee.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | mm/special: create generic fallbacks for pte_special() and pte_mkspecial() | Anshuman Khandual
Currently there are many platforms that dont enable ARCH_HAS_PTE_SPECIAL but required to define quite similar fallback stubs for special page table entry helpers such as pte_special() and pte_mkspecial(), as they get build in generic MM without a config check. This creates two generic fallback stub definitions for these helpers, eliminating much code duplication. mips platform has a special case where pte_special() and pte_mkspecial() visibility is wider than what ARCH_HAS_PTE_SPECIAL enablement requires. This restricts those symbol visibility in order to avoid redefinitions which is now exposed through this new generic stubs and subsequent build failure. arm platform set_pte_at() definition needs to be moved into a C file just to prevent a build failure. [anshuman.khandual@arm.com: use defined(CONFIG_ARCH_HAS_PTE_SPECIAL) in mips per Thomas] Link: http://lkml.kernel.org/r/1583851924-21603-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Guo Ren <guoren@kernel.org> [csky] Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k] Acked-by: Stafford Horne <shorne@gmail.com> [openrisc] Acked-by: Helge Deller <deller@gmx.de> [parisc] Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Brian Cain <bcain@codeaurora.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Sam Creasey <sammy@sammy.net> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paulburton@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Greentime Hu <green.hu@gmail.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Link: http://lkml.kernel.org/r/1583802551-15406-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
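A minimal, self-contained sketch of the fallback-stub pattern described above; pte_t and the configuration symbol here are stand-ins, not the kernel's definitions:

    #include <stdio.h>

    typedef struct { unsigned long val; } pte_t;   /* stand-in for the arch type */

    /* An architecture with real special-PTE support would define this symbol and
     * provide its own helpers; everyone else gets the generic no-op stubs. */
    #ifndef ARCH_HAS_PTE_SPECIAL
    static inline int   pte_special(pte_t pte)   { (void)pte; return 0; }
    static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
    #endif

    int main(void)
    {
        pte_t pte = { 0x1000 };
        printf("special? %d\n", pte_special(pte_mkspecial(pte)));  /* 0 on the fallback path */
        return 0;
    }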
2020-04-10 | mm/vma: introduce VM_ACCESS_FLAGS | Anshuman Khandual
There are many places where all basic VMA access flags (read, write, exec) are initialized or checked against as a group. One such example is during page fault. Existing vma_is_accessible() wrapper already creates the notion of VMA accessibility as a group access permissions. Hence lets just create VM_ACCESS_FLAGS (VM_READ|VM_WRITE|VM_EXEC) which will not only reduce code duplication but also extend the VMA accessibility concept in general. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Salter <msalter@redhat.com> Cc: Nick Hu <nickhu@andestech.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Rob Springer <rspringer@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Link: http://lkml.kernel.org/r/1583391014-8170-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
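A small sketch of the grouped-flag idea (the flag values here are illustrative, not quoted from the kernel headers):

    #include <stdio.h>

    #define VM_READ   0x1UL
    #define VM_WRITE  0x2UL
    #define VM_EXEC   0x4UL
    #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)

    int main(void)
    {
        unsigned long vm_flags = VM_READ | VM_EXEC;
        /* one test instead of open-coding the three flags at every call site */
        printf("accessible? %d\n", (vm_flags & VM_ACCESS_FLAGS) != 0);
        return 0;
    }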
2020-04-10 | mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS | Anshuman Khandual
There are many platforms with exact same value for VM_DATA_DEFAULT_FLAGS This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros with standard VMA access flag combinations that are used frequently across many platforms. Apart from simplification, this reduces code duplication as well. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Salter <msalter@redhat.com> Cc: Guo Ren <guoren@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: Brian Cain <bcain@codeaurora.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Paul Burton <paulburton@kernel.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Jonas Bonn <jonas@southpole.se> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Rich Felker <dalias@libc.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jeff Dike <jdike@addtoit.com> Cc: Chris Zankel <chris@zankel.net> Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | mm: define pte_index as macro for x86 | Arjun Roy
pte_index() is either defined as a macro (e.g. sparc64) or as an inlined function (e.g. x86). vm_insert_pages() depends on pte_index but it is not defined on all platforms (e.g. m68k). To fix compilation of vm_insert_pages() on architectures not providing pte_index(), we perform the following fix: 0. For platforms where it is meaningful, and defined as a macro, no change is needed. 1. For platforms where it is meaningful and defined as an inlined function, and we want to use it with vm_insert_pages(), we define a degenerate macro of the form: #define pte_index pte_index 2. vm_insert_pages() checks for the existence of a pte_index macro definition. If found, it implements a batched insert. If not found, it devolves to calling vm_insert_page() in a loop. This patch implements step 1 for x86. v3 of this patch fixes a compilation warning for an unused method. v2 of this patch moved a macro definition to a more readable location. Signed-off-by: Arjun Roy <arjunroy@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: David Miller <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Matthew Wilcox <willy@infradead.org> Cc: Soheil Hassas Yeganeh <soheil@google.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Link: http://lkml.kernel.org/r/20200228054714.204424-1-arjunroy.kdev@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
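A self-contained sketch of the "degenerate macro" detection trick described in steps 1 and 2 above; the names and the index computation are stand-ins for the kernel's pte_index()/vm_insert_pages(), not their actual implementations:

    #include <stdio.h>

    static inline unsigned long pte_index(unsigned long addr)
    {
        return (addr >> 12) & 0x1ff;        /* toy page-table index */
    }
    #define pte_index pte_index             /* lets generic code detect that the helper exists */

    static void insert_pages(unsigned long addr)
    {
    #ifdef pte_index
        printf("batched insert starting at pte index %lu\n", pte_index(addr));
    #else
        printf("falling back to one vm_insert_page() per page\n");
    #endif
    }

    int main(void) { insert_pages(0x7f0012345000UL); return 0; }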
2020-04-10 | mm: bring sparc pte_index() semantics inline with other platforms | Arjun Roy
pte_index() on platforms other than sparc return a numerical index. On sparc, it returns a pte_t*. This presents an issue for vm_insert_pages(), which relies on pte_index() to find the offset for a pte within a pmd, for batched inserts. This patch: 1. Modifies pte_index() for sparc to return a numerical index, like other platforms, 2. Defines pte_entry() for sparc which returns a pte_t* (as pte_index() used to), 3. Converts existing sparc callers for pte_index() to use pte_entry(). [sfr@canb.auug.org.au: remove pte_entry and just directly modified pte_offset_kernel instead] Signed-off-by: Arjun Roy <arjunroy@google.com> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Cc: David Miller <davem@davemloft.net> Cc: Matthew Wilcox <willy@infradead.org> Cc: Arjun Roy <arjunroy.kdev@gmail.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Link: http://lkml.kernel.org/r/20200227105045.6b421d9f@canb.auug.org.au Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-10 | mm: hugetlb: optionally allocate gigantic hugepages using cma | Roman Gushchin
Commit 944d9fec8d7a ("hugetlb: add support for gigantic page allocation at runtime") has added the run-time allocation of gigantic pages. However it actually works only at early stages of the system loading, when the majority of memory is free. After some time the memory gets fragmented by non-movable pages, so the chances to find a contiguous 1GB block are getting close to zero. Even dropping caches manually doesn't help a lot. At large scale rebooting servers in order to allocate gigantic hugepages is quite expensive and complex. At the same time keeping some constant percentage of memory in reserved hugepages even if the workload isn't using it is a big waste: not all workloads can benefit from using 1 GB pages. The following solution can solve the problem: 1) On boot time a dedicated cma area* is reserved. The size is passed as a kernel argument. 2) Run-time allocations of gigantic hugepages are performed using the cma allocator and the dedicated cma area In this case gigantic hugepages can be allocated successfully with a high probability, however the memory isn't completely wasted if nobody is using 1GB hugepages: it can be used for pagecache, anon memory, THPs, etc. * On a multi-node machine a per-node cma area is allocated on each node. Following gigantic hugetlb allocation are using the first available numa node if the mask isn't specified by a user. Usage: 1) configure the kernel to allocate a cma area for hugetlb allocations: pass hugetlb_cma=10G as a kernel argument 2) allocate hugetlb pages as usual, e.g. echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages If the option isn't enabled or the allocation of the cma area failed, the current behavior of the system is preserved. x86 and arm-64 are covered by this patch, other architectures can be trivially added later. The patch contains clean-ups and fixes proposed and implemented by Aslan Bakirov and Randy Dunlap. It also contains ideas and suggestions proposed by Rik van Riel, Michal Hocko and Mike Kravetz. Thanks! Signed-off-by: Roman Gushchin <guro@fb.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Andreas Schaufler <andreas.schaufler@gmx.de> Acked-by: Mike Kravetz <mike.kravetz@oracle.com> Acked-by: Michal Hocko <mhocko@kernel.org> Cc: Aslan Bakirov <aslan@fb.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Rik van Riel <riel@surriel.com> Cc: Joonsoo Kim <js1304@gmail.com> Link: http://lkml.kernel.org/r/20200407163840.92263-3-guro@fb.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-04-11 | arch: nios2: remove 'resetvalue' property | Alexandru Ardelean
The 'altr,pio-1.0' driver does not handle the 'resetvalue', so remove it. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
2020-04-11 | arch: nios2: rename 'altr,gpio-bank-width' -> 'altr,ngpio' | Alexandru Ardelean
There is no more 'altr,gpio-bank-width' in the 'altr,pio-1.0' driver. There is a 'altr,ngpio' which is what the property wants to configure. This change updates all occurrences of 'altr,gpio-bank-width' to 'altr,ngpio'. Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
2020-04-11 | arch: nios2: Enable the common clk subsystem on Nios2 | Dragos Bogdan
This patch adds support for common clock framework on Nios2. Clock framework is commonly used in many drivers, and this patch makes it available for the entire architecture, not just on a per-driver basis. Signed-off-by: Beniamin Bia <beniamin.bia@analog.com> Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com> Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
2020-04-10 | Merge tag 'acpi-5.7-rc1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm | Linus Torvalds
Pull more ACPI updates from Rafael Wysocki:
 "These prevent a false-positive static checker warning from triggering in the ACPI EC driver (Rafael Wysocki), fix white space in an ACPI document (Vilhelm Prytz) and add static annotation to one variable (Jason Yan)"

* tag 'acpi-5.7-rc1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI, x86/boot: make acpi_nobgrt static
  Documentation: firmware-guide: ACPI: fix table alignment in namespace.rst
  ACPI: EC: Fix up fast path check in acpi_ec_add()
2020-04-10 | Merge tag 's390-5.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux | Linus Torvalds
Pull more s390 updates from Vasily Gorbik:
 "Second round of s390 fixes and features for 5.7:
  - The rest of fallthrough; annotations conversion
  - Couple of fixes for ADD uevents in the common I/O layer
  - Minor refactoring of the queued direct I/O code"

* tag 's390-5.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/cio: generate delayed uevent for vfio-ccw subchannels
  s390/cio: avoid duplicated 'ADD' uevents
  s390/qdio: clear DSCI early for polling drivers
  s390/qdio: inline shared_ind()
  s390/qdio: remove cdev from init_data
  s390/qdio: allow for non-contiguous SBAL array in init_data
  zfcp: inline zfcp_qdio_setup_init_data()
  s390/qdio: cleanly split alloc and establish
  s390/mm: use fallthrough;
2020-04-10 | Merge branches 'acpi-ec' and 'acpi-x86' | Rafael J. Wysocki
* acpi-ec:
  ACPI: EC: Fix up fast path check in acpi_ec_add()

* acpi-x86:
  ACPI, x86/boot: make acpi_nobgrt static