Commit log for path: root/arch
2018-07-30  Merge branch 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull locking fixes from Ingo Molnar:
 "A paravirt UP-patching fix, and an I2C MUX driver lockdep warning fix"

* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/pvqspinlock/x86: Use LOCK_PREFIX in __pv_queued_spin_unlock() assembly code
  i2c/mux, locking/core: Annotate the nested rt_mutex usage
  locking/rtmutex: Allow specifying a subclass for nested locking
2018-07-30  Merge branch 'efi-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)

Pull EFI fix from Ingo Molnar:
 "A UEFI variables fix for SEV guests"

* 'efi-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/efi: Access EFI MMIO data as unencrypted when SEV is active
2018-07-30  x86/apic: Trivial coding style fixes  (Yi Wang)

There is inconsistent indenting in calibrate_APIC_clock() and activate_managed(). Remove the surplus TABs.

Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: hpa@zytor.com
Cc: douly.fnst@cn.fujitsu.com
Cc: jgross@suse.com
Cc: ville.syrjala@linux.intel.com
Cc: len.brown@intel.com
Cc: gregkh@linuxfoundation.org
Cc: zhong.weidong@zte.com.cn
Link: https://lkml.kernel.org/r/1532672103-32250-1-git-send-email-wang.yi59@zte.com.cn
2018-07-30  x86/platform/UV: Mark memblock related init code and data correctly  (Dou Liyang)

parse_mem_block_size() and mem_block_size are only used during init. Mark them accordingly.

Fixes: d7609f4210cb ("x86/platform/UV: Add kernel parameter to set memory block size")
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: hpa@zytor.com
Cc: Mike Travis <mike.travis@hpe.com>
Cc: Andrew Banman <andrew.banman@hpe.com>
Link: https://lkml.kernel.org/r/20180730075947.23023-1-douly.fnst@cn.fujitsu.com
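For illustration, a minimal sketch of what such annotations look like, assuming hypothetical signatures and an invented parameter name (the real declarations live in the UV platform code):

    /* Sketch only: __initdata and __init mark both symbols as
     * init-only, so the kernel can discard them after boot. */
    static unsigned long mem_block_size __initdata = 256UL << 20;

    static int __init parse_mem_block_size(char *ptr)
    {
            unsigned long size = memparse(ptr, NULL);   /* accepts "2G" etc. */

            if (size)
                    mem_block_size = size;
            return 0;
    }
    early_param("uv_memblksize", parse_mem_block_size);  /* name assumed */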
2018-07-30  x86/boot/KASLR: Make local variable mem_limit static  (zhong jiang)

Fix the following sparse warning:

  arch/x86/boot/compressed/kaslr.c:102:20: warning: symbol 'mem_limit' was not declared. Should it be static?

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: <gregkh@linuxfoundation.org>
Link: https://lkml.kernel.org/r/1532958273-47725-1-git-send-email-zhongjiang@huawei.com
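The whole fix is a storage-class change; sketched before/after (type and initializer elided here):

    /* Before: external linkage, so sparse asks "should it be static?" */
    unsigned long mem_limit;

    /* After: file-local; same behaviour, warning gone */
    static unsigned long mem_limit;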
2018-07-30  MIPS: mscc: ocelot: add interrupt controller properties to GPIO controller  (Quentin Schulz)

The GPIO controller also serves as an interrupt controller for events on the GPIOs it handles. An interrupt occurs whenever a GPIO line has changed.

Signed-off-by: Quentin Schulz <quentin.schulz@bootlin.com>
Acked-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20015/
Cc: robh+dt@kernel.org
Cc: mark.rutland@arm.com
Cc: ralf@linux-mips.org
Cc: jhogan@kernel.org
Cc: linux-gpio@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: thomas.petazzoni@bootlin.com
2018-07-30  x86/kvmclock: Mark kvm_get_preset_lpj() as __init  (Dou Liyang)

kvm_get_preset_lpj() is only called from kvmclock_init(), so mark it __init as well.

Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: kvm@vger.kernel.org
Link: https://lkml.kernel.org/r/20180730075421.22830-3-douly.fnst@cn.fujitsu.com
2018-07-30  x86/tsc: Consolidate init code  (Dou Liyang)

Split out duplicated code from tsc_early_init() and tsc_init() into a common helper and fix up some comment typos.

[ tglx: Massaged changelog and renamed function ]

Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20180730075421.22830-2-douly.fnst@cn.fujitsu.com
2018-07-30  MIPS: generic: Select MIPS_AUTO_PFN_OFFSET  (Paul Burton)

Enable CONFIG_MIPS_AUTO_PFN_OFFSET for the generic platform, allowing it to avoid wasted book-keeping for pages with addresses lower than the physical base address of memory. This has a minimal impact on kernel text size, with 64r6el_defconfig gaining 0.1% in size as reported by bloat-o-meter:

  add/remove: 4/1 grow/shrink: 345/13 up/down: 9017/-392 (8625)
  Function                 old     new   delta
  pcpu_setup_first_chunk  1444    1780    +336
  pcpu_alloc_first_chunk   864    1136    +272
  start_kernel            1064    1288    +224
  initcall_blacklist       224     372    +148
  try_fill_recv           2088    2184     +96
  ...
  Total: Before=8457273, After=8465898, chg +0.10%

The gain for systems with large offsets to physical memory, and the ability to continue using generic kernels on such systems, seems well worth this small cost.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Suggested-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
Patchwork: https://patchwork.linux-mips.org/patch/20049/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
2018-07-30  MIPS: Allow auto-detection of ARCH_PFN_OFFSET & PHYS_OFFSET  (Paul Burton)

On systems where physical memory begins at a non-zero address, defining PHYS_OFFSET (which influences ARCH_PFN_OFFSET) can save us time & memory by avoiding book-keeping for pages from address zero to the start of memory. Some MIPS platforms already make use of this, but with the definition of PHYS_OFFSET being a compile-time constant it hasn't been possible to enable this optimization for a kernel which may run on systems with varying physical memory base addresses.

Introduce a new Kconfig option, CONFIG_MIPS_AUTO_PFN_OFFSET, which, when enabled, makes ARCH_PFN_OFFSET a variable & detects it from the boot memory map (which for example may have been populated from DT). The relationship with PHYS_OFFSET is reversed, with PHYS_OFFSET now being based on ARCH_PFN_OFFSET. This is because ARCH_PFN_OFFSET is used far more often, so avoiding the need for runtime calculation gives us a smaller impact on kernel text size (0.1% rather than 0.15% for 64r6el_defconfig).

Signed-off-by: Paul Burton <paul.burton@mips.com>
Suggested-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
Patchwork: https://patchwork.linux-mips.org/patch/20048/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
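A sketch of the reversed relationship the changelog describes (not the verbatim header; treating ARCH_PFN_OFFSET as a runtime variable is exactly the new behaviour, but the surrounding details are assumed):

    #ifdef CONFIG_MIPS_AUTO_PFN_OFFSET
    extern unsigned long ARCH_PFN_OFFSET;   /* detected from the boot memory map */
    # define PHYS_OFFSET ((phys_addr_t)ARCH_PFN_OFFSET << PAGE_SHIFT)
    #else
    # define PHYS_OFFSET _AC(0, UL)         /* compile-time constant, as before */
    #endif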
2018-07-30  MIPS: Fix ISA virt/bus conversion for non-zero PHYS_OFFSET  (Paul Burton)

isa_virt_to_bus() & isa_bus_to_virt() claim to treat ISA bus addresses as being identical to physical addresses, but they fail to do so in the presence of a non-zero PHYS_OFFSET. Correct this by having them use virt_to_phys() & phys_to_virt(), which consolidates the calculations to one place & ensures that ISA bus addresses do indeed match physical addresses.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20047/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
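After this consolidation the conversions are essentially trivial wrappers; a sketch of the resulting shape:

    static inline unsigned long isa_virt_to_bus(volatile void *address)
    {
            return virt_to_phys(address);   /* PHYS_OFFSET handled here */
    }

    static inline void *isa_bus_to_virt(unsigned long address)
    {
            return phys_to_virt(address);
    }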
2018-07-30  MIPS: Make (UN)CAC_ADDR() PHYS_OFFSET-agnostic  (Paul Burton)

Converting an address between cached & uncached (typically addresses in (c)kseg0 & (c)kseg1, or 2 xkphys regions) should not depend upon PHYS_OFFSET in any way - we're converting from a virtual address in one unmapped region to a virtual address in another unmapped region.

For some reason our CAC_ADDR() & UNCAC_ADDR() macros make use of PAGE_OFFSET, which typically includes PHYS_OFFSET. This means that platforms with a non-zero PHYS_OFFSET typically have to work around miscalculation by these two macros by also defining UNCAC_BASE to a value that isn't really correct.

An attempt was previously made to address this with commit 3f4579252aa1 ("MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"), which was later undone by commit ed3ce16c3d2b ("Revert "MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET""), which also introduced the ar7 workaround. That attempt at a fix was roughly equivalent, but essentially caused the CAC_ADDR() & UNCAC_ADDR() macros to cancel out PHYS_OFFSET by adding & then subtracting it again. In his revert Leonid is correct that using PHYS_OFFSET makes no sense in the context of these macros, but he appears to have missed its inclusion via PAGE_OFFSET, which means PHYS_OFFSET actually had an effect after the revert rather than before it.

Here we fix this by modifying CAC_ADDR() & UNCAC_ADDR() to stop using PAGE_OFFSET (& thus PHYS_OFFSET), instead using __pa() & __va() along with UNCAC_BASE. For UNCAC_ADDR(), __pa() will convert a cached address to a physical address which we can simply use as an offset from UNCAC_BASE to obtain an address in the uncached region. For CAC_ADDR() we can undo the effect of UNCAC_ADDR() by subtracting UNCAC_BASE and using __va() on the result.

With this change made, remove definitions of UNCAC_BASE from the ar7 & pic32 platforms, which appear to have defined them only to work around this problem.

Signed-off-by: Paul Burton <paul.burton@mips.com>
References: 3f4579252aa1 ("MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET")
References: ed3ce16c3d2b ("Revert "MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"")
Patchwork: https://patchwork.linux-mips.org/patch/20046/
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
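Following the changelog's own description, the macros end up roughly as (sketch, not verbatim):

    /* cached virtual -> uncached virtual: __pa() strips PHYS_OFFSET, so
     * the physical address is a plain offset from UNCAC_BASE */
    #define UNCAC_ADDR(addr)   (UNCAC_BASE + __pa(addr))

    /* uncached virtual -> cached virtual: undo the above */
    #define CAC_ADDR(addr)     ((unsigned long)__va((addr) - UNCAC_BASE))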
2018-07-30  arm64: kexec: machine_kexec should call __flush_icache_range  (Dave Kleikamp)

machine_kexec() flushes the reboot_code_buffer from the icache after stopping the other CPUs. Commit 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings") added an IPI call to flush_icache_range(), which causes a hang here, so replace the call with __flush_icache_range().

Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
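Sketched, the substitution inside machine_kexec() (the argument expressions here are assumptions for illustration):

    /* The other CPUs are already stopped; the IPI-based
     * flush_icache_range() would wait on them forever, so use the
     * non-IPI variant and flush locally. */
    __flush_icache_range((uintptr_t)reboot_code_buffer,
                         (uintptr_t)reboot_code_buffer +
                         arm64_relocate_new_kernel_size);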
2018-07-30  ARC: [plat-eznps] Add missing struct nps_host_reg_aux_dpc  (Ofer Levi)

Fix a compilation issue caused by the missing definition of struct nps_host_reg_aux_dpc.

Fixes: 3f9cd874dcc87 ("ARC: [plat-eznps] avoid toggling of DPC register")
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Ofer Levi <oferle@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2018-07-30  ARC: add SMP_CACHE_BYTES value validation  (Eugeniy Paltsev)

Check that SMP_CACHE_BYTES (and hence ARCH_DMA_MINALIGN) is greater than or equal to any cache line length, by comparing it with the values previously read from the ARC cache BCR registers.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
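A hedged sketch of such a sanity check (the helper and parameter names are assumptions, not ARC's actual identifiers):

    /* Fail loudly if the hardware reports a wider cache line than the
     * compile-time ARCH_DMA_MINALIGN assumption can cover. */
    static void check_cache_line_sizes(unsigned int ic_line_sz,
                                       unsigned int dc_line_sz)
    {
            if (ic_line_sz > SMP_CACHE_BYTES || dc_line_sz > SMP_CACHE_BYTES)
                    panic("SMP_CACHE_BYTES (%d) < detected cache line size\n",
                          SMP_CACHE_BYTES);
    }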
2018-07-30  arm64: svc: Ensure hardirq tracing is updated before return  (Will Deacon)

We always run userspace with interrupts enabled, but with the recent conversion of the syscall entry/exit code to C, we don't inform the hardirq tracing code that interrupts are about to become enabled by virtue of restoring the EL0 SPSR.

This patch ensures that trace_hardirqs_on() is called on the syscall return path when we return to the assembly code with interrupts still disabled.

Fixes: f37099b6992a ("arm64: convert syscall trace logic to C")
Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
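Conceptually the fix is a single well-placed call in the C syscall-exit path (exact location assumed; trace_hardirqs_on() is the standard lockdep/irq-tracing hook):

    /* IRQs are still masked at this point, but ERET will re-enable
     * them by restoring the EL0 SPSR, so tell the tracer now. */
    trace_hardirqs_on();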
2018-07-30  KVM: s390: Beautify skey enable check  (Janosch Frank)

Let's introduce an explicit check if skeys have already been enabled for the vcpu, so we don't have to check the mm context if we don't have the storage key facility. This lets us check for enablement without having to take the mm semaphore and thus speeds up skey emulation.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Farhan Ali <alifm@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2018-07-30  powerpc/44x: Mark mmu_init_secondary() as __init  (Alexey Spirkov)

mmu_init_secondary() calls ppc44x_pin_tlb() which is marked __init, leading to a warning:

  The function mmu_init_secondary() references
  the function __init ppc44x_pin_tlb().

There's no CPU hotplug support on 44x so mmu_init_secondary() will only be called at boot. Therefore we should mark it as __init.

Signed-off-by: Alexey Spirkov <alexeis@astrosoft.ru>
[mpe: Flesh out change log details]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc/mm: Don't report PUDs as memory leaks when using kmemleak  (Michael Ellerman)

Paul Menzel reported that kmemleak was producing reports such as:

  unreferenced object 0xc0000000f8b80000 (size 16384):
    comm "init", pid 1, jiffies 4294937416 (age 312.240s)
    hex dump (first 32 bytes):
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<00000000d997deb7>] __pud_alloc+0x80/0x190
      [<0000000087f2e8a3>] move_page_tables+0xbac/0xdc0
      [<00000000091e51c2>] shift_arg_pages+0xc0/0x210
      [<00000000ab88670c>] setup_arg_pages+0x22c/0x2a0
      [<0000000060871529>] load_elf_binary+0x41c/0x1648
      [<00000000ecd9d2d4>] search_binary_handler.part.11+0xbc/0x280
      [<0000000034e0cdd7>] __do_execve_file.isra.13+0x73c/0x940
      [<000000005f953a6e>] sys_execve+0x58/0x70
      [<000000009700a858>] system_call+0x5c/0x70

indicating that a PUD was being leaked.

However what's really happening is that kmemleak is not able to recognise the references from the PGD to the PUD, because they are not fully qualified pointers. We can confirm that in xmon, eg:

Find the task struct for pid 1, "init":

  0:mon> P
       task_struct     ->thread.ksp    PID   PPID S  P CMD
  c0000001fe7c0000 c0000001fe803960      1      0 S 13 systemd

Dump virtual address 0 to find the PGD:

  0:mon> dv 0 c0000001fe7c0000
  pgd  @ 0xc0000000f8b01000

Dump the memory of the PGD:

  0:mon> d c0000000f8b01000
  c0000000f8b01000 00000000f8b90000 0000000000000000  |................|
  c0000000f8b01010 0000000000000000 0000000000000000  |................|
  c0000000f8b01020 0000000000000000 0000000000000000  |................|
  c0000000f8b01030 0000000000000000 00000000f8b80000  |................|
                                    ^^^^^^^^^^^^^^^^

There we can see the reference to our supposedly leaked PUD. But because it's missing the leading 0xc, kmemleak won't recognise it.

We can confirm it's still in use by translating an address that is mapped via it:

  0:mon> dv 7fff94000000 c0000001fe7c0000
  pgd  @ 0xc0000000f8b01000
  pgdp @ 0xc0000000f8b01038 = 0x00000000f8b80000 <--
  pudp @ 0xc0000000f8b81ff8 = 0x00000000037c4000
  pmdp @ 0xc0000000037c5ca0 = 0x00000000fbd89000
  ptep @ 0xc0000000fbd89000 = 0xc0800001d5ce0386
  Maps physical address = 0x00000001d5ce0000
  Flags = Accessed Dirty Read Write

The fix is fairly simple. We need to tell kmemleak to ignore PUD allocations and never report them as leaks. We can also tell it not to scan the PGD, because it will never find pointers in there. However it will still notice if we allocate a PGD and then leak it.

Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
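A sketch of the described annotation (the allocation context is assumed; kmemleak_ignore() and kmemleak_no_scan() are the standard kmemleak hooks for exactly this situation):

    #include <linux/kmemleak.h>

    pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
    {
            /* pud_cache is an assumed name for the backing slab */
            pud_t *pud = kmem_cache_alloc(pud_cache, GFP_KERNEL);

            /* PGD entries reference PUDs without the leading 0xc, so
             * kmemleak can never see them: never report PUDs as leaks.
             * The PGD side would use kmemleak_no_scan() so its contents
             * are not scanned for pointers. */
            kmemleak_ignore(pud);
            return pud;
    }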
2018-07-30  powerpc: split asm/tlbflush.h  (Christophe Leroy)

Split asm/tlbflush.h into:
  asm/nohash/tlbflush.h
  asm/book3s/32/tlbflush.h
  asm/book3s/64/tlbflush.h (already existing)

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: remove unnecessary inclusion of asm/tlbflush.h  (Christophe Leroy)

asm/tlbflush.h is only needed for:
- using functions xxx_flush_tlb_xxx()
- using MMU_NO_CONTEXT
- including asm-generic/pgtable.h

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc/44x: remove page.h from mmu-44x.h  (Christophe Leroy)

mmu-44x.h doesn't need asm/page.h if PAGE_SHIFT is replaced by CONFIG_PPC_XX_PAGES.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc/nohash: fix hash related comments in pgtable.h  (Christophe Leroy)

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: fix includes in asm/processor.h  (Christophe Leroy)

Remove superfluous includes and add missing ones.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc/book3s: Remove PPC_PIN_SIZE  (Christophe Leroy)

PPC_PIN_SIZE is specific to the 44x and is defined in mmu.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: declare set_breakpoint() static  (Christophe Leroy)

set_breakpoint() is only used in process.c, so make it static.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: remove superfluous inclusions of asm/fixmap.h  (Christophe Leroy)

Files not using fixmap constants or functions don't need asm/fixmap.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: clean inclusions of asm/feature-fixups.h  (Christophe Leroy)

Files not using feature fixups don't need asm/feature-fixups.h; files using them do.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: clean the inclusion of stringify.h  (Christophe Leroy)

Only include linux/stringify.h in files using __stringify().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: move ASM_CONST and stringify_in_c() into asm-const.h  (Christophe Leroy)

This patch moves ASM_CONST() and stringify_in_c() into a dedicated asm-const.h, then cleans all related inclusions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: asm-compat.h should include asm-const.h]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
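Roughly what the new header provides (paraphrased from the long-standing powerpc definitions; treat this as a sketch rather than the file verbatim):

    /* asm-const.h: constants shareable between C and assembly */
    #ifdef __ASSEMBLY__
    #  define ASM_CONST(x)          x
    #  define stringify_in_c(...)   __VA_ARGS__
    #else
    /* needs linux/stringify.h for __stringify() */
    #  define __ASM_CONST(x)        x##UL
    #  define ASM_CONST(x)          __ASM_CONST(x)
    #  define stringify_in_c(...)   __stringify(__VA_ARGS__)
    #endif

ASM_CONST() lets the same constant carry a UL suffix in C while staying a bare literal in assembly, which is why it needs a home visible to both.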
2018-07-30  powerpc/405: move PPC405_ERR77 into asm-405.h  (Christophe Leroy)

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: remove unneeded inclusions of cpu_has_feature.h  (Christophe Leroy)

Files not using cpu_has_feature() don't need cpu_has_feature.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  powerpc: remove kdump.h from page.h  (Christophe Leroy)

page.h doesn't need kdump.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-07-30  x86/kexec: Allocate 8k PGDs for PTI  (Joerg Roedel)

Fuzzing the PTI-x86-32 code with trinity showed unhandled kernel paging request oops messages that looked a lot like silent data corruption.

Lots of debugging and testing led to the kexec-32bit code, which is still allocating 4k PGDs when PTI is enabled. But since it uses native_set_pud() to build the page table, it will inevitably call into __pti_set_user_pgtbl(), which writes beyond the allocated 4k page.

Use PGD_ALLOCATION_ORDER to allocate PGDs in the kexec code to fix the issue.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: David H. Gutteridge <dhgutteridge@sympatico.ca>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1532533683-5988-4-git-send-email-joro@8bytes.org
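The described fix in sketch form (PGD_ALLOCATION_ORDER is the existing PTI-aware constant; the surrounding kexec code is assumed):

    /* With PTI the PGD is two pages (kernel half + user half), so
     * don't hard-code a single 4k page for it. */
    pgd_t *pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                           PGD_ALLOCATION_ORDER);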
2018-07-30  x86/mm: Remove in_nmi() warning from vmalloc_fault()  (Joerg Roedel)

It is perfectly okay to take page faults, especially in the vmalloc area, while executing an NMI handler. Remove the warning.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: David H. Gutteridge <dhgutteridge@sympatico.ca>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-mm@kvack.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Waiman Long <llong@redhat.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: joro@8bytes.org
Link: https://lkml.kernel.org/r/1532533683-5988-2-git-send-email-joro@8bytes.org
2018-07-30  ARM: 8785/1: use compiler built-ins for ffs and fls  (Nicolas Pitre)

On ARMv5 and above, it is beneficial to use compiler built-ins such as __builtin_ffs() and __builtin_ctzl() to implement ffs(), __ffs(), fls() and __fls(). The compiler inlines the clz instruction, and even the rbit instruction when available, or provides a constant value when possible.

On ARMv4 the compiler calls out to helper functions for those built-ins, so it is best to keep the open-coded versions in that case.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
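A hedged sketch of the idea (the helper names here are invented; the kernel's real macros differ in detail):

    static inline int sketch_ffs(unsigned int x)    /* 1-based; 0 for x == 0 */
    {
            return __builtin_ffs(x);
    }

    static inline int sketch_fls(unsigned int x)    /* 1-based; 0 for x == 0 */
    {
            return x ? 32 - __builtin_clz(x) : 0;   /* clz(0) is undefined */
    }

    static inline unsigned long sketch___ffs(unsigned long x) /* x != 0 */
    {
            return __builtin_ctzl(x);
    }

On ARMv5+ each of these compiles down to a clz (or rbit + clz) instruction, and folds to a constant when x is known at compile time.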
2018-07-30  ARM: 8784/1: NOMMU: Allow enter in Hyp mode  (Vladimir Murzin)

ARMv8R adds support for the virtualisation extension (with some deviation from v8A). With this patch, hyp-unaware boot code can offload to the kernel the job of putting the HYP state into a sane configuration.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-07-30  ARM: 8783/1: NOMMU: Extend check for VBAR support  (Vladimir Murzin)

ARMv8R adds support for VBAR and updates ID_PFR1 with the new field Sec_frac (bits [23:20]):

  Security fractional field. When the Security field is 0000, determines
  the support for features from the ARMv7 Security Extensions. Permitted
  values are:

  0000  No features from the ARMv7 Security Extensions are implemented.
        This value is not supported in ARMv8 if ID_PFR1 bits [7:4] are zero.
  0001  The implementation includes the VBAR, and the TCR.PD0 and TCR.PD1
        bits.
  0010  As for 0001, plus the ability to access Secure or Non-secure
        physical memory is supported.

  All other values are reserved.

  This field is only valid when ID_PFR1[7:4] == 0, otherwise it holds the
  value 0000.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-07-30  ARM: 8782/1: vfp: clean up arch/arm/vfp/Makefile  (Masahiro Yamada)

Since commit 799c43415442 ("kbuild: thin archives make default for all archs"), $(AR) is used instead of $(LD) to combine object files. The following code in arch/arm/vfp/Makefile:

  LDFLAGS +=--no-warn-mismatch

... is no longer used.

Also, arch/arm/Makefile already guards arch/arm/vfp/ by a boolean symbol, CONFIG_VFP, like this:

  core-$(CONFIG_VFP) += arch/arm/vfp/

So, $(CONFIG_VFP) is always evaluated to y in arch/arm/vfp/Makefile. There is no point in using the pseudo object vfp.o, which never becomes a module. Add all objects to obj-y directly.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-07-30  ARM: 8781/1: Fix Thumb-2 syscall return for binutils 2.29+  (Vincent Whitchurch)

When building the kernel as Thumb-2 with binutils 2.29 or newer, if the assembler has seen the .type directive (via ENDPROC()) for a symbol, it automatically handles the setting of the lowest bit when the symbol is used with ADR. The badr macro on the other hand handles this lowest bit manually. This leads to a jump to a wrong address in the wrong state in the syscall return path:

  Internal error: Oops - undefined instruction: 0 [#2] SMP THUMB2
  Modules linked in:
  CPU: 0 PID: 652 Comm: modprobe Tainted: G      D           4.18.0-rc3+ #8
  PC is at ret_fast_syscall+0x4/0x62
  LR is at sys_brk+0x109/0x128
  pc : [<80101004>]    lr : [<801c8a35>]    psr: 60000013
  Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
  Control: 50c5387d  Table: 9e82006a  DAC: 00000051
  Process modprobe (pid: 652, stack limit = 0x(ptrval))

  80101000 <ret_fast_syscall>:
  80101000:       b672            cpsid   i
  80101002:       f8d9 2008       ldr.w   r2, [r9, #8]
  80101006:       f1b2 4ffe       cmp.w   r2, #2130706432 ; 0x7f000000

  80101184 <local_restart>:
  80101184:       f8d9 a000       ldr.w   sl, [r9]
  80101188:       e92d 0030       stmdb   sp!, {r4, r5}
  8010118c:       f01a 0ff0       tst.w   sl, #240        ; 0xf0
  80101190:       d117            bne.n   801011c2 <__sys_trace>
  80101192:       46ba            mov     sl, r7
  80101194:       f5ba 7fc8       cmp.w   sl, #400        ; 0x190
  80101198:       bf28            it      cs
  8010119a:       f04f 0a00       movcs.w sl, #0
  8010119e:       f3af 8014       nop.w   {20}
  801011a2:       f2af 1ea2       subw    lr, pc, #418    ; 0x1a2

To fix this, add a new symbol name which doesn't have ENDPROC used on it and use that with badr. We can't remove the badr usage since that would cause breakage with older binutils.

Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-07-30  KVM: s390: Add skey emulation fault handling  (Janosch Frank)

When doing skey emulation for huge guests, we now need to fault in pmds, as we don't have PGSTEs anymore to store them in when we do not have valid table entries.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
2018-07-30  s390/mm: Add huge pmd storage key handling  (Janosch Frank)

Storage keys for guests with huge page mappings have to be managed in hardware. There are no PGSTEs for PMDs that we could use to retain the guest's logical view of the key.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
2018-07-30  s390/mm: Clear skeys for newly mapped huge guest pmds  (Janosch Frank)

Similarly to the pte skey handling, where we set the storage key to the default key for each newly mapped pte, we have to also do that for huge pmds. With the PG_arch_1 flag we keep track of whether the area has already been cleared of its skeys.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-07-30  s390/mm: Clear huge page storage keys on enable_skey  (Dominik Dingel)

When a guest starts using storage keys, we trap and set a default one for its whole valid address space. With this patch we are now able to do that for large pages.

To speed up the storage key insertion, we use __storage_key_init_range(), which in turn will use sske_frame to set multiple storage keys with one instruction. As it has previously been used for debugging, we have to get rid of the default key check and make it quiescing.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
[replaced page_set_storage_key loop with __storage_key_init_range]
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
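Sketched usage of the helper named in the changelog (the surrounding context and the range expression are assumptions):

    /* One sske_frame-backed call per large page instead of a
     * page_set_storage_key() loop over every 4k page in it. */
    unsigned long paddr = page_to_phys(page);

    __storage_key_init_range(paddr, paddr + HPAGE_SIZE - 1);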
2018-07-30  s390/mm: Add huge page dirty sync support  (Janosch Frank)

To do dirty logging with huge pages, we protect huge pmds in the gmap. When they are written to, we unprotect them and mark them dirty.

We introduce the function gmap_test_and_clear_dirty_pmd(), which handles dirty sync for huge pages.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
2018-07-30  s390/mm: Add gmap pmd invalidation and clearing  (Janosch Frank)

If the host invalidates a pmd, we also have to invalidate the corresponding gmap pmds, as well as flush them from the TLB. This is necessary, as we don't share the pmd tables between host and guest as we do with ptes.

The clearing part of these three new functions sets a guest pmd entry to _SEGMENT_ENTRY_EMPTY, so the guest will fault on it and we will re-link it.

Flushing the gmap is not necessary in the host's lazy local and csp cases. Both purge the TLB completely.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
2018-07-30  s390/mm: Add gmap pmd notification bit setting  (Janosch Frank)

Like for ptes, we also need invalidation notification for pmds, to make sure the guest lowcore pages are always accessible and to allow the later addition of shadowed pmds.

With PMDs we do not have PGSTEs or some other bits we could use in the host PMD. Instead we pick one of the free bits in the gmap PMD. Every time a host pmd is invalidated, we check whether the respective gmap PMD has the bit set and, in that case, fire up the notifier.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
2018-07-30  s390/mm: Add gmap pmd linking  (Janosch Frank)

Let's allow pmds to be linked into gmap for the upcoming s390 KVM huge page support.

Before this patch we copied the full userspace pmd entry. This is not correct, as it contains SW defined bits that might be interpreted differently in the GMAP context. Now we only copy over all hardware relevant information, leaving out the software bits.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
2018-07-30  s390/mm: Abstract gmap notify bit setting  (Janosch Frank)

Currently we use the software PGSTE bits PGSTE_IN_BIT and PGSTE_VSIE_BIT to notify before an invalidation occurs on a prefix page or a VSIE page respectively. Both bits are pgste specific, but are used when protecting a memory range.

Let's introduce abstract GMAP_NOTIFY_* bits that will be realized into the respective bits when gmap DAT table entries are protected.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
2018-07-30  s390/mm: Make gmap_protect_range more modular  (Janosch Frank)

This patch reworks the gmap_protect_range logic and extracts the pte handling into its own function. We also now walk to the pmd and make it accessible in the function for later use. This way we can add huge page handling logic more easily.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>