path: root/arch
Age  Commit message  Author
2019-02-28  x86/boot/compressed/64: Do not read legacy ROM on EFI system  (Kirill A. Shutemov)
EFI systems do not necessarily provide a legacy ROM. If the ROM is missing the memory is not mapped at all. Trying to dereference values in the legacy ROM area leads to a crash on Macbook Pro. Only look for values in the legacy ROM area for non-EFI system. Fixes: 3548e131ec6a ("x86/boot/compressed/64: Find a place for 32-bit trampoline") Reported-by: Pitam Mitra <pitamm@gmail.com> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Bockjoo Kim <bockjoo@phys.ufl.edu> Cc: bp@alien8.de Cc: hpa@zytor.com Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190219075224.35058-1-kirill.shutemov@linux.intel.com Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=202351
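The fix amounts to gating the legacy-ROM probe on whether an EFI loader started the kernel. A minimal sketch of that check (a simplified illustration, not the exact code in arch/x86/boot/compressed; the signature constants follow the convention used elsewhere in the x86 boot code):

/*
 * Illustrative sketch only: dereference the legacy BIOS/EBDA pointers below
 * 1 MiB only when the kernel was not started by an EFI loader, since EFI
 * firmware may leave that region unmapped.
 */
#include <string.h>
#include <stdbool.h>

#define EFI32_LOADER_SIGNATURE	"EL32"
#define EFI64_LOADER_SIGNATURE	"EL64"

static bool booted_via_efi(const char *efi_loader_signature)
{
	return !memcmp(efi_loader_signature, EFI32_LOADER_SIGNATURE, 4) ||
	       !memcmp(efi_loader_signature, EFI64_LOADER_SIGNATURE, 4);
}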
2019-02-28  x86, retpolines: Raise limit for generating indirect calls from switch-case  (Daniel Borkmann)
From networking side, there are numerous attempts to get rid of indirect calls in fast-path wherever feasible in order to avoid the cost of retpolines, for example, just to name a few: * 283c16a2dfd3 ("indirect call wrappers: helpers to speed-up indirect calls of builtin") * aaa5d90b395a ("net: use indirect call wrappers at GRO network layer") * 028e0a476684 ("net: use indirect call wrappers at GRO transport layer") * 356da6d0cde3 ("dma-mapping: bypass indirect calls for dma-direct") * 09772d92cd5a ("bpf: avoid retpoline for lookup/update/delete calls on maps") * 10870dd89e95 ("netfilter: nf_tables: add direct calls for all builtin expressions") [...] Recent work on XDP from Björn and Magnus additionally found that manually transforming the XDP return code switch statement with more than 5 cases into if-else combination would result in a considerable speedup in XDP layer due to avoidance of indirect calls in CONFIG_RETPOLINE enabled builds. On i40e driver with XDP prog attached, a 20-26% speedup has been observed [0]. Aside from XDP, there are many other places later in the networking stack's critical path with similar switch-case processing. Rather than fixing every XDP-enabled driver and locations in stack by hand, it would be good to instead raise the limit where gcc would emit expensive indirect calls from the switch under retpolines and stick with the default as-is in case of !retpoline configured kernels. This would also have the advantage that for archs where this is not necessary, we let compiler select the underlying target optimization for these constructs and avoid potential slow-downs by if-else hand-rewrite. In case of gcc, this setting is controlled by case-values-threshold which has an architecture global default that selects 4 or 5 (latter if target does not have a case insn that compares the bounds) where some arch back ends like arm64 or s390 override it with their own target hooks, for example, in gcc commit db7a90aa0de5 ("S/390: Disable prediction of indirect branches") the threshold pretty much disables jump tables by limit of 20 under retpoline builds. Comparing gcc's and clang's default code generation on x86-64 under O2 level with retpoline build results in the following outcome for 5 switch cases: * gcc with -mindirect-branch=thunk-inline -mindirect-branch-register: # gdb -batch -ex 'disassemble dispatch' ./c-switch Dump of assembler code for function dispatch: 0x0000000000400be0 <+0>: cmp $0x4,%edi 0x0000000000400be3 <+3>: ja 0x400c35 <dispatch+85> 0x0000000000400be5 <+5>: lea 0x915f8(%rip),%rdx # 0x4921e4 0x0000000000400bec <+12>: mov %edi,%edi 0x0000000000400bee <+14>: movslq (%rdx,%rdi,4),%rax 0x0000000000400bf2 <+18>: add %rdx,%rax 0x0000000000400bf5 <+21>: callq 0x400c01 <dispatch+33> 0x0000000000400bfa <+26>: pause 0x0000000000400bfc <+28>: lfence 0x0000000000400bff <+31>: jmp 0x400bfa <dispatch+26> 0x0000000000400c01 <+33>: mov %rax,(%rsp) 0x0000000000400c05 <+37>: retq 0x0000000000400c06 <+38>: nopw %cs:0x0(%rax,%rax,1) 0x0000000000400c10 <+48>: jmpq 0x400c90 <fn_3> 0x0000000000400c15 <+53>: nopl (%rax) 0x0000000000400c18 <+56>: jmpq 0x400c70 <fn_2> 0x0000000000400c1d <+61>: nopl (%rax) 0x0000000000400c20 <+64>: jmpq 0x400c50 <fn_1> 0x0000000000400c25 <+69>: nopl (%rax) 0x0000000000400c28 <+72>: jmpq 0x400c40 <fn_0> 0x0000000000400c2d <+77>: nopl (%rax) 0x0000000000400c30 <+80>: jmpq 0x400cb0 <fn_4> 0x0000000000400c35 <+85>: push %rax 0x0000000000400c36 <+86>: callq 0x40dd80 <abort> End of assembler dump. 
* clang with -mretpoline emitting search tree: # gdb -batch -ex 'disassemble dispatch' ./c-switch Dump of assembler code for function dispatch: 0x0000000000400b30 <+0>: cmp $0x1,%edi 0x0000000000400b33 <+3>: jle 0x400b44 <dispatch+20> 0x0000000000400b35 <+5>: cmp $0x2,%edi 0x0000000000400b38 <+8>: je 0x400b4d <dispatch+29> 0x0000000000400b3a <+10>: cmp $0x3,%edi 0x0000000000400b3d <+13>: jne 0x400b52 <dispatch+34> 0x0000000000400b3f <+15>: jmpq 0x400c50 <fn_3> 0x0000000000400b44 <+20>: test %edi,%edi 0x0000000000400b46 <+22>: jne 0x400b5c <dispatch+44> 0x0000000000400b48 <+24>: jmpq 0x400c20 <fn_0> 0x0000000000400b4d <+29>: jmpq 0x400c40 <fn_2> 0x0000000000400b52 <+34>: cmp $0x4,%edi 0x0000000000400b55 <+37>: jne 0x400b66 <dispatch+54> 0x0000000000400b57 <+39>: jmpq 0x400c60 <fn_4> 0x0000000000400b5c <+44>: cmp $0x1,%edi 0x0000000000400b5f <+47>: jne 0x400b66 <dispatch+54> 0x0000000000400b61 <+49>: jmpq 0x400c30 <fn_1> 0x0000000000400b66 <+54>: push %rax 0x0000000000400b67 <+55>: callq 0x40dd20 <abort> End of assembler dump. For sake of comparison, clang without -mretpoline: # gdb -batch -ex 'disassemble dispatch' ./c-switch Dump of assembler code for function dispatch: 0x0000000000400b30 <+0>: cmp $0x4,%edi 0x0000000000400b33 <+3>: ja 0x400b57 <dispatch+39> 0x0000000000400b35 <+5>: mov %edi,%eax 0x0000000000400b37 <+7>: jmpq *0x492148(,%rax,8) 0x0000000000400b3e <+14>: jmpq 0x400bf0 <fn_0> 0x0000000000400b43 <+19>: jmpq 0x400c30 <fn_4> 0x0000000000400b48 <+24>: jmpq 0x400c10 <fn_2> 0x0000000000400b4d <+29>: jmpq 0x400c20 <fn_3> 0x0000000000400b52 <+34>: jmpq 0x400c00 <fn_1> 0x0000000000400b57 <+39>: push %rax 0x0000000000400b58 <+40>: callq 0x40dcf0 <abort> End of assembler dump. Raising the cases to a high number (e.g. 100) will still result in similar code generation pattern with clang and gcc as above, in other words clang generally turns off jump table emission by having an extra expansion pass under retpoline build to turn indirectbr instructions from their IR into switch instructions as a built-in -mno-jump-table lowering of a switch (in this case, even if IR input already contained an indirect branch). For gcc, adding --param=case-values-threshold=20 as in similar fashion as s390 in order to raise the limit for x86 retpoline enabled builds results in a small vmlinux size increase of only 0.13% (before=18,027,528 after=18,051,192). For clang this option is ignored due to i) not being needed as mentioned and ii) not having above cmdline parameter. Non-retpoline-enabled builds with gcc continue to use the default case-values-threshold setting, so nothing changes here. [0] https://lore.kernel.org/netdev/20190129095754.9390-1-bjorn.topel@gmail.com/ and "The Path to DPDK Speeds for AF_XDP", LPC 2018, networking track: - http://vger.kernel.org/lpc_net2018_talks/lpc18_pres_af_xdp_perf-v3.pdf - http://vger.kernel.org/lpc_net2018_talks/lpc18_paper_af_xdp_perf-v2.pdf Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Björn Töpel <bjorn.topel@intel.com> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: netdev@vger.kernel.org Cc: David S. 
Miller <davem@davemloft.net> Cc: Magnus Karlsson <magnus.karlsson@intel.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Link: https://lkml.kernel.org/r/20190221221941.29358-1-daniel@iogearbox.net
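For reference, the disassembly quoted above corresponds to a small test program of roughly the following shape (a plausible reconstruction, not the exact source used): once a switch has five or more cases, gcc's default case-values-threshold lowers it to a jump table, i.e. an indirect branch that retpolines make expensive, and the kernel-side change raises that threshold via the --param=case-values-threshold=20 option only for CONFIG_RETPOLINE gcc builds.

#include <stdlib.h>

/* fn_0..fn_4 stand in for the per-case handlers seen in the dumps above. */
void fn_0(void); void fn_1(void); void fn_2(void); void fn_3(void); void fn_4(void);

void dispatch(int op)
{
	switch (op) {
	case 0: fn_0(); break;
	case 1: fn_1(); break;
	case 2: fn_2(); break;
	case 3: fn_3(); break;
	case 4: fn_4(); break;
	default: abort();
	}
}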
2019-02-28  x86/hyper-v: Fix definition of HV_MAX_FLUSH_REP_COUNT  (Lan Tianyu)
The max flush rep count of HvFlushGuestPhysicalAddressList hypercall is equal with how many entries of union hv_gpa_page_range can be populated into the input parameter page. The code lacks parenthesis around PAGE_SIZE - 2 * sizeof(u64) which results in bogus computations. Add them. Fixes: cc4edae4b924 ("x86/hyper-v: Add HvFlushGuestAddressList hypercall support") Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: kys@microsoft.com Cc: haiyangz@microsoft.com Cc: sthemmin@microsoft.com Cc: sashal@kernel.org Cc: bp@alien8.de Cc: hpa@zytor.com Cc: gregkh@linuxfoundation.org Cc: devel@linuxdriverproject.org Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190225143114.5149-1-Tianyu.Lan@microsoft.com
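The fix is a one-line parenthesization of the macro. A sketch of the before/after shape (surrounding header context omitted; this is an illustration of the precedence bug, not the literal upstream hunk):

/* Broken: '*' and '/' associate left to right, so only 2 * sizeof(u64) is
 * divided by the entry size and the count comes out near PAGE_SIZE. */
#define HV_MAX_FLUSH_REP_COUNT_BROKEN \
	(PAGE_SIZE - 2 * sizeof(u64) / sizeof(union hv_gpa_page_range))

/* Fixed: number of hv_gpa_page_range entries that fit in the input page
 * after the two u64 header fields. */
#define HV_MAX_FLUSH_REP_COUNT \
	((PAGE_SIZE - 2 * sizeof(u64)) / sizeof(union hv_gpa_page_range))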
2019-02-28  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available  (Lan Tianyu)
Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic, set the x2apic destination mode to physical mode when x2apic is available; the Hyper-V IOMMU driver makes sure CPUs assigned IO-APIC irqs have 8-bit APIC ids. Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-02-28  kernfs, sysfs, cgroup, intel_rdt: Support fs_context  (David Howells)
Make kernfs support superblock creation/mount/remount with fs_context. This requires that sysfs, cgroup and intel_rdt, which are built on kernfs, be made to support fs_context also. Notes: (1) A kernfs_fs_context struct is created to wrap fs_context and the kernfs mount parameters are moved in here (or are in fs_context). (2) kernfs_mount{,_ns}() are made into kernfs_get_tree(). The extra namespace tag parameter is passed in the context if desired (3) kernfs_free_fs_context() is provided as a destructor for the kernfs_fs_context struct, but for the moment it does nothing except get called in the right places. (4) sysfs doesn't wrap kernfs_fs_context since it has no parameters to pass, but possibly this should be done anyway in case someone wants to add a parameter in future. (5) A cgroup_fs_context struct is created to wrap kernfs_fs_context and the cgroup v1 and v2 mount parameters are all moved there. (6) cgroup1 parameter parsing error messages are now handled by invalf(), which allows userspace to collect them directly. (7) cgroup1 parameter cleanup is now done in the context destructor rather than in the mount/get_tree and remount functions. Weirdies: (*) cgroup_do_get_tree() calls cset_cgroup_from_root() with locks held, but then uses the resulting pointer after dropping the locks. I'm told this is okay and needs commenting. (*) The cgroup refcount web. This really needs documenting. (*) cgroup2 only has one root? Add a suggestion from Thomas Gleixner in which the RDT enablement code is placed into its own function. [folded a leak fix from Andrey Vagin] Signed-off-by: David Howells <dhowells@redhat.com> cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> cc: Tejun Heo <tj@kernel.org> cc: Li Zefan <lizefan@huawei.com> cc: Johannes Weiner <hannes@cmpxchg.org> cc: cgroups@vger.kernel.org cc: fenghua.yu@intel.com Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2019-02-28  Merge tag 'perf-core-for-mingo-5.1-20190225' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core  (Ingo Molnar)
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

perf annotate:
  Wei Li:
   - Fix getting source line failure

perf script:
  Andi Kleen:
   - Handle missing fields with -F +...

perf data:
  Jiri Olsa:
   - Prep work to support per-cpu files in a directory.

Intel PT:
  Adrian Hunter:
   - Improve thread_stack__no_call_return()
   - Hide x86 retpolines in thread stacks.
   - exported SQL viewer refactorings, new 'top calls' report..
  Alexander Shishkin:
   - Copy parent's address filter offsets on clone
   - Fix address filters for vmas with non-zero offset. Applies to ARM's CoreSight as well.

python scripts:
  Tony Jones:
   - Python3 support for several 'perf script' python scripts.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-28  Merge branch 'linus' into perf/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-28  Merge branch 'linus' into locking/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-28  crypto: arm64/chacha - fix hchacha_block_neon() for big endian  (Eric Biggers)
On big endian arm64 kernels, the xchacha20-neon and xchacha12-neon self-tests fail because hchacha_block_neon() outputs little endian words but the C code expects native endianness. Fix it to output the words in native endianness (which also makes it match the arm32 version). Fixes: cc7cf991e9eb ("crypto: arm64/chacha20 - add XChaCha20 support") Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-28  crypto: arm64/chacha - fix chacha_4block_xor_neon() for big endian  (Eric Biggers)
The change to encrypt a fifth ChaCha block using scalar instructions caused the chacha20-neon, xchacha20-neon, and xchacha12-neon self-tests to start failing on big endian arm64 kernels. The bug is that the keystream block produced in 32-bit scalar registers is directly XOR'd with the data words, which are loaded and stored in native endianness. Thus in big endian mode the data bytes end up XOR'd with the wrong bytes. Fix it by byte-swapping the keystream words in big endian mode. Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed") Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-28  crypto: x86/poly1305 - Clear key material from stack in SSE2 variant  (Tommi Hirvola)
1-block SSE2 variant of poly1305 stores variables s1..s4 containing key material on the stack. This commit adds missing zeroing of the stack memory. Benchmarks show negligible performance hit (tested on i7-3770). Signed-off-by: Tommi Hirvola <tommi@hirvola.fi> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-27  MIPS: fix memory setup for platforms with PHYS_OFFSET != 0  (Thomas Bogendoerfer)
For platforms, which use a PHYS_OFFSET != 0, symbol _end also contains that offset. So when calling memblock_reserve() for reserving kernel the size argument needs to be adjusted. Fixes: bcec54bf3118 ("mips: switch to NO_BOOTMEM") Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: stable@vger.kernel.org # v4.20+
2019-02-28  powerpc/powernv/ioda: Fix locked_vm counting for memory used by IOMMU tables  (Alexey Kardashevskiy)
We store 2 multilevel tables in iommu_table - one for the hardware and one with the corresponding userspace addresses. Before allocating the tables, the iommu_table_group_ops::get_table_size() hook returns the combined size of the two and VFIO SPAPR TCE IOMMU driver adjusts the locked_vm counter correctly. When the table is actually allocated, the amount of allocated memory is stored in iommu_table::it_allocated_size and used to decrement the locked_vm counter when we release the memory used by the table; .get_table_size() and .create_table() calculate it independently but the result is expected to be the same. However the allocator does not add the userspace table size to .it_allocated_size so when we destroy the table because of VFIO PCI unplug (i.e. VFIO container is gone but the userspace keeps running), we decrement locked_vm by just a half of size of memory we are releasing. To make things worse, since we enabled on-demand allocation of indirect levels, it_allocated_size contains only the amount of memory actually allocated at the table creation time which can just be a fraction. It is not a problem with incrementing locked_vm (as get_table_size() value is used) but it is with decrementing. As the result, we leak locked_vm and may not be able to allocate more IOMMU tables after few iterations of hotplug/unplug. This sets it_allocated_size in the pnv_pci_ioda2_ops::create_table() hook to what pnv_pci_ioda2_get_table_size() returns so from now on we have a single place which calculates the maximum memory a table can occupy. The original meaning of it_allocated_size is somewhat lost now though. We do not ditch it_allocated_size whatsoever here and we do not call get_table_size() from vfio_iommu_spapr_tce.c when decrementing locked_vm as we may have multiple IOMMU groups per container and even though they all are supposed to have the same get_table_size() implementation, there is a small chance for failure or confusion. Fixes: 090bad39b237 ("powerpc/powernv: Add indirect levels to it_userspace") Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-27  Merge tag 'y2038-syscall-abi' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/playground into timers/2038  (Thomas Gleixner)
Pull additional syscall ABI cleanup for y2038 from Arnd Bergmann: This is a follow-up to the y2038 syscall patches already merged in the tip tree. As the final 32-bit RISC-V syscall ABI is still being decided on, this is the last chance to make a few corrections to leave out interfaces based on 32-bit time_t along with the old off_t and rlimit types. The series achieves this in a few steps:
 - A couple of bug fixes for minor regressions I introduced in the original series
 - A couple of older patches from Yury Norov that I had never merged in the past, these fix up the openat/open_by_handle_at and getrlimit/setrlimit syscalls to disallow the old versions of off_t and rlimit.
 - Hiding the deprecated system calls behind an #ifdef in include/uapi/asm-generic/unistd.h
 - Change arch/riscv to drop all these ABIs.
Originally, the plan was to also leave these out on C-Sky, but that now has a glibc port that uses the older interfaces, so we need to leave them in place.
2019-02-27  powerpc/fsl: Fix the flush of branch predictor  (Christophe Leroy)
The commit identified below adds MC_BTB_FLUSH macro only when CONFIG_PPC_FSL_BOOK3E is defined. This results in the following error on some configs (seen several times with kisskb randconfig_defconfig) arch/powerpc/kernel/exceptions-64e.S:576: Error: Unrecognized opcode: `mc_btb_flush' make[3]: *** [scripts/Makefile.build:367: arch/powerpc/kernel/exceptions-64e.o] Error 1 make[2]: *** [scripts/Makefile.build:492: arch/powerpc/kernel] Error 2 make[1]: *** [Makefile:1043: arch/powerpc] Error 2 make: *** [Makefile:152: sub-make] Error 2 This patch adds a blank definition of MC_BTB_FLUSH for other cases. Fixes: 10c5e83afd4a ("powerpc/fsl: Flush the branch predictor at each kernel entry (64bit)") Cc: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Daniel Axtens <dja@axtens.net> Reviewed-by: Diana Craciun <diana.craciun@nxp.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-27  powerpc/powernv: Make opal log only readable by root  (Jordan Niethe)
Currently the opal log is globally readable. It is kernel policy to limit the visibility of physical addresses / kernel pointers to root. Given this and the fact the opal log may contain this information it would be better to limit the readability to root. Fixes: bfc36894a48b ("powerpc/powernv: Add OPAL message log interface") Cc: stable@vger.kernel.org # v3.15+ Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Stewart Smith <stewart@linux.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-27  arm64: dts: rockchip: move QCA6174A wakeup pin into its USB node  (Brian Norris)
Currently, we don't coordinate BT USB activity with our handling of the BT out-of-band wake pin, and instead just use gpio-keys. That causes problems because we have no way of distinguishing wake activity due to a BT device (e.g., mouse) vs. the BT controller (e.g., re-configuring wake mask before suspend). This can cause spurious wake events just because we, for instance, try to reconfigure the host controller's event mask before suspending. We can avoid these synchronization problems by handling the BT wake pin directly in the btusb driver -- for all activity up until BT controller suspend(), we simply listen to normal USB activity (e.g., to know the difference between device and host activity); once we're really ready to suspend the host controller, there should be no more host activity, and only *then* do we unmask the GPIO interrupt. This is already supported by btusb; we just need to describe the wake pin in the right node. We list 2 compatible properties, since both PID/VID pairs show up on Scarlet devices, and they're both essentially identical QCA6174A-based modules. Also note that the polarity was wrong before: Qualcomm implemented WAKE as active high, not active low. We only got away with this because gpio-keys always reconfigured us as bi-directional edge-triggered. Finally, we have an external pull-up and a level-shifter on this line (we didn't notice Qualcomm's polarity in the initial design), so we can't do pull-down. Switch to pull-none. Signed-off-by: Brian Norris <briannorris@chromium.org> Reviewed-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2019-02-26  arm64: dts: qcom: msm8998: Extend TZ reserved memory area  (Marc Gonzalez)
My console locks up as soon as Linux writes to [88800000,88f00000[ AFAIU, that memory area is reserved for trustzone. Extend TZ reserved memory range, to prevent Linux from stepping on trustzone's toes. Cc: stable@vger.kernel.org # 4.20+ Reviewed-by: Sibi Sankar <sibis@codeaurora.org> Fixes: c7833949564ec ("arm64: dts: qcom: msm8998: Add smem related nodes") Signed-off-by: Marc Gonzalez <marc.w.gonzalez@free.fr> Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-02-27  KVM: PPC: Fix compilation when KVM is not enabled  (Paul Mackerras)
Compiling with CONFIG_PPC_POWERNV=y and KVM disabled currently gives an error like this: CC arch/powerpc/kernel/dbell.o In file included from arch/powerpc/kernel/dbell.c:20:0: arch/powerpc/include/asm/kvm_ppc.h: In function ‘xics_on_xive’: arch/powerpc/include/asm/kvm_ppc.h:625:9: error: implicit declaration of function ‘xive_enabled’ [-Werror=implicit-function-declaration] return xive_enabled() && cpu_has_feature(CPU_FTR_HVMODE); ^ cc1: all warnings being treated as errors scripts/Makefile.build:276: recipe for target 'arch/powerpc/kernel/dbell.o' failed make[3]: *** [arch/powerpc/kernel/dbell.o] Error 1 Fix this by making the xics_on_xive() definition conditional on the same symbol (CONFIG_KVM_BOOK3S_64_HANDLER) that determines whether we include <asm/xive.h> or not, since that's the header that defines xive_enabled(). Fixes: 03f953329bd8 ("KVM: PPC: Book3S: Allow XICS emulation to work in nested hosts using XIVE") Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
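A simplified sketch of the resulting shape of asm/kvm_ppc.h (the stub body is an assumption consistent with the description above, not the literal upstream hunk):

#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include <asm/xive.h>

static inline bool xics_on_xive(void)
{
	return xive_enabled() && cpu_has_feature(CPU_FTR_HVMODE);
}
#else
static inline bool xics_on_xive(void)
{
	return false;	/* no XICS-on-XIVE emulation without the handler */
}
#endif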
2019-02-26  arm64: Rename get_thread_info()  (Julien Thierry)
The assembly macro get_thread_info() actually returns a task_struct and is analogous to the current/get_current macro/function. While it could be argued that thread_info sits at the start of task_struct and the intention could have been to return a thread_info, instances of loads from/stores to the address obtained from get_thread_info() use offsets that are generated with offsetof(struct task_struct, [...]). Rename get_thread_info() to state it returns a task_struct. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-02-26  arm64: Remove documentation about TIF_USEDFPU  (Julien Grall)
TIF_USEDFPU is not defined as a thread flag for arm64, so drop it from the documentation. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Julien Grall <julien.grall@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-02-26  powerpc/xmon: Fix opcode being uninitialized in print_insn_powerpc  (Nathan Chancellor)
When building with -Wsometimes-uninitialized, Clang warns: arch/powerpc/xmon/ppc-dis.c:157:7: warning: variable 'opcode' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized] if (cpu_has_feature(CPU_FTRS_POWER9)) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ arch/powerpc/xmon/ppc-dis.c:167:7: note: uninitialized use occurs here if (opcode == NULL) ^~~~~~ arch/powerpc/xmon/ppc-dis.c:157:3: note: remove the 'if' if its condition is always true if (cpu_has_feature(CPU_FTRS_POWER9)) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ arch/powerpc/xmon/ppc-dis.c:132:38: note: initialize the variable 'opcode' to silence this warning const struct powerpc_opcode *opcode; ^ = NULL 1 warning generated. This warning seems to make no sense on the surface because opcode is set to NULL right below this statement. However, there is a comma instead of semicolon to end the dialect assignment, meaning that the opcode assignment only happens in the if statement. Properly terminate that line so that Clang no longer warns. Fixes: 5b102782c7f4 ("powerpc/xmon: Enable disassembly files (compilation changes)") Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
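A minimal standalone illustration of this bug class (not the xmon code itself): with a comma where a semicolon was intended, the following assignment becomes part of the if-body, so the pointer is only initialized when the condition holds.

#include <stddef.h>

struct powerpc_opcode;

void lookup(int power9)
{
	unsigned long dialect = 0;
	const struct powerpc_opcode *opcode;

	if (power9)
		dialect |= 1,	/* comma operator: the next line is still the if-body */
	opcode = NULL;		/* runs only when power9 is true */

	if (opcode == NULL)	/* may test an uninitialized pointer */
		return;
}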
2019-02-26  powerpc/powernv: move OPAL call wrapper tracing and interrupt handling to C  (Nicholas Piggin)
The OPAL call wrapper gets interrupt disabling wrong. It disables interrupts just by clearing MSR[EE], which has two problems: - It doesn't call into the IRQ tracing subsystem, which means tracing across OPAL calls does not always notice IRQs have been disabled. - It doesn't go through the IRQ soft-mask code, which causes a minor bug. MSR[EE] can not be restored by saving the MSR then clearing MSR[EE], because a racing interrupt while soft-masked could clear MSR[EE] between the two steps. This can cause MSR[EE] to be incorrectly enabled when the OPAL call returns. Fortunately that should only result in another masked interrupt being taken to disable MSR[EE] again, but it's a bit sloppy. The existing code also saves MSR to PACA, which is not re-entrant if there is a nested OPAL call from different MSR contexts, which can happen these days with SRESET interrupts on bare metal. To fix these issues, move the tracing and IRQ handling code to C, and call into asm just for the low level call when everything is ready to go. Save the MSR on stack rather than PACA. Performance cost is kept to a minimum with a few optimisations: - The endian switch upon return is combined with the MSR restore, which avoids an expensive context synchronizing operation for LE kernels. This makes up for the additional mtmsrd to enable interrupts with local_irq_enable(). - blr is now used to return from the opal_* functions that are called as C functions, to avoid link stack corruption. This requires a skiboot fix as well to keep the call stack balanced. A NULL call is more costly after this, (410ns->430ns on POWER9), but OPAL calls are generally not performance critical at this scale. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-26  powerpc/64s: Fix data interrupts vs d-side MCE reentrancy  (Nicholas Piggin)
Handlers for interrupts that set DAR / DSISR, set MSR[RI] before those SPRs are read. If a d-side machine check hits in this window, DAR / DSISR will be clobbered silently, leading to random corruption. Fix this by having handlers save those registers before setting MSR[RI]. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-26  powerpc/64s: Prepare to handle data interrupts vs d-side MCE reentrancy  (Nicholas Piggin)
A subsequent fix for data interrupts (those that set DAR / DSISR) requires some interrupt macros to be open-coded, and also requires the 0x300 interrupt handler to be moved out-of-line. This patch does that without changing behaviour, which makes the later fix a smaller change. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-26  powerpc/64s: system reset interrupt preserve HSRRs  (Nicholas Piggin)
Code that uses HSRR registers is not required to clear MSR[RI] by convention, however the system reset NMI itself may use HSRR registers (e.g., to call OPAL) and clobber them. Rather than introduce the requirement to clear RI in order to use HSRRs, have system reset interrupt save and restore HSRRs. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-26  powerpc/64s: Fix HV NMI vs HV interrupt recoverability test  (Nicholas Piggin)
HV interrupts that use HSRR registers do not enter with MSR[RI] clear, but their entry code is not recoverable vs NMI, due to shared use of HSPRG1 as a scratch register to save r13. This means that a system reset or machine check that hits in HSRR interrupt entry can cause r13 to be silently corrupted. Fix this by marking NMIs non-recoverable if they land in HV interrupt ranges. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-02-26  ARM: 8849/1: NOMMU: Fix encodings for PMSAv8's PRBAR4/PRLAR4  (Vladimir Murzin)
To access PRBARn, where n is referenced as a binary number: MRC p15, 0, <Rt>, c6, c8+n[3:1], 4*n[0] ; Read PRBARn into Rt MCR p15, 0, <Rt>, c6, c8+n[3:1], 4*n[0] ; Write Rt into PRBARn To access PRLARn, where n is referenced as a binary number: MRC p15, 0, <Rt>, c6, c8+n[3:1], 4*n[0]+1 ; Read PRLARn into Rt MCR p15, 0, <Rt>, c6, c8+n[3:1], 4*n[0]+1 ; Write Rt into PRLARn For PR{B,L}AR4, n is 4, n[0] is 0, n[3:1] is 2, while current encoding done with n[0] set to 1 which is wrong. Use proper encoding instead. Fixes: 046835b4aa22b9ab6aa0bb274e3b71047c4b887d ("ARM: 8757/1: NOMMU: Support PMSAv8 MPU") Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
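Applying the rule quoted above to n = 4: n[3:1] = 0b010, so CRm is c8+2 = c10, and n[0] = 0, so opc2 is 0 for PRBAR4 and 1 for PRLAR4; encoding with n[0] = 1 instead gives opc2 4/5, which per the same formula addresses PRBAR5/PRLAR5. A sketch of the corrected accessors expressed as inline assembly (helper names are illustrative; the kernel uses assembly macros for this):

static inline unsigned int prbar4_read(void)
{
	unsigned int v;

	asm volatile("mrc p15, 0, %0, c6, c10, 0" : "=r" (v));	/* PRBAR4 */
	return v;
}

static inline unsigned int prlar4_read(void)
{
	unsigned int v;

	asm volatile("mrc p15, 0, %0, c6, c10, 1" : "=r" (v));	/* PRLAR4 */
	return v;
}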
2019-02-26  ARM: 8848/1: virt: Align GIC version check with arm64 counterpart  (Vladimir Murzin)
arm64 relaxed its GIC version check at the early boot stage following an update of the GIC architecture; align ARM with that. The Fixes tag below is included to help backports (even though the code was correct at the time of writing). Fixes: e59941b9b381 ("ARM: 8527/1: virt: enable GICv3 system registers") Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26  ARM: 8847/1: pm: fix HYP/SVC mode mismatch when MCPM is used  (Marek Szyprowski)
MCPM does a soft reset of the CPUs and uses common cpu_resume() routine to perform low-level platform initialization. This results in a try to install HYP stubs for the second time for each CPU and results in false HYP/SVC mode mismatch detection. The HYP stubs are already installed at the beginning of the kernel initialization on the boot CPU (head.S) or in the secondary_startup() for other CPUs. To fix this issue MCPM code should use a cpu_resume() routine without HYP stubs installation. This change fixes HYP/SVC mode mismatch on Samsung Exynos5422-based Odroid XU3/XU4/HC1 boards. Fixes: 3721924c8154 ("ARM: 8081/1: MCPM: provide infrastructure to allow for MCPM loopback") Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Anand Moon <linux.amoon@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26  ARM: 8845/1: use unified assembler in c files  (Stefan Agner)
Use unified assembler syntax (UAL) in inline assembler. Divided syntax is considered deprecated. This will also allow to build the kernel using LLVM's integrated assembler. When compiling non-Thumb2 GCC always emits a ".syntax divided" at the beginning of the inline assembly which makes the assembler fail. Since GCC 5 there is the -masm-syntax-unified GCC option which make GCC assume unified syntax asm and hence emits ".syntax unified" even in ARM mode. However, the option is broken since GCC version 6 (see GCC PR88648 [1]). Work around by adding ".syntax unified" as part of the inline assembly. [0] https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html#index-masm-syntax-unified [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88648 Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
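A sketch of the workaround described above (the macro name is illustrative): prefix inline assembly with an explicit .syntax unified directive so UAL mnemonics assemble in ARM mode with both GNU as and LLVM's integrated assembler.

#define UNIFIED_SYNTAX	".syntax unified\n"

static inline unsigned long read_cpsr(void)
{
	unsigned long cpsr;

	asm volatile(UNIFIED_SYNTAX
		     "mrs %0, cpsr"
		     : "=r" (cpsr));
	return cpsr;
}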
2019-02-26  ARM: 8844/1: use unified assembler in assembly files  (Stefan Agner)
Use unified assembler syntax (UAL) in assembly files. Divided syntax is considered deprecated. This will also allow to build the kernel using LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26  ARM: 8843/1: use unified assembler in headers  (Stefan Agner)
Use unified assembler syntax (UAL) in headers. Divided syntax is considered deprecated. This will also allow to build the kernel using LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26  ARM: 8841/1: use unified assembler in macros  (Stefan Agner)
Use unified assembler syntax (UAL) in macros. Divided syntax is considered deprecated. This will also allow to build the kernel using LLVM's integrated assembler. Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26  ARM: 8840/1: use a raw_spinlock_t in unwind  (Sebastian Andrzej Siewior)
Mostly unwind is done with irqs enabled however SLUB may call it with irqs disabled while creating a new SLUB cache. I had system freeze while loading a module which called kmem_cache_create() on init. That means SLUB's __slab_alloc() disabled interrupts and then ->new_slab_objects() ->new_slab() ->setup_object() ->setup_object_debug() ->init_tracking() ->set_track() ->save_stack_trace() ->save_stack_trace_tsk() ->walk_stackframe() ->unwind_frame() ->unwind_find_idx() =>spin_lock_irqsave(&unwind_lock); Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26  ARM: 8839/1: kprobe: make patch_lock a raw_spinlock_t  (Yang Shi)
When running kprobe on -rt kernel, the below bug is caught: |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931 |in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0 |Preemption disabled at:[<802f2b98>] cpu_stopper_thread+0xc0/0x140 |CPU: 0 PID: 14 Comm: migration/0 Tainted: G O 4.8.3-rt2 #1 |Hardware name: Freescale LS1021A |[<8025a43c>] (___might_sleep) |[<80b5b324>] (rt_spin_lock) |[<80b5c31c>] (__patch_text_real) |[<80b5c3ac>] (patch_text_stop_machine) |[<802f2920>] (multi_cpu_stop) Since patch_text_stop_machine() is called in stop_machine() which disables IRQ, sleepable lock should be not used in this atomic context, so replace patch_lock to raw lock. Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
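The shape of the change, simplified: text patching runs under stop_machine() with interrupts disabled, so the lock serializing it must be a raw spinlock, which does not sleep on PREEMPT_RT. The function below is a sketch of the pattern, not the full patch_text implementation.

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(patch_lock);

static void __patch_text_sketch(void *addr, unsigned int insn)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&patch_lock, flags);
	/* ... map the page via fixmap and write the new instruction ... */
	raw_spin_unlock_irqrestore(&patch_lock, flags);
}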
2019-02-26  powerpc/mm/hash: Handle mmap_min_addr correctly in get_unmapped_area topdown search  (Aneesh Kumar K.V)
When doing a top-down search the low_limit is not PAGE_SIZE but rather max(PAGE_SIZE, mmap_min_addr). This handles cases in which mmap_min_addr > PAGE_SIZE. Fixes: fba2369e6ceb ("mm: use vm_unmapped_area() on powerpc architecture") Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
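The essence of the fix, simplified (the helper name is illustrative; the real change sits inside the slice-based get_unmapped_area code):

static unsigned long slice_topdown_low_limit(void)
{
	/* mmap_min_addr can be raised above PAGE_SIZE via the
	 * vm.mmap_min_addr sysctl, so it is the effective floor. */
	return max(PAGE_SIZE, mmap_min_addr);	/* was: PAGE_SIZE */
}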
2019-02-26  powerpc/hugetlb: Handle mmap_min_addr correctly in get_unmapped_area callback  (Aneesh Kumar K.V)
After we ALIGN up the address we need to make sure we didn't overflow and resulted in zero address. In that case, we need to make sure that the returned address is greater than mmap_min_addr. This fixes selftest va_128TBswitch --run-hugetlb reporting failures when run as non root user for mmap(-1, MAP_HUGETLB) The bug is that a non-root user requesting address -1 will be given address 0 which will then fail, whereas they should have been given something else that would have succeeded. We also avoid the first mmap(-1, MAP_HUGETLB) returning NULL address as mmap address with this change. So we think this is not a security issue, because it only affects whether we choose an address below mmap_min_addr, not whether we actually allow that address to be mapped. ie. there are existing capability checks to prevent a user mapping below mmap_min_addr and those will still be honoured even without this fix. Fixes: 484837601d4d ("powerpc/mm: Add radix support for hugetlb") Reviewed-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
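A sketch of the added check (names are illustrative): rounding a hint address such as -1 up to the huge page size can wrap to 0, which must then be rejected just like any other address below mmap_min_addr.

#include <linux/types.h>

static bool hugetlb_hint_ok(unsigned long addr, unsigned long len,
			    unsigned long huge_page_size,
			    unsigned long high_limit)
{
	addr = ALIGN(addr, huge_page_size);	/* ALIGN(-1, sz) wraps to 0 */

	/* 0 (and anything else below mmap_min_addr) is not acceptable. */
	return addr >= mmap_min_addr && high_limit - len >= addr;
}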
2019-02-25  ARC: boot log: cut down on verbosity  (Vineet Gupta)
The syscall ABI has long been fixed, so no need to call that out now. Also, there's no need to print really fine details such as norm, barrel-shifter etc. Those are given in a Linux enabled hardware config. So now we print just 1 line for all optional "instruction" related hardware features:

  ISA Extn	: atomic ll64 unalign mpy[opt 9] div_rem

vs. 2 before:

  ISA Extn	: atomic ll64 unalign
  		: mpy[opt 9] div_rem norm barrel-shift swap minmax swape

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-02-25  ARCv2: boot log: refurbish HS core/release identification  (Vineet Gupta)
HS core names and releases have so far been identified based solely on the IDENTIFY.ARCVER field. With future HS releases this will not be sufficient, as the same ARCVER 0x54 could be an HS38 or an HS48. So rewrite the code to use a new BCR to identify the cores properly. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-02-25  x86/mce: Improve error message when kernel cannot recover, p2  (Tony Luck)
In c7d606f560e4 ("x86/mce: Improve error message when kernel cannot recover") a case was added for a machine check caused by a DATA access to poison memory from the kernel. A case should have been added also for an uncorrectable error during an instruction fetch in the kernel. Add that extra case so the error message now reads: mce: [Hardware Error]: Machine check: Instruction fetch error in kernel Fixes: c7d606f560e4 ("x86/mce: Improve error message when kernel cannot recover") Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Pu Wen <puwen@hygon.cn> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190225205940.15226-1-tony.luck@intel.com
2019-02-25  MIPS: lantiq: Remove separate GPHY Firmware loader  (Hauke Mehrtens)
The separate GPHY Firmware loader driver is not used any more, the GPHY firmware is now loaded by the GSWIP switch driver which also makes use of the GPHY. Remove the old unused GPHY firmware loader driver. The GPHY firmware is useless without an Ethernet and switch driver, it should not harm if loading this does not work for system using an old device tree. I am not aware of any vendor separating the device tree from the kernel binary, it should be ok to remove this. The code and the functionality form this separate GPHY firmware loader was added to the gswip driver in commit 14fceff4771e ("net: dsa: Add Lantiq / Intel DSA driver for vrx200") Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: devicetree@vger.kernel.org Cc: john@phrozen.org Cc: netdev@vger.kernel.org
2019-02-25  x86/uaccess: Remove unused __addr_ok() macro  (Borislav Petkov)
This was caught while staring at the whole {set,get}_fs() machinery. Its last user, the 32-bit version of strnlen_user(), went away with 5723aa993d83 ("x86: use the new generic strnlen_user() function"), so drop it. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Jann Horn <jannh@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: the arch/x86 maintainers <x86@kernel.org> Cc: "Tobin C. Harding" <tobin@kernel.org> Link: https://lkml.kernel.org/r/20190225191109.7671-1-bp@alien8.de
2019-02-25  MIPS: BCM63XX: provide DMA masks for ethernet devices  (Jonas Gorski)
The switch to the generic dma ops made dma masks mandatory, breaking devices having them not set. In case of bcm63xx, it broke ethernet with the following warning when trying to up the device: [ 2.633123] ------------[ cut here ]------------ [ 2.637949] WARNING: CPU: 0 PID: 325 at ./include/linux/dma-mapping.h:516 bcm_enetsw_open+0x160/0xbbc [ 2.647423] Modules linked in: gpio_button_hotplug [ 2.652361] CPU: 0 PID: 325 Comm: ip Not tainted 4.19.16 #0 [ 2.658080] Stack : 80520000 804cd3ec 00000000 00000000 804ccc00 87085bdc 87d3f9d4 804f9a17 [ 2.666707] 8049cf18 00000145 80a942a0 00000204 80ac0000 10008400 87085b90 eb3d5ab7 [ 2.675325] 00000000 00000000 80ac0000 000022b0 00000000 00000000 00000007 00000000 [ 2.683954] 0000007a 80500000 0013b381 00000000 80000000 00000000 804a1664 80289878 [ 2.692572] 00000009 00000204 80ac0000 00000200 00000002 00000000 00000000 80a90000 [ 2.701191] ... [ 2.703701] Call Trace: [ 2.706244] [<8001f3c8>] show_stack+0x58/0x100 [ 2.710840] [<800336e4>] __warn+0xe4/0x118 [ 2.715049] [<800337d4>] warn_slowpath_null+0x48/0x64 [ 2.720237] [<80289878>] bcm_enetsw_open+0x160/0xbbc [ 2.725347] [<802d1d4c>] __dev_open+0xf8/0x16c [ 2.729913] [<802d20cc>] __dev_change_flags+0x100/0x1c4 [ 2.735290] [<802d21b8>] dev_change_flags+0x28/0x70 [ 2.740326] [<803539e0>] devinet_ioctl+0x310/0x7b0 [ 2.745250] [<80355fd8>] inet_ioctl+0x1f8/0x224 [ 2.749939] [<802af290>] sock_ioctl+0x30c/0x488 [ 2.754632] [<80112b34>] do_vfs_ioctl+0x740/0x7dc [ 2.759459] [<80112c20>] ksys_ioctl+0x50/0x94 [ 2.763955] [<800240b8>] syscall_common+0x34/0x58 [ 2.768782] ---[ end trace fb1a6b14d74e28b6 ]--- [ 2.773544] bcm63xx_enetsw bcm63xx_enetsw.0: cannot allocate rx ring 512 Fix this by adding appropriate DMA masks for the platform devices. Fixes: f8c55dc6e828 ("MIPS: use generic dma noncoherent ops for simple noncoherent platforms") Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: stable@vger.kernel.org # v4.19+
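The shape of the fix (a sketch; the device name matches the existing bcm63xx enet platform device, but the exact fields shown are illustrative rather than the literal upstream hunk): give the enet platform devices an explicit 32-bit DMA mask so the generic DMA ops accept mappings for them.

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static u64 enet_dmamask = DMA_BIT_MASK(32);

static struct platform_device bcm63xx_enet_shared_device = {
	.name	= "bcm63xx_enet_shared",
	.id	= 0,
	.dev	= {
		.dma_mask		= &enet_dmamask,
		.coherent_dma_mask	= DMA_BIT_MASK(32),
	},
};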
2019-02-25  arc: hsdk_defconfig: Enable CONFIG_BLK_DEV_RAM  (Corentin Labbe)
We now have an HSDK device in our kernelci lab, but a kernel built via hsdk_defconfig lacks ramfs support, so it cannot boot kernelci jobs yet. This patch therefore enables CONFIG_BLK_DEV_RAM in hsdk_defconfig. Signed-off-by: Corentin Labbe <clabbe@baylibre.com> Acked-by: Alexey Brodkin <abrodkin@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-02-25  ARC: u-boot args: check that magic number is correct  (Eugeniy Paltsev)
In case of devboards we really often disable the bootloader and load the Linux image into memory via JTAG. Even if the kernel tries to verify uboot_tag and uboot_arg, there is still a chance that we treat some garbage in registers as valid u-boot arguments in the JTAG case. E.g. it is enough to have '1' in r0 to treat any value in r2 as a boot command line. So check that the magic number passed from u-boot is correct and drop the u-boot arguments otherwise. That helps to reduce the possibility of using garbage as u-boot arguments in the JTAG case. We can safely check the U-boot magic value (0x0) passed to Linux via the r1 register, as U-boot has passed it from the beginning, so there are no backward-compatibility issues. Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
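A sketch of the check described above (the constant and helper names are illustrative): only accept r0/r2 as U-Boot arguments when the magic value U-Boot passes in r1 matches.

#include <linux/types.h>

#define UBOOT_MAGIC_VALUE	0	/* U-Boot has always passed 0 in r1 */

static bool uboot_args_valid(int uboot_tag, unsigned int uboot_magic,
			     unsigned long uboot_arg)
{
	if (uboot_magic != UBOOT_MAGIC_VALUE)
		return false;	/* likely leftover garbage (e.g. JTAG boot) */

	/* tag 1: r2 points to a boot command line */
	return uboot_tag == 1 && uboot_arg != 0;
}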
2019-02-25  ARC: perf: bpok condition only exists for ARCompact  (Vineet Gupta)
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-02-25  ARCv2: Add explicit unaligned access support (and ability to disable too)  (Eugeniy Paltsev)
As of today we enable unaligned access unconditionally on ARCv2. Do this under a Kconfig option to allow disabling it for testing, benchmarking etc. Also while at it:
 - Select HAVE_EFFICIENT_UNALIGNED_ACCESS
 - Although gcc defaults to unaligned access (since GNU 2018.03), add the right toggles for enabling or disabling it as appropriate
 - Update the bootlog to print both HW feature status (exists, enabled/disabled) and SW status (used / not used)
 - Wire up the relaxed memcpy for unaligned access
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> [vgupta: squashed patches, handle gcc -mno-unaligned-access quick]
2019-02-25  riscv: Use latest system call ABI  (Arnd Bergmann)
We don't yet have an upstream glibc port for riscv, so there is no user space for the existing ABI, and we can remove the definitions for 32-bit time_t, off_t and struct resource and system calls based on them, including the vdso. Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2019-02-25  x86/uaccess: Don't leak the AC flag into __put_user() value evaluation  (Andy Lutomirski)
When calling __put_user(foo(), ptr), the __put_user() macro would call foo() in between __uaccess_begin() and __uaccess_end(). If that code were buggy, then those bugs would be run without SMAP protection. Fortunately, there seem to be few instances of the problem in the kernel. Nevertheless, __put_user() should be fixed to avoid doing this. Therefore, evaluate __put_user()'s argument before setting AC. This issue was noticed when an objtool hack by Peter Zijlstra complained about genregs_get() and I compared the assembly output to the C source. [ bp: Massage commit message and fixed up whitespace. ] Fixes: 11f1a4b9755f ("x86: reorganize SMAP handling in user space accesses") Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20190225125231.845656645@infradead.org
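The essence of the fix, as a simplified sketch of the macro shape (__put_user_size here stands in for the real size-dispatching helper and is not its exact signature): evaluate the value expression into a temporary before __uaccess_begin() sets EFLAGS.AC, so arbitrary C such as foo() never runs with SMAP protection disabled.

#define __put_user_sketch(x, ptr)					\
({									\
	__typeof__(*(ptr)) __pu_val = (x);	/* evaluated with AC still clear */ \
	int __pu_err = 0;						\
									\
	__uaccess_begin();		/* stac: opens the user-access window */ \
	__put_user_size(__pu_val, (ptr), sizeof(*(ptr)), __pu_err);	\
	__uaccess_end();		/* clac */			\
	__pu_err;							\
})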