path: root/arch
2018-10-04  x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) In this patch we wrap the paravirt call section tricks in a macro, to hide it from GCC. The effect of the patch is more aggressive inlining, which also causes a size increase of the kernel:

      text     data      bss       dec      hex  filename
  18147336 10226688  2957312  31331336  1de1408  ./vmlinux before
  18162555 10226288  2957312  31346155  1de4deb  ./vmlinux after (+14819)

The number of static text symbols (non-inlined functions) goes down: Before: 40053 After: 39942 (-111)

[ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Reviewed-by: Juergen Gross <jgross@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alok Kataria <akataria@vmware.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20181003213100.189959-8-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) This patch increases the kernel size:

      text     data      bss       dec      hex  filename
  18146889 10225380  2957312  31329581  1de0d2d  ./vmlinux before
  18147336 10226688  2957312  31331336  1de1408  ./vmlinux after (+1755)

But enables more aggressive inlining (and probably better branch decisions). The number of static text symbols in vmlinux is much lower: Before: 40218 After: 40053 (-165) The assembly code gets harder to read due to the extra macro layer.

[ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181003213100.189959-7-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block - i.e. to macrofy the affected block. As a result GCC considers the inline assembly block as a single instruction. This patch handles the LOCK prefix, allowing more aggressive inlining:

      text     data      bss       dec      hex  filename
  18140140 10225284  2957312  31322736  1ddf270  ./vmlinux before
  18146889 10225380  2957312  31329581  1de0d2d  ./vmlinux after (+6845)

This is the reduction in non-inlined functions: Before: 40286 After: 40218 (-68)

Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181003213100.189959-6-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/refcount: Work around GCC inlining bug  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) This patch allows GCC to inline simple functions such as __get_seccomp_filter(). To no-one's surprise the result is that GCC performs more aggressive (read: correct) inlining decisions in these scenarios, which reduces the kernel size and presumably also speeds it up:

      text     data      bss       dec      hex  filename
  18140970 10225412  2957312  31323694  1ddf62e  ./vmlinux before
  18140140 10225284  2957312  31322736  1ddf270  ./vmlinux after (-958)

16 fewer static text symbols: Before: 40302 After: 40286 (-16) - these got inlined instead. Functions such as kref_get(), free_user(), fuse_file_get() now get inlined. Hurray!

[ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <JBeulich@suse.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181003213100.189959-5-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/objtool: Use asm macros to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. In the case of objtool the resulting borkage can be significant, since all the annotations of objtool are discarded during linkage and never inlined, yet GCC bogusly considers most functions affected by objtool annotations as 'too large'. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) This increases the kernel size slightly:

      text     data      bss       dec      hex  filename
  18140829 10224724  2957312  31322865  1ddf2f1  ./vmlinux before
  18140970 10225412  2957312  31323694  1ddf62e  ./vmlinux after (+829)

The number of static text symbols (i.e. non-inlined functions) is reduced: Before: 40321 After: 40302 (-19)

[ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Christopher Li <sparse@chrisli.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-sparse@vger.kernel.org Link: http://lkml.kernel.org/r/20181003213100.189959-4-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs  (Nadav Amit)
Using macros in inline assembly allows us to work around bugs in GCC's inlining decisions. Compile macros.S and use it to assemble all C files. Currently only x86 will use it.

Background: The inlining pass of GCC doesn't include an assembler, so it's not aware of basic properties of the generated code, such as its size in bytes, or that there are such things as discontinuous blocks of code and data due to the newfangled linker feature called 'sections' ... Instead GCC uses a lazy and fragile heuristic: it does a linear count of certain syntactic and whitespace elements in inlined assembly block source code, such as a count of new-lines and semicolons (!), as a poor substitute for "code size and complexity". Unsurprisingly this heuristic falls over and breaks its neck with certain common types of kernel code that use inline assembly, such as the frequent practice of putting useful information into alternative sections. As a result of this fresh, 20+ years old GCC bug, GCC's inlining decisions are effectively disabled for inlined functions that make use of such asm() blocks, because GCC thinks those sections of code are "large" - when in reality they often result in just a very low number of machine instructions. This absolute lack of inlining prowess when GCC comes across such asm() blocks both increases generated kernel code size and causes performance overhead, which is particularly noticeable on paravirt kernels, which make frequent use of these inlining facilities in an attempt to stay out of the way when running on baremetal hardware.

Instead of fixing the compiler we use a workaround: we set an assembly macro and call it from the inlined assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it often isn't but I digress.) This uglifies and bloats the source code - for example just the refcount related changes have this impact:

  Makefile                 | 9 +++++++--
  arch/x86/Makefile        | 7 +++++++
  arch/x86/kernel/macros.S | 7 +++++++
  scripts/Kbuild.include   | 4 +++-
  scripts/mod/Makefile     | 2 ++
  5 files changed, 26 insertions(+), 3 deletions(-)

Yay readability and maintainability, it's not like assembly code is hard to read and maintain ... We also hope that GCC will eventually get fixed, but we are not holding our breath for that. Yet we are optimistic, it might still happen, any decade now.

[ mingo: Wrote new changelog describing the background. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kbuild@vger.kernel.org Link: http://lkml.kernel.org/r/20181003213100.189959-3-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
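For illustration, a minimal stand-alone sketch of the pattern (an assumed user-space example; the real macros live in arch/x86/kernel/macros.S and are made visible to every C file by the build changes above):

  /* A file-scope asm block defines an assembler macro once; every inline
   * asm call site then expands to a single macro invocation, so GCC's
   * line-counting heuristic sees a "one instruction" asm block. */
  asm(
  "        .macro EXAMPLE_ANNOTATION\n"
  "        .pushsection .discard.example, \"a\"\n"
  "        .long 0\n"
  "        .popsection\n"
  "        .endm\n"
  );

  static inline void annotated_nop(void)
  {
          /* Expands to the multi-line sequence above at assembly time. */
          asm volatile("EXAMPLE_ANNOTATION");
  }

  int main(void)
  {
          annotated_nop();
          return 0;
  }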
2018-10-04  kbuild/arch/xtensa: Define LINKER_SCRIPT for the linker script  (Nadav Amit)
Define LINKER_SCRIPT when building the linker script, as is done in other architectures. This is required because upcoming Makefile changes would otherwise break things. Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Chris Zankel <chris@zankel.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-xtensa@linux-xtensa.org Link: http://lkml.kernel.org/r/20181003213100.189959-2-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  powerpc/powernv/npu: Remove atsd_threshold debugfs setting  (Mark Hairgrove)
This threshold is no longer used now that all invalidates issue a single ATSD to each active NPU. Signed-off-by: Mark Hairgrove <mhairgrove@nvidia.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-04  powerpc/powernv/npu: Use size-based ATSD invalidates  (Mark Hairgrove)
Prior to this change only two types of ATSDs were issued to the NPU: invalidates targeting a single page and invalidates targeting the whole address space. The crossover point happened at the configurable atsd_threshold which defaulted to 2M. Invalidates that size or smaller would issue per-page invalidates for the whole range. The NPU supports more invalidation sizes however: 64K, 2M, 1G, and all. These invalidates target addresses aligned to their size. 2M is a common invalidation size for GPU-enabled applications because that is a GPU page size, so reducing the number of invalidates by 32x in that case is a clear improvement. ATSD latency is high in general so now we always issue a single invalidate rather than multiple. This will over-invalidate in some cases, but for any invalidation size over 2M it matches or improves the prior behavior. There's also an improvement for single-page invalidates since the prior version issued two invalidates for that case instead of one. With this change all issued ATSDs now perform a flush, so the flush parameter has been removed from all the helpers. To show the benefit here are some performance numbers from a microbenchmark which creates a 1G allocation then uses mprotect with PROT_NONE to trigger invalidates in strides across the allocation.

One NPU (1 GPU):
           mprotect rate (GB/s)
  Stride   Before    After    Speedup
  64K         5.3      5.6         5%
  1M         39.3     57.4        46%
  2M         49.7     82.6        66%
  4M        286.6    285.7         0%

Two NPUs (6 GPUs):
           mprotect rate (GB/s)
  Stride   Before    After    Speedup
  64K         6.5      7.4        13%
  1M         33.4     67.9       103%
  2M         38.7     93.1       141%
  4M        356.7    354.6        -1%

Anything over 2M is roughly the same as before since both cases issue a single ATSD. Signed-off-by: Mark Hairgrove <mhairgrove@nvidia.com> Reviewed-By: Alistair Popple <alistair@popple.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-04  powerpc/powernv/npu: Reduce eieio usage when issuing ATSD invalidates  (Mark Hairgrove)
There are two types of ATSDs issued to the NPU: invalidates targeting a specific virtual address and invalidates targeting the whole address space. In both cases prior to this change, the sequence was:

  for each NPU
   - Write the target address to the XTS_ATSD_AVA register
   - EIEIO
   - Write the launch value to issue the ATSD

First, a target address is not required when invalidating the whole address space, so that write and the EIEIO have been removed. The AP (size) field in the launch is not needed either. Second, for per-address invalidates the above sequence is inefficient in the common case of multiple NPUs because an EIEIO is issued per NPU. This unnecessarily forces the launches of later ATSDs to be ordered with the launches of earlier ones. The new sequence only issues a single EIEIO:

  for each NPU
   - Write the target address to the XTS_ATSD_AVA register
  EIEIO
  for each NPU
   - Write the launch value to issue the ATSD

Performance results were gathered using a microbenchmark which creates a 1G allocation then uses mprotect with PROT_NONE to trigger invalidates in strides across the allocation. With only a single NPU active (one GPU) the difference is in the noise for both types of invalidates (+/-1%). With two NPUs active (on a 6-GPU system) the effect is more noticeable:

           mprotect rate (GB/s)
  Stride   Before    After    Speedup
  64K         5.9      6.5        10%
  1M         31.2     33.4         7%
  2M         36.3     38.7         7%
  4M        322.6    356.7        11%

Signed-off-by: Mark Hairgrove <mhairgrove@nvidia.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
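Expressed as a rough C sketch (the helper names below are hypothetical, not the actual npu-dma.c functions):

  /* Batch the per-NPU register writes so a single eieio() orders all the
   * AVA writes before all the launches, instead of one eieio() per NPU. */
  static void mmio_invalidate_sketch(struct npu **npus, int count,
                                     unsigned long va, unsigned long launch)
  {
          int i;

          for (i = 0; i < count; i++)
                  write_ava(npus[i], va);         /* hypothetical XTS_ATSD_AVA write */

          eieio();                                /* one barrier for all NPUs */

          for (i = 0; i < count; i++)
                  write_launch(npus[i], launch);  /* hypothetical ATSD launch write */
  }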
2018-10-04  powerpc: remove leftover code of old GCC version checks  (Masahiro Yamada)
Clean up the leftover of commit f2910f0e6835 ("powerpc: remove old GCC version checks"). Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-10-04  powerpc/nohash: fix undefined behaviour when testing page size support  (Daniel Axtens)
When enumerating page size definitions to check hardware support, we construct a constant which is (1U << (def->shift - 10)). However, the array of page size definitions is only initialised for various MMU_PAGE_* constants, so it contains a number of 0-initialised elements with def->shift == 0. This means we end up shifting by a very large number, which gives the following UBSan splat:

  ================================================================================
  UBSAN: Undefined behaviour in /home/dja/dev/linux/linux/arch/powerpc/mm/tlb_nohash.c:506:21
  shift exponent 4294967286 is too large for 32-bit type 'unsigned int'
  CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.0-rc3-00045-ga604f927b012-dirty #6
  Call Trace:
  [c00000000101bc20] [c000000000a13d54] .dump_stack+0xa8/0xec (unreliable)
  [c00000000101bcb0] [c0000000004f20a8] .ubsan_epilogue+0x18/0x64
  [c00000000101bd30] [c0000000004f2b10] .__ubsan_handle_shift_out_of_bounds+0x110/0x1a4
  [c00000000101be20] [c000000000d21760] .early_init_mmu+0x1b4/0x5a0
  [c00000000101bf10] [c000000000d1ba28] .early_setup+0x100/0x130
  [c00000000101bf90] [c000000000000528] start_here_multiplatform+0x68/0x80
  ================================================================================

Fix this by first checking if the element exists (shift != 0) before constructing the constant. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
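The shape of the guard, as a hedged sketch paraphrasing the affected loop (not a verbatim copy of the tlb_nohash.c code):

  /* Skip 0-initialised entries (shift == 0) before deriving a shift
   * amount from def->shift, so the exponent can never underflow. */
  for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
          struct mmu_psize_def *def = &mmu_psize_defs[psize];

          if (!def->shift)        /* uninitialised MMU_PAGE_* slot */
                  continue;

          if (tlb1ps & (1U << (def->shift - 10)))
                  def->flags |= MMU_PAGE_SIZE_DIRECT;
  }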
2018-10-04  Merge branch 'linus' into x86/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/vdso: Fix vDSO syscall fallback asm constraint regression  (Andy Lutomirski)
When I added the missing memory outputs, I failed to update the index of the first argument (ebx) on 32-bit builds, which broke the fallbacks. Somehow I must have screwed up my testing or gotten lucky. Add another test to cover gettimeofday() as well. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: 715bd9d12f84 ("x86/vdso: Fix asm constraints on vDSO syscall fallbacks") Link: http://lkml.kernel.org/r/21bd45ab04b6d838278fa5bebfa9163eceffa13c.1538608971.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  KVM: PPC: Book3S HV: Avoid crash from THP collapse during radix page fault  (Paul Mackerras)
Commit 71d29f43b633 ("KVM: PPC: Book3S HV: Don't use compound_order to determine host mapping size", 2018-09-11) added a call to __find_linux_pte() and a dereference of the returned PTE pointer to the radix page fault path in the common case where the page is normal system memory. Previously, __find_linux_pte() was only called for mappings to physical addresses which don't have a page struct (e.g. memory-mapped I/O) or where the page struct is marked as reserved memory. This exposes us to the possibility that the returned PTE pointer could be NULL, for example in the case of a concurrent THP collapse operation. Dereferencing the returned NULL pointer causes a host crash. To fix this, we check for NULL, and if it is NULL, we retry the operation by returning to the guest, with the expectation that it will generate the same page fault again (unless of course it has been fixed up by another CPU in the meantime). Fixes: 71d29f43b633 ("KVM: PPC: Book3S HV: Don't use compound_order to determine host mapping size") Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
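A simplified sketch of the resulting pattern in the radix fault path (illustrative only; names and signatures approximate the real code):

  static int radix_fault_sketch(struct kvm_vcpu *vcpu, unsigned long hva)
  {
          unsigned int shift;
          pte_t *ptep, pte;

          /* A concurrent THP collapse can leave us with no PTE here; return
           * to the guest so the same fault is taken and retried. */
          ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
          if (!ptep)
                  return RESUME_GUEST;

          pte = READ_ONCE(*ptep);
          /* ... continue building the host mapping from 'pte' ... */
          return 0;
  }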
2018-10-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Minor conflict in net/core/rtnetlink.c: David Ahern's bug fix in 'net' overlapped the renaming of a netlink attribute in net-next. Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-04  arm64: dts: uniphier: Add USB2 PHY nodes  (Kunihiko Hayashi)
Add nodes of USB2 physical layer for UniPhier SoC. This supports LD11. Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-10-04  arm64: dts: uniphier: Add USB3 controller nodes  (Kunihiko Hayashi)
Add USB3 controller nodes including usb-core, resets, regulator, ss-phy and hs-phy. This supports LD20, PXs3 and the boards. This includes additional efuse nodes for obtaining PHY trimming values. Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-10-04  ARM: dts: uniphier: Add USB2 PHY nodes  (Kunihiko Hayashi)
Add nodes of USB2 physical layer for UniPhier SoC. This supports Pro4. Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-10-04  ARM: dts: uniphier: Add USB3 controller nodes  (Kunihiko Hayashi)
Add USB3 controller nodes including usb-core, resets, regulator, ss-phy and hs-phy. This supports Pro4, PXs2 and the boards. Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-10-03  x86/intel_rdt: Show missing resctrl mount options  (Xiaochen Shen)
In the resctrl filesystem, mount options exist to enable L3/L2 CDP and MBA Software Controller features if the platform supports them:

  mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps]] /sys/fs/resctrl

But currently only the "cdp" option is displayed in /proc/mounts. The "cdpl2" and "mba_MBps" options are not shown even when they are active.

Before:
  # mount -t resctrl resctrl -o cdp,mba_MBps /sys/fs/resctrl
  # grep resctrl /proc/mounts
  /sys/fs/resctrl /sys/fs/resctrl resctrl rw,relatime,cdp 0 0

After:
  # mount -t resctrl resctrl -o cdp,mba_MBps /sys/fs/resctrl
  # grep resctrl /proc/mounts
  /sys/fs/resctrl /sys/fs/resctrl resctrl rw,relatime,cdp,mba_MBps 0 0

Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com> Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "H Peter Anvin" <hpa@zytor.com> Cc: "Tony Luck" <tony.luck@intel.com> Link: https://lkml.kernel.org/r/1536796118-60135-1-git-send-email-fenghua.yu@intel.com
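A sketch of what the corresponding show_options hook plausibly looks like after this change (identifiers approximated from the resctrl code, not guaranteed verbatim):

  static int rdt_show_options(struct seq_file *seq, struct kernfs_root *kf_root)
  {
          /* Emit every active mount option, not just "cdp". */
          if (rdt_resources_all[RDT_RESOURCE_L3DATA].alloc_enabled)
                  seq_puts(seq, ",cdp");
          if (rdt_resources_all[RDT_RESOURCE_L2DATA].alloc_enabled)
                  seq_puts(seq, ",cdpl2");
          if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA]))
                  seq_puts(seq, ",mba_MBps");
          return 0;
  }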
2018-10-03  x86/intel_rdt: Switch to bitmap_zalloc()  (Andy Shevchenko)
Switch to bitmap_zalloc() to show clearly what is allocated. Besides that, it returns a pointer of bitmap type instead of an opaque void *. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/20180830115039.63430-1-andriy.shevchenko@linux.intel.com
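The conversion pattern, as a brief hedged sketch (the wrapper name here is illustrative, not an actual rdt function):

  #include <linux/bitmap.h>
  #include <linux/slab.h>

  /* Before: kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long), GFP_KERNEL)
   * returned an opaque void *.  After: the intent (a zeroed bitmap of nbits
   * bits) is explicit and the return type is already unsigned long *. */
  static unsigned long *alloc_busy_bitmap(unsigned int nbits)
  {
          return bitmap_zalloc(nbits, GFP_KERNEL);  /* pair with bitmap_free() */
  }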
2018-10-03  x86/intel_rdt: Re-enable pseudo-lock measurements  (Reinette Chatre)
Commit 4a7a54a55e72 ("x86/intel_rdt: Disable PMU access") disabled measurements of pseudo-locked regions because of incorrect usage of the performance monitoring hardware. Cache pseudo-locking measurements are now done correctly with the in-kernel perf API and its use can be re-enabled at this time. The adjustment to the in-kernel perf API also separated the L2 and L3 measurements that can be triggered separately from user space. The re-enabling of the measurements is thus not a simple revert of the original disable, in order to accommodate the additional parameter that is now possible. Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: fenghua.yu@intel.com Cc: tony.luck@intel.com Cc: peterz@infradead.org Cc: acme@kernel.org Cc: gavin.hindman@intel.com Cc: jithu.joseph@intel.com Cc: dave.hansen@intel.com Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/bfb9fc31692e0c62d9ca39062e55eceb6a0635b5.1537377064.git.reinette.chatre@intel.com
2018-10-03  arm64: dts: meson-axg: s400: disable emmc  (Jerome Brunet)
While it is possible to rework the s400 board to solder an eMMC on it, it is not the default option and most boards are fitted with a NAND instead. Let's disable the emmc device by default to reflect this. The board equipped with an eMMC will just have to alter the DT in the bootloader, like we do for the reserved memory regions. Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2018-10-03  arm64: dts: meson-axg: s400: add missing emmc pwrseq  (Jerome Brunet)
eMMC pwrseq is defined in the s400 dts but not used in the emmc node. This is probably just a copy/paste error. Fixes: 221cf34bac54 ("ARM64: dts: meson-axg: enable the eMMC controller") Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2018-10-03  arm64: dts: clearfog-gt-8k: add PCIe slot description  (Baruch Siach)
This adds support for the PCIe interface on the CON4 mini-PCIe connector. Signed-off-by: Baruch Siach <baruch@tkos.co.il> Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2018-10-03  arm64: xen: Use existing helper to check interrupt status  (Julien Thierry)
The status of interrupts might depend on more than just pstate. Use interrupts_disabled() instead of raw_irqs_disabled_flags() to take the full context into account. Acked-by: Stefano Stabellini <sstabellini@kernel.org> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-10-03  arm64: Use daifflag_restore after bp_hardening  (Julien Thierry)
For EL0 entries requiring bp_hardening, daif status is kept at DAIF_PROCCTX_NOIRQ until after hardening has been done. Then interrupts are enabled through local_irq_enable(). Before using local_irq_* functions, daifflags should be properly restored to a state where IRQs are enabled. Enable IRQs by restoring DAIF_PROCCTX state after bp hardening. Acked-by: James Morse <james.morse@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-10-03  arm64: daifflags: Use irqflags functions for daifflags  (Julien Thierry)
Some of the work done in daifflags save/restore is already provided by irqflags functions. Daifflags should always be a superset of irqflags (it handles irq status + status of other flags). Modifying behaviour of irqflags should alter the behaviour of daifflags. Use irqflags_save/restore functions for the corresponding daifflags operation. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-10-03  signal: Distinguish between kernel_siginfo and siginfo  (Eric W. Biederman)
Linus recently observed that if we did not worry about the padding member in struct siginfo it is only about 48 bytes, and 48 bytes is much nicer than 128 bytes for allocating on the stack and copying around in the kernel. The obvious thing of only adding the padding when userspace is including siginfo.h won't work as there are sigframe definitions in the kernel that embed struct siginfo. So split siginfo in two: kernel_siginfo and siginfo, keeping the traditional name for the userspace definition, while the version that is used internally in the kernel - and will ultimately not be padded to 128 bytes - is called kernel_siginfo. The definition of struct kernel_siginfo I have put in include/signal_types.h. A set of buildtime checks has been added to verify the two structures have the same field offsets. To make it easy to verify the change kernel_siginfo retains the same size as siginfo. The reduction in size comes in a following change. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-10-03  signal: Remove the need for __ARCH_SI_PREAMBLE_SIZE and SI_PAD_SIZE  (Eric W. Biederman)
Rework the definition of struct siginfo so that the array padding struct siginfo to SI_MAX_SIZE can be placed in a union alongside the rest of the struct siginfo members. The result is that we no longer need the __ARCH_SI_PREAMBLE_SIZE or SI_PAD_SIZE definitions. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
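The rough shape of the reworked definition (an illustrative sketch, not the exact uapi header):

  #define SI_MAX_SIZE 128

  union __sifields {
          struct { pid_t _pid; uid_t _uid; } _kill;
          /* ... the other per-signal members, elided ... */
  };

  struct siginfo {
          union {
                  struct {
                          int si_signo;
                          int si_errno;
                          int si_code;
                          union __sifields _sifields;
                  };
                  int _si_pad[SI_MAX_SIZE / sizeof(int)];  /* pads to SI_MAX_SIZE */
          };
  };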
2018-10-03  signal/sparc: Move EMT_TAGOVF into the generic siginfo.h  (Eric W. Biederman)
When moving all of the architectures specific si_codes into siginfo.h, I apparently overlooked EMT_TAGOVF. Move it now. Remove the now redundant test in siginfo_layout for SIGEMT as now NSIGEMT is always defined. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-10-03  x86/mm: Fix typo in comment  (Takuya Yamamoto)
Signed-off-by: Takuya Yamamoto <tkyymmt01@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20180829072730.988-1-tkyymmt01@gmail.com
2018-10-03  arm64: KVM: Remove some extra semicolon in kvm_target_cpu  (zhong jiang)
There are some extra semicolons in kvm_target_cpu(); remove them. Signed-off-by: zhong jiang <zhongjiang@huawei.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-10-03  KVM: arm/arm64: Rename kvm_arm_config_vm to kvm_arm_setup_stage2  (Marc Zyngier)
VM tends to be a very overloaded term in KVM, so let's keep it to describe the virtual machine. For the virtual memory setup, let's use the "stage2" suffix. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-10-03  kvm: arm64: Allow tuning the physical address size for VM  (Suzuki K Poulose)
Allow specifying the physical address size limit for a new VM via the kvm_type argument for the KVM_CREATE_VM ioctl. This allows us to finalise the stage2 page table as early as possible and hence perform the right checks on the memory slots without complication. The size is encoded as Log2(PA_Size) in bits[7:0] of the type field. For backward compatibility the value 0 is reserved and implies 40bits. Also, lift the limit of the IPA to the host limit and allow lower IPA sizes (e.g, 32). The userspace could check the extension KVM_CAP_ARM_VM_IPA_SIZE for the availability of this feature. The cap check returns the maximum limit for the physical address shift supported by the host. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
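From the userspace side, usage would look roughly like this (a hedged sketch; error handling mostly elided):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Ask for a given IPA size when creating the VM; fall back to the legacy
   * 40-bit default (type 0) if the capability is absent or the size is too
   * large for this host. */
  static int create_vm(int kvm_fd, int ipa_bits)
  {
          int max = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);

          if (max > 0 && ipa_bits <= max)
                  return ioctl(kvm_fd, KVM_CREATE_VM,
                               KVM_VM_TYPE_ARM_IPA_SIZE(ipa_bits));

          return ioctl(kvm_fd, KVM_CREATE_VM, 0);
  }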
2018-10-03  kvm: arm64: Limit the minimum number of page table levels  (Suzuki K Poulose)
Since we are about to remove the lower limit on the IPA size, make sure that we do not go to 1 level page table (e.g, with 32bit IPA on 64K host with concatenation) to avoid splitting the host PMD huge pages at stage2. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-10-03  kvm: arm64: Set a limit on the IPA size  (Suzuki K Poulose)
So far we have restricted the IPA size of the VM to the default value (40bits). Now that we can manage the IPA size per VM and support dynamic stage2 page tables, we can allow VMs to have larger IPA. This patch introduces the maximum IPA size supported on the host. This is decided by the following factors:

 1) Maximum PARange supported by the CPUs - This can be inferred from the system wide safe value.
 2) Maximum PA size supported by the host kernel (48 vs 52)
 3) Number of levels in the host page table (as we base our stage2 tables on the host table helpers).

Since the stage2 page table code is dependent on the stage1 page table, we always ensure that:

  Number of Levels at Stage1 >= Number of Levels at Stage2

So we limit the IPA to make sure that the above condition is satisfied. This will affect the following combinations of VA_BITS and IPA for different page sizes:

  Host configuration | Unsupported IPA ranges
  39bit VA, 4K       | [44, 48]
  36bit VA, 16K      | [41, 48]
  42bit VA, 64K      | [47, 52]

Supporting the above combinations would need independent stage2 page table manipulation code, which would need substantial changes. We could pursue that solution independently and switch the page table code once we have it ready. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-10-03  arm64: arch_timer: avoid unused function warning  (Arnd Bergmann)
arm64_1188873_read_cntvct_el0() is protected by the correct CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is also inside of a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section, and causes a warning if that is disabled:

  drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function]

Since the erratum requires that we always apply the workaround in the timer driver, select that symbol as we do for SoC specific errata. Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873") Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-10-03  x86-32, hibernate: Adjust in_suspend after resumed on 32bit system  (Zhimin Gu)
Update the in_suspend variable to reflect the actual hibernation status. Back-port from 64bit system. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32, hibernate: Set up temporary text mapping for 32bit system  (Zhimin Gu)
Set up the temporary text mapping for the final jump address so that the system could jump to the right address after all the pages have been copied back to their original address - otherwise the final mapping for the jump address is invalid. Analogous changes were made for 64-bit in commit 65c0554b73c9 (x86/power/64: Fix kernel text mapping corruption during image restoration). Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32, hibernate: Switch to relocated restore code during resume on 32bit system  (Zhimin Gu)
On 64bit systems, code should be executed in a safe page during page restoring, as the page where the instructions are running during resume might be scribbled over and cause issues. Although on 32bit we currently only resume using the same kernel that did the suspend, we'd like to remove that restriction in the future. Port the corresponding code from the 64bit system: allocate a safe page, copy the restore code to it, then jump to the safe page to run the code. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32, hibernate: Switch to original page table after resumed  (Zhimin Gu)
After all the pages are restored to their previous addresses, the page table switches back to the current swapper_pg_dir. However the swapper_pg_dir currently in use might not be consistent with the previous page table, which might cause issues after resume. Fix this issue by switching to the original page table after resume; the address of the original page table is saved in the hibernation image header. Move the manipulation of restore_cr3 into common code blocks. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32, hibernate: Use the page size macro instead of constant value  (Zhimin Gu)
Convert the hard-coded value to the PAGE_SIZE macro for better scalability. No functional change. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32, hibernate: Use temp_pgt as the temporary page table  (Zhimin Gu)
This is to reuse the temp_pgt for both 32bit and 64bit system. No intentional behavior change. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86, hibernate: Rename temp_level4_pgt to temp_pgt  (Zhimin Gu)
As 32bit systems do not use 4-level page tables, rename temp_level4_pgt to temp_pgt so that it can be reused for both 32bit and 64bit hibernation. No functional change. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32, hibernate: Enable CONFIG_ARCH_HIBERNATION_HEADER on 32bit system  (Zhimin Gu)
Enable CONFIG_ARCH_HIBERNATION_HEADER for 32bit systems so that:

 1. arch_hibernation_header_save/restore() are invoked across hibernation on 32bit systems.
 2. The checksum handling as well as 'magic' number checking for 32bit systems are enabled.

Controlled by CONFIG_X86_64 in hibernate.c. Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86, hibernate: Extract the common code of 64/32 bit system  (Zhimin Gu)
Reduce the hibernation code duplication between x86-32 and x86-64 by extracting the common code into hibernate.c. Currently only pfn_is_nosave() is the activated common function in hibernate.c. No functional change. Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  x86-32/asm/power: Create stack frames in hibernate_asm_32.S  (Zhimin Gu)
swsusp_arch_suspend() is a callable non-leaf function which doesn't honor CONFIG_FRAME_POINTER, which can result in bad stack traces. Also it's not annotated as an ELF callable function, which can confuse tooling. Create a stack frame for it when CONFIG_FRAME_POINTER is enabled and give it a proper ELF function annotation. This patch also introduces the restore_registers() symbol and gives it an ELF function annotation, to prepare for the later register restore. Analogous changes were made for 64bit before in commit ef0f3ed5a4ac (x86/asm/power: Create stack frames in hibernate_asm_64.S) and commit 4ce827b4cc58 (x86/power/64: Fix hibernation return address corruption). Signed-off-by: Zhimin Gu <kookoo.gu@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2018-10-03  PM / hibernate: Check the success of generating md5 digest before hibernation  (Chen Yu)
Currently if get_e820_md5() fails, hibernation proceeds nevertheless. The error code should instead be propagated to the upper caller so that hibernation is aware of the result and terminates the process if md5 digest generation fails. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
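The shape of the change is plain error propagation; a hedged sketch (function and field names taken from the changelog and surrounding code, signatures approximate):

  static int hibernation_e820_save(void *buf)
  {
          /* Previously the result was ignored; now a failure to compute the
           * e820 md5 digest is passed up so hibernation can be aborted. */
          return get_e820_md5(e820_table_firmware, buf);
  }

  int arch_hibernation_header_save(void *addr, unsigned int max_size)
  {
          struct restore_data_record *rdr = addr;

          /* ... fill in the other header fields ... */
          return hibernation_e820_save(rdr->e820_digest);
  }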