2018-10-16  Merge branch 'x86/build' into locking/core, to pick up dependent patches and unify jump-label work  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16  locking/lockdep: Remove duplicated 'lock_class_ops' percpu array  (Waiman Long)
Remove the duplicated 'lock_class_ops' percpu array that is not used anywhere. Signed-off-by: Waiman Long <longman@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Fixes: 8ca2b56cd7da ("locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y") Link: http://lkml.kernel.org/r/1539380547-16726-1-git-send-email-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-10  x86/defconfig: Enable CONFIG_USB_XHCI_HCD=y  (Adam Borowski)
A spanking new machine I just got has all but one USB ports wired as 3.0. Booting defconfig resulted in no keyboard or mouse, which was pretty uncool. Let's enable that -- USB3 is ubiquitous rather than an oddity. As 'y' not 'm' -- recovering from initrd problems needs a keyboard. Also add it to the 32-bit defconfig. Signed-off-by: Adam Borowski <kilobyte@angband.pl> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-usb@vger.kernel.org Link: http://lkml.kernel.org/r/20181009062803.4332-1-kilobyte@angband.pl Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-09  futex: Replace spin_is_locked() with lockdep  (Lance Roy)
lockdep_assert_held() is better suited for checking locking requirements, since it won't get confused when the lock is held by some other task. This is also a step towards possibly removing spin_is_locked(). Signed-off-by: Lance Roy <ldr709@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "Paul E. McKenney" <paulmck@linux.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Darren Hart <dvhart@infradead.org> Link: https://lkml.kernel.org/r/20181003053902.6910-12-ldr709@gmail.com
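The shape of the change, sketched as a diff (illustrative, not the exact futex hunk): spin_is_locked() is true whenever *any* task holds the lock, so it can miss real locking bugs, while lockdep_assert_held() verifies that the *current* task holds it and compiles away entirely when lockdep is disabled.

	-	WARN_ON(!spin_is_locked(q->lock_ptr));
	+	lockdep_assert_held(q->lock_ptr);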
2018-10-09  locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y  (Waiman Long)
A sizable portion of the CPU cycles spent in __lock_acquire() is used up by the atomic increment of the class->ops stat counter. By taking it out of the lock_class structure and changing it to a per-cpu per-lock-class counter, we can reduce the amount of cacheline contention on the class structure when multiple CPUs are trying to acquire locks of the same class simultaneously. To limit the increase in memory consumption because of the percpu nature of that counter, it is now put back under the CONFIG_DEBUG_LOCKDEP config option, so the memory consumption increase will only occur if CONFIG_DEBUG_LOCKDEP is defined. The lock_class structure, however, is reduced in size by 16 bytes on 64-bit archs after ops removal and a minor restructuring of the fields. This patch also fixes a bug in the increment code, as the counter is of the 'unsigned long' type but atomic_inc() was used to increment it. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/d66681f3-8781-9793-1dcf-2436a284550b@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
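A minimal sketch of the per-cpu counter pattern (close to, but not guaranteed to be, the exact lockdep code):

	#ifdef CONFIG_DEBUG_LOCKDEP
	/* One counter per lock class, per CPU: increments never bounce a
	 * shared cacheline between CPUs the way atomic_inc() on a single
	 * class-wide counter did. */
	DECLARE_PER_CPU(unsigned long [MAX_LOCKDEP_KEYS], lock_class_ops);

	static inline void debug_class_ops_inc(struct lock_class *class)
	{
		int idx = class - lock_classes;	/* index of this class */

		__this_cpu_inc(lock_class_ops[idx]);
	}
	#else
	static inline void debug_class_ops_inc(struct lock_class *class) { }
	#endif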
2018-10-06  x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block - which is also a minor cleanup for the jump-label code. As a result the code size is slightly increased, but inlining decisions are better:

	    text    data     bss      dec      hex  filename
	18163528 10226300 2957312 31347140  1de51c4  ./vmlinux before
	18163608 10227348 2957312 31348268  1de562c  ./vmlinux after (+1128)

And functions such as intel_pstate_adjust_policy_max(), kvm_cpu_accept_dm_intr(), kvm_register_readl() are inlined. Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Kate Stewart <kstewart@linuxfoundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Philippe Ombredanne <pombredanne@nexb.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181005202718.229565-4-namit@vmware.com Link: https://lore.kernel.org/lkml/20181003213100.189959-11-namit@vmware.com/T/#u Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06  x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block - which is pretty pointless indirection in the static_cpu_has() case, but is worth it to improve overall inlining quality. The patch slightly increases the kernel size:

	    text    data     bss      dec      hex  filename
	18162879 10226256 2957312 31346447  1de4f0f  ./vmlinux before
	18163528 10226300 2957312 31347140  1de51c4  ./vmlinux after (+693)

And enables the inlining of functions such as free_ldt_pgtables(). Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181005202718.229565-3-namit@vmware.com Link: https://lore.kernel.org/lkml/20181003213100.189959-10-namit@vmware.com/T/#u Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06  x86/extable: Macrofy inline assembly code to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block - which is also a minor cleanup for the exception table code. Text size goes up a bit:

	    text    data     bss      dec      hex  filename
	18162555 10226288 2957312 31346155  1de4deb  ./vmlinux before
	18162879 10226256 2957312 31346447  1de4f0f  ./vmlinux after (+292)

But this allows the inlining of functions such as nested_vmx_exit_reflected(), set_segment_reg(), __copy_xstate_to_user() which is a net benefit. Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181005202718.229565-2-namit@vmware.com Link: https://lore.kernel.org/lkml/20181003213100.189959-9-namit@vmware.com/T/#u Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06  Merge branch 'core/core' into x86/build, to prevent conflicts  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-05  Merge branch 'x86/core' into x86/build, to avoid conflicts  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) In this patch we wrap the paravirt call section tricks in a macro, to hide it from GCC. The effect of the patch is a more aggressive inlining, which also causes a size increase of the kernel:

	    text    data     bss      dec      hex  filename
	18147336 10226688 2957312 31331336  1de1408  ./vmlinux before
	18162555 10226288 2957312 31346155  1de4deb  ./vmlinux after (+14819)

The number of static text symbols (non-inlined functions) goes down:

	Before: 40053
	After:  39942 (-111)

[ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Reviewed-by: Juergen Gross <jgross@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alok Kataria <akataria@vmware.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20181003213100.189959-8-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) This patch increases the kernel size:

	    text    data     bss      dec      hex  filename
	18146889 10225380 2957312 31329581  1de0d2d  ./vmlinux before
	18147336 10226688 2957312 31331336  1de1408  ./vmlinux after (+1755)

But enables more aggressive inlining (and probably better branch decisions). The number of static text symbols in vmlinux is much lower:

	Before: 40218
	After:  40053 (-165)

The assembly code gets harder to read due to the extra macro layer. [ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181003213100.189959-7-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block - i.e. to macrify the affected block. As a result GCC considers the inline assembly block as a single instruction. This patch handles the LOCK prefix, allowing more aggressive inlining:

	    text    data     bss      dec      hex  filename
	18140140 10225284 2957312 31322736  1ddf270  ./vmlinux before
	18146889 10225380 2957312 31329581  1de0d2d  ./vmlinux after (+6845)

This is the reduction in non-inlined functions:

	Before: 40286
	After:  40218 (-68)

Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181003213100.189959-6-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/refcount: Work around GCC inlining bug  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) This patch allows GCC to inline simple functions such as __get_seccomp_filter(). To no-one's surprise the result is that GCC performs more aggressive (read: correct) inlining decisions in these scenarios, which reduces the kernel size and presumably also speeds it up:

	    text    data     bss      dec      hex  filename
	18140970 10225412 2957312 31323694  1ddf62e  ./vmlinux before
	18140140 10225284 2957312 31322736  1ddf270  ./vmlinux after (-958)

16 fewer static text symbols:

	Before: 40302
	After:  40286 (-16)

these got inlined instead. Functions such as kref_get(), free_user(), fuse_file_get() now get inlined. Hurray! [ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <JBeulich@suse.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20181003213100.189959-5-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  x86/objtool: Use asm macros to work around GCC inlining bugs  (Nadav Amit)
As described in: 77b0bf55bc67: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs") GCC's inlining heuristics are broken with common asm() patterns used in kernel code, resulting in the effective disabling of inlining. In the case of objtool the resulting borkage can be significant, since all the annotations of objtool are discarded during linkage and never inlined, yet GCC bogusly considers most functions affected by objtool annotations as 'too large'. The workaround is to set an assembly macro and call it from the inline assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it isn't, but that's the best we can get.) This increases the kernel size slightly:

	    text    data     bss      dec      hex  filename
	18140829 10224724 2957312 31322865  1ddf2f1  ./vmlinux before
	18140970 10225412 2957312 31323694  1ddf62e  ./vmlinux after (+829)

The number of static text symbols (i.e. non-inlined functions) is reduced:

	Before: 40321
	After:  40302 (-19)

[ mingo: Rewrote the changelog. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Christopher Li <sparse@chrisli.org> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-sparse@vger.kernel.org Link: http://lkml.kernel.org/r/20181003213100.189959-4-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04  kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs  (Nadav Amit)
Using macros in inline assembly allows us to work around bugs in GCC's inlining decisions. Compile macros.S and use it to assemble all C files. Currently only x86 will use it. Background: The inlining pass of GCC doesn't include an assembler, so it's not aware of basic properties of the generated code, such as its size in bytes, or that there are such things as discontinuous blocks of code and data due to the newfangled linker feature called 'sections' ... Instead GCC uses a lazy and fragile heuristic: it does a linear count of certain syntactic and whitespace elements in inlined assembly block source code, such as a count of new-lines and semicolons (!), as a poor substitute for "code size and complexity". Unsurprisingly this heuristic falls over and breaks its neck with certain common types of kernel code that use inline assembly, such as the frequent practice of putting useful information into alternative sections. As a result of this fresh, 20+ year old GCC bug, GCC's inlining decisions are effectively disabled for inlined functions that make use of such asm() blocks, because GCC thinks those sections of code are "large" - when in reality they often result in just a very low number of machine instructions. This absolute lack of inlining prowess when GCC comes across such asm() blocks both increases generated kernel code size and causes performance overhead, which is particularly noticeable on paravirt kernels, which make frequent use of these inlining facilities in an attempt to stay out of the way when running on baremetal hardware. Instead of fixing the compiler we use a workaround: we set an assembly macro and call it from the inlined assembly block. As a result GCC considers the inline assembly block as a single instruction. (Which it often isn't but I digress.) This uglifies and bloats the source code - for example just the refcount related changes have this impact:

	Makefile                 |  9 +++++++--
	arch/x86/Makefile        |  7 +++++++
	arch/x86/kernel/macros.S |  7 +++++++
	scripts/Kbuild.include   |  4 +++-
	scripts/mod/Makefile     |  2 ++
	5 files changed, 26 insertions(+), 3 deletions(-)

Yay readability and maintainability, it's not like assembly code is hard to read and maintain ... We also hope that GCC will eventually get fixed, but we are not holding our breath for that. Yet we are optimistic, it might still happen, any decade now. [ mingo: Wrote new changelog describing the background. ] Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kbuild@vger.kernel.org Link: http://lkml.kernel.org/r/20181003213100.189959-3-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
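The mechanics of the workaround, sketched (macro name and section are illustrative, not an actual upstream hunk):

	# arch/x86/kernel/macros.S - assembled once, then made visible
	# to every C translation unit through the assembler:
	.macro ANNOTATE_THING			# illustrative name
		.pushsection .discard.thing
		.long 0
		.popsection
	.endm

	/* C side - before: a multi-line asm() body whose size GCC's
	 * newline/semicolon-counting heuristic grossly over-estimates: */
	asm volatile(".pushsection .discard.thing\n\t"
		     ".long 0\n\t"
		     ".popsection");

	/* after: a single macro invocation, which GCC sizes as roughly
	 * one instruction: */
	asm volatile("ANNOTATE_THING");

The kbuild changes above then wire things up so that macros.S is processed first and the resulting macros file is fed to the assembler for every C compilation, making the macro visible everywhere.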
2018-10-04  kbuild/arch/xtensa: Define LINKER_SCRIPT for the linker script  (Nadav Amit)
Define the LINKER_SCRIPT when building the linker script, as is done in other architectures. This is required, because upcoming Makefile changes would otherwise break things. Signed-off-by: Nadav Amit <namit@vmware.com> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Chris Zankel <chris@zankel.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-xtensa@linux-xtensa.org Link: http://lkml.kernel.org/r/20181003213100.189959-2-namit@vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
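For context, the generic linker-script rule already defines the symbol; an architecture with a custom rule needs to match it. The generic rule looked roughly like this at the time (quoted from memory from scripts/Makefile.build, so treat it as a sketch):

	cmd_cpp_lds_S = $(CPP) $(cpp_flags) -P -U$(ARCH) \
	                -D__ASSEMBLY__ -DLINKER_SCRIPT -o $@ $<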
2018-10-04  Merge branch 'linus' into x86/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-03  locking/lockdep: Add a faster path in __lock_release()  (Waiman Long)
When __lock_release() is called, the most likely unlock scenario is on the innermost lock in the chain. In this case, we can skip some of the checks and provide a faster path to completion. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1538511560-10090-4-git-send-email-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
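A hedged sketch of the fast path ('found_it' is an illustrative label; the surrounding names are real lockdep ones):

	unsigned int depth = curr->lockdep_depth;
	struct held_lock *hlock = curr->held_locks + depth - 1;

	if (match_held_lock(hlock, lock)) {
		/* Releasing the innermost (most recently acquired) lock:
		 * nothing is held above it, so the loop that re-validates
		 * and re-acquires every more-recent hold can be skipped. */
		goto found_it;
	}
	/* Otherwise: fall back to the full backwards search through
	 * held_locks[] and the reshuffling of the entries above it. */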
2018-10-03  locking/lockdep: Eliminate redundant IRQs check in __lock_acquire()  (Waiman Long)
The static __lock_acquire() function has only two callers:

	1) lock_acquire()
	2) reacquire_held_locks()

In lock_acquire(), raw_local_irq_save() is called beforehand, so IRQs must already be disabled and the check DEBUG_LOCKS_WARN_ON(!irqs_disabled()) is redundant on that path. Move the check into reacquire_held_locks() to eliminate the redundant code from the lock_acquire() path. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1538511560-10090-3-git-send-email-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
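The resulting shape, sketched (not the literal hunks):

	/* lock_acquire(): interrupts are disabled just before the call */
	raw_local_irq_save(flags);
	...
	__lock_acquire(...);	/* no irqs_disabled() re-check needed here */

	/* reacquire_held_locks(): the check now lives with the caller
	 * that cannot prove IRQs are off by construction: */
	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
		return 0;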
2018-10-03  locking/lockdep: Remove add_chain_cache_classes()  (Waiman Long)
The inline function add_chain_cache_classes() is defined, but has no caller. Just remove it. Signed-off-by: Waiman Long <longman@redhat.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: http://lkml.kernel.org/r/1538511560-10090-2-git-send-email-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02  Merge tag 'fbdev-v4.19-rc7' of https://github.com/bzolnier/linux  (Greg Kroah-Hartman)
Bartlomiej writes: "fbdev fixes for v4.19-rc7:

 - fix OMAPFB_MEMORY_READ ioctl to not leak kernel memory in omapfb driver (Tomi Valkeinen)
 - add missing prepare/unprepare clock operations in pxa168fb driver (Lubomir Rintel)
 - add nobgrt option in efifb driver to disable ACPI BGRT logo restore (Hans de Goede)
 - fix spelling mistake in fall-through annotation in stifb driver (Gustavo A. R. Silva)
 - fix URL for uvesafb repository in the documentation (Adam Jackson)"

* tag 'fbdev-v4.19-rc7' of https://github.com/bzolnier/linux:
  video/fbdev/stifb: Fix spelling mistake in fall-through annotation
  uvesafb: Fix URLs in the documentation
  efifb: BGRT: Add nobgrt option
  fbdev/omapfb: fix omapfb_memory_read infoleak
  pxa168fb: prepare the clock
2018-10-02  Merge tag 'mmc-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc  (Greg Kroah-Hartman)
Ulf writes: "MMC core:
 - Fixup conversion of debounce time to/from ms/us

MMC host:
 - sdhi: Fixup whitelisting for Gen3 types"

* tag 'mmc-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc:
  mmc: slot-gpio: Fix debounce time to use milliseconds again
  mmc: core: Fix debounce time to use microseconds
  mmc: sdhi: sys_dmac: check for all Gen3 types when whitelisting
2018-10-02  Documentation/lockstat: Fix trivial typo  (Andrew Murray)
Fix incorrect line number in example output Signed-off-by: Andrew Murray <andrew.murray@arm.com> Cc: Jiri Kosina <trivial@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-doc@vger.kernel.org Link: http://lkml.kernel.org/r/1538391663-54524-1-git-send-email-andrew.murray@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02  locking/memory-barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()  (Andrea Parri)
Amend the changes in commit: 1f03e8d2919270 ("locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()") ... by updating the documentation accordingly. Also remove some obsolete information related to the implementation. Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Cc: Akira Yokosawa <akiyks@gmail.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Daniel Lustig <dlustig@nvidia.com> Cc: David Howells <dhowells@redhat.com> Cc: Jade Alglave <j.alglave@ucl.ac.uk> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Luc Maranget <luc.maranget@inria.fr> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: linux-arch@vger.kernel.org Cc: parri.andrea@gmail.com Link: http://lkml.kernel.org/r/20180926182920.27644-5-paulmck@linux.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
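For reference, usage of the current API looks like this (a minimal sketch; VAL is the variable the macro binds to each freshly loaded value):

	/* Spin until *ptr becomes non-zero; the load that finally
	 * satisfies the condition has ACQUIRE ordering. */
	val = smp_cond_load_acquire(ptr, VAL != 0);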
2018-10-02  tools/memory-model: Add more LKMM limitations  (Paul E. McKenney)
This commit adds more detail about compiler optimizations and not-yet-modeled Linux-kernel APIs. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: akiyks@gmail.com Cc: boqun.feng@gmail.com Cc: dhowells@redhat.com Cc: j.alglave@ucl.ac.uk Cc: linux-arch@vger.kernel.org Cc: luc.maranget@inria.fr Cc: npiggin@gmail.com Cc: parri.andrea@gmail.com Cc: stern@rowland.harvard.edu Cc: will.deacon@arm.com Link: http://lkml.kernel.org/r/20180926182920.27644-4-paulmck@linux.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02  tools/memory-model: Fix a README typo  (SeongJae Park)
This commit fixes a duplicate-"the" typo in README. Signed-off-by: SeongJae Park <sj38.park@gmail.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: akiyks@gmail.com Cc: boqun.feng@gmail.com Cc: dhowells@redhat.com Cc: j.alglave@ucl.ac.uk Cc: linux-arch@vger.kernel.org Cc: luc.maranget@inria.fr Cc: npiggin@gmail.com Cc: parri.andrea@gmail.com Cc: will.deacon@arm.com Link: http://lkml.kernel.org/r/20180926182920.27644-3-paulmck@linux.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02  tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire  (Alan Stern)
More than one kernel developer has expressed the opinion that the LKMM should enforce ordering of writes by locking. In other words, given the following code:

	WRITE_ONCE(x, 1);
	spin_unlock(&s);
	spin_lock(&s);
	WRITE_ONCE(y, 1);

the stores to x and y should be propagated in order to all other CPUs, even though those other CPUs might not access the lock s. In terms of the memory model, this means expanding the cumul-fence relation.

Locks should also provide read-read (and read-write) ordering in a similar way. Given:

	READ_ONCE(x);
	spin_unlock(&s);
	spin_lock(&s);
	READ_ONCE(y);		// or WRITE_ONCE(y, 1);

the load of x should be executed before the load of (or store to) y. The LKMM already provides this ordering, but it provides it even in the case where the two accesses are separated by a release/acquire pair of fences rather than unlock/lock. This would prevent architectures from using weakly ordered implementations of release and acquire, which seems like an unnecessary restriction. The patch therefore removes the ordering requirement from the LKMM for that case.

There are several arguments both for and against this change. Let us refer to these enhanced ordering properties by saying that the LKMM would require locks to be RCtso (a bit of a misnomer, but analogous to RCpc and RCsc) and it would require ordinary acquire/release only to be RCpc. (Note: In the following, the phrase "all supported architectures" is meant not to include RISC-V. Although RISC-V is indeed supported by the kernel, the implementation is still somewhat in a state of flux and therefore statements about it would be premature.)

Pros:

The kernel already provides RCtso ordering for locks on all supported architectures, even though this is not stated explicitly anywhere. Therefore the LKMM should formalize it. In theory, guaranteeing RCtso ordering would reduce the need for additional barrier-like constructs meant to increase the ordering strength of locks.

Will Deacon and Peter Zijlstra are strongly in favor of formalizing the RCtso requirement. Linus Torvalds and Will would like to go even further, requiring locks to have RCsc behavior (ordering preceding writes against later reads), but they recognize that this would incur a noticeable performance degradation on the POWER architecture. Linus also points out that people have made the mistake, in the past, of assuming that locking has stronger ordering properties than is currently guaranteed, and this change would reduce the likelihood of such mistakes.

Not requiring ordinary acquire/release to be any stronger than RCpc may prove advantageous for future architectures, allowing them to implement smp_load_acquire() and smp_store_release() with more efficient machine instructions than would be possible if the operations had to be RCtso. Will and Linus approve this rationale, hypothetical though it is at the moment (it may end up affecting the RISC-V implementation). The same argument may or may not apply to RMW-acquire/release; see also the second Con entry below.

Linus feels that locks should be easy for people to use without worrying about memory consistency issues, since they are so pervasive in the kernel, whereas acquire/release is much more of an "experts only" tool. Requiring locks to be RCtso is a step in this direction.

Cons:

Andrea Parri and Luc Maranget think that locks should have the same ordering properties as ordinary acquire/release (indeed, Luc points out that the names "acquire" and "release" derive from the usage of locks). Andrea points out that having different ordering properties for different forms of acquires and releases is not only unnecessary, it would also be confusing and unmaintainable.

Locks are constructed from lower-level primitives, typically RMW-acquire (for locking) and ordinary release (for unlock). It is illogical to require stronger ordering properties from the high-level operations than from the low-level operations they comprise. Thus, this change would make

	while (cmpxchg_acquire(&s, 0, 1) != 0)
		cpu_relax();

an incorrect implementation of spin_lock(&s) as far as the LKMM is concerned. In theory this weakness can be ameliorated by changing the LKMM even further, requiring RMW-acquire/release also to be RCtso (which it already is on all supported architectures).

As far as I know, nobody has singled out any examples of code in the kernel that actually relies on locks being RCtso. (People mumble about RCU and the scheduler, but nobody has pointed to any actual code. If there are any real cases, their number is likely quite small.) If RCtso ordering is not needed, why require it?

A handful of locking constructs (qspinlocks, qrwlocks, and mcs_spinlocks) are built on top of smp_cond_load_acquire() instead of an RMW-acquire instruction. It currently provides only the ordinary acquire semantics, not the stronger ordering this patch would require of locks. In theory this could be ameliorated by requiring smp_cond_load_acquire() in combination with ordinary release also to be RCtso (which is currently true on all supported architectures).

On future weakly ordered architectures, people may be able to implement locks in a non-RCtso fashion with significant performance improvement. Meeting the RCtso requirement would necessarily add run-time overhead.

Overall, the technical aspects of these arguments seem relatively minor, and it appears mostly to boil down to a matter of opinion. Since the opinions of senior kernel maintainers such as Linus, Peter, and Will carry more weight than those of Luc and Andrea, this patch changes the model in accordance with the maintainers' wishes. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: akiyks@gmail.com Cc: boqun.feng@gmail.com Cc: dhowells@redhat.com Cc: j.alglave@ucl.ac.uk Cc: linux-arch@vger.kernel.org Cc: luc.maranget@inria.fr Cc: npiggin@gmail.com Cc: parri.andrea@gmail.com Link: http://lkml.kernel.org/r/20180926182920.27644-2-paulmck@linux.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
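The write-write guarantee can be captured in a litmus-style sketch (test name invented; under the strengthened RCtso-lock model the listed outcome is forbidden even though P1 never touches the lock):

	C MP+unlock-lock (hypothetical name)

	{}

	P0(int *x, int *y, spinlock_t *s)
	{
		spin_lock(s);
		WRITE_ONCE(*x, 1);
		spin_unlock(s);
		spin_lock(s);
		WRITE_ONCE(*y, 1);
		spin_unlock(s);
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);
		smp_rmb();
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)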
2018-10-02  tools/memory-model: Add litmus-test naming scheme  (Paul E. McKenney)
This commit documents the scheme used to generate the names for the litmus tests. [ paulmck: Apply feedback from Andrea Parri and Will Deacon. ] Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: akiyks@gmail.com Cc: boqun.feng@gmail.com Cc: dhowells@redhat.com Cc: j.alglave@ucl.ac.uk Cc: linux-arch@vger.kernel.org Cc: luc.maranget@inria.fr Cc: npiggin@gmail.com Cc: parri.andrea@gmail.com Cc: stern@rowland.harvard.edu Link: http://lkml.kernel.org/r/20180926182920.27644-1-paulmck@linux.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02  locking/spinlocks: Remove an instruction from spin and write locks  (Matthew Wilcox)
Both spin locks and write locks currently do:

	f0 0f b1 17	lock cmpxchg %edx,(%rdi)
	85 c0		test   %eax,%eax
	75 05		jne    [slowpath]

This 'test' insn is superfluous; the cmpxchg insn sets the Z flag appropriately. Peter pointed out that using atomic_try_cmpxchg_acquire() will let the compiler know this is true. Comparing before/after disassemblies show the only effect is to remove this insn. Take this opportunity to make the spin & write lock code resemble each other more closely and have similar likely() hints. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Matthew Wilcox <willy@infradead.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <longman@redhat.com> Link: http://lkml.kernel.org/r/20180820162639.GC25153@bombadil.infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
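The resulting fast path looks roughly like this (a sketch of the pattern, close to but not guaranteed to be the exact post-patch source):

	static __always_inline void queued_spin_lock(struct qspinlock *lock)
	{
		u32 val = 0;

		/* atomic_try_cmpxchg_acquire() returns a bool and updates
		 * 'val' on failure, so the compiler can branch on the
		 * cmpxchg's own flags instead of emitting a separate
		 * 'test' instruction. */
		if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val,
						      _Q_LOCKED_VAL)))
			return;

		queued_spin_lock_slowpath(lock, val);
	}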
2018-10-02  jump_label: Fix NULL dereference bug in __jump_label_mod_update()  (Ard Biesheuvel)
Commit 19483677684b ("jump_label: Annotate entries that operate on __init code earlier") refactored the code that manages runtime patching of jump labels in modules that are tied to static keys defined in other modules or in the core kernel. In the latter case, we may iterate over the static_key_mod linked list until we hit the entry for the core kernel, whose 'mod' field will be NULL, and attempt to dereference it to get at its 'state' member. So let's add a non-NULL check: this forces the 'init' argument of __jump_label_update() to false for static keys that are defined in the core kernel, which is appropriate given that __init annotated jump_label entries in the core kernel should no longer be active at this point (i.e., when loading modules). Fixes: 19483677684b ("jump_label: Annotate entries that operate on ...") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Jessica Yu <jeyu@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20181001081324.11553-1-ard.biesheuvel@linaro.org
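The essence of the fix, sketched (simplified; the local 'm' is added here for clarity):

	struct static_key_mod *mod;

	for (mod = static_key_mod(key); mod; mod = mod->next) {
		struct module *m = mod->mod;	/* NULL for the core kernel */

		/* Only module-owned entries can still be in their init
		 * phase; core-kernel __init entries are long gone by the
		 * time modules load, so 'init' must be false for m == NULL: */
		bool init = m && m->state == MODULE_STATE_COMING;
		...
	}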
2018-10-02  s390/vmlinux.lds: Move JUMP_TABLE_DATA into output section  (Ard Biesheuvel)
Commit e872267b8bcbb179 ("jump_table: move entries into ro_after_init region") moved the __jump_table input section into the __ro_after_init output section, but inadvertently put the macro in the wrong place in the s390 linker script. Let's fix that. Fixes: e872267b8bcbb179 ("jump_table: move entries into ro_after_init region") Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Tested-by: Guenter Roeck <linux@roeck-us.net> Cc: linux-s390@vger.kernel.org Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Jessica Yu <jeyu@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20180930164950.3841-1-ard.biesheuvel@linaro.org
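The intended placement, sketched (the macro belongs inside the __ro_after_init output section, not after its closing brace):

	.data..ro_after_init : {
		*(.data..ro_after_init)
		JUMP_TABLE_DATA
	}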
2018-10-01  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Greg Kroah-Hartman)
Will writes: "Late arm64 fixes
 - Fix handling of young contiguous ptes for hugetlb mappings
 - Fix livelock when taking access faults on contiguous hugetlb mappings
 - Tighten up register accesses via KVM SET_ONE_REG ioctl()s"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: KVM: Sanitize PSTATE.M when being set from userspace
  arm64: KVM: Tighten guest core register access from userspace
  arm64: hugetlb: Avoid unnecessary clearing in huge_ptep_set_access_flags
  arm64: hugetlb: Fix handling of young ptes
2018-10-01  Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Greg Kroah-Hartman)
Olof writes: "ARM: SoC fixes

A handful of fixes that have been coming in the last couple of weeks:
 - Freescale fixes for on-chip accelerators
 - A DT fix for stm32 to avoid fallback to non-DMA SPI mode
 - Fixes for badly specified interrupts on BCM63xx SoCs
 - Allwinner A64 HDMI was incorrectly specified as fully compatible with R40
 - Drive strength fix for SAMA5D2 NAND pins on one board"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: dts: stm32: update SPI6 dmas property on stm32mp157c
  soc: fsl: qe: Fix copy/paste bug in ucc_get_tdm_sync_shift()
  soc: fsl: qbman: qman: avoid allocating from non existing gen_pool
  ARM: dts: BCM63xx: Fix incorrect interrupt specifiers
  MAINTAINERS: update the Annapurna Labs maintainer email
  ARM: dts: sun8i: drop A64 HDMI PHY fallback compatible from R40 DT
  ARM: dts: at91: sama5d2_ptc_ek: fix nand pinctrl
2018-10-01  Merge tag 'pstore-v4.19-rc7' of https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux  (Greg Kroah-Hartman)
Kees writes: "Pstore fixes for v4.19-rc7
 - Fix failure-path memory leak in ramoops_init (nixiaoming)"

* tag 'pstore-v4.19-rc7' of https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  pstore/ram: Fix failure-path memory leak in ramoops_init
2018-10-01  arm64: KVM: Sanitize PSTATE.M when being set from userspace  (Marc Zyngier)
Not all execution modes are valid for a guest, and some of them depend on what the HW actually supports. Let's verify that what userspace provides is compatible with both the VM settings and the HW capabilities. Cc: <stable@vger.kernel.org> Fixes: 0d854a60b1d7 ("arm64: KVM: enable initialization of a 32bit vcpu") Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-10-01  arm64: KVM: Tighten guest core register access from userspace  (Dave Martin)
We currently allow userspace to access the core register file in about any possible way, including straddling multiple registers and doing unaligned accesses. This is not the expected use of the ABI, and nobody is actually using it that way. Let's tighten it by explicitly checking the size and alignment for each field of the register file. Cc: <stable@vger.kernel.org> Fixes: 2f4a07c5f9fe ("arm64: KVM: guest one-reg interface") Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> [maz: rewrote Dave's initial patch to be more easily backported] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
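The kind of check being added, sketched with illustrative helpers (core_reg_offset_from_id() and KVM_REG_SIZE() are real; the field-size lookup here is a paraphrase, not the upstream code):

	u64 off = core_reg_offset_from_id(reg->id);

	/* Accept only a whole field of the core register file, at its
	 * natural offset: no straddling adjacent registers and no
	 * misaligned partial accesses. */
	if (KVM_REG_SIZE(reg->id) != core_field_size(off) ||	/* illustrative */
	    !core_field_aligned(off))				/* illustrative */
		return -ENOENT;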
2018-10-01  x86/build: Remove unused CONFIG_AS_CRC32  (Masahiro Yamada)
CONFIG_AS_CRC32 is not used anywhere. Its last user was removed by 0cb6c969ed9d ("net, lib: kill arch_fast_hash library bits") Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lkml.kernel.org/r/1538389443-28514-1-git-send-email-yamada.masahiro@socionext.com
2018-09-30  pstore/ram: Fix failure-path memory leak in ramoops_init  (Kees Cook)
As reported by nixiaoming, with some minor clarifications:

 1) memory leak in ramoops_register_dummy(): dummy_data = kzalloc(sizeof(*dummy_data), GFP_KERNEL); but no kfree() if platform_device_register_data() fails.

 2) memory leak in ramoops_init(): Missing platform_device_unregister(dummy) and kfree(dummy_data) if platform_driver_register(&ramoops_driver) fails.

I've clarified the purpose of ramoops_register_dummy(), and added a common cleanup routine for all three failure paths to call. Reported-by: nixiaoming <nixiaoming@huawei.com> Cc: stable@vger.kernel.org Cc: Anton Vorontsov <anton@enomsg.org> Cc: Colin Cross <ccross@android.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Joel Fernandes <joelaf@google.com> Cc: Geliang Tang <geliangtang@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org>
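The common cleanup helper implied by the description would look roughly like this (a sketch; both calls tolerate NULL/never-registered state, so every failure path can call it unconditionally before returning the error):

	static void ramoops_unregister_dummy(void)
	{
		platform_device_unregister(dummy);
		dummy = NULL;

		kfree(dummy_data);
		dummy_data = NULL;
	}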
2018-09-30  Linux 4.19-rc6  (Greg Kroah-Hartman)
2018-09-30  Merge tag 'auxdisplay-for-greg-v4.19-rc6' of https://github.com/ojeda/linux  (Greg Kroah-Hartman)
Miguel writes: "A trivial fix for auxdisplay - MAINTAINERS reference fix for moved file Reported by Joe Perches" * tag 'auxdisplay-for-greg-v4.19-rc6' of https://github.com/ojeda/linux: MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c
2018-09-30  Merge tag 'libnvdimm-fixes2-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm  (Greg Kroah-Hartman)
Dan writes: "filesystem-dax for 4.19-rc6

Fix a deadlock in the new-for-4.19 dax_lock_mapping_entry() routine."

* tag 'libnvdimm-fixes2-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  dax: Fix deadlock in dax_lock_mapping_entry()
2018-09-30  MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c  (Miguel Ojeda)
Commit 51c1e9b554c9 ("auxdisplay: Move panel.c to drivers/auxdisplay folder") moved the file, but the MAINTAINERS reference was not updated. Link: https://lore.kernel.org/lkml/20180928220131.31075-1-joe@perches.com/ Reported-by: Joe Perches <joe@perches.com> Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2018-09-29  Merge tag 'for-linus-20180929' of git://git.kernel.dk/linux-block  (Greg Kroah-Hartman)
Jens writes: "Block fixes for 4.19-rc6

A set of fixes that should go into this release. This pull request contains:
 - A fix (hopefully) for the persistent grants for xen-blkfront. A previous fix from this series wasn't complete, hence reverted, and this one should hopefully be it. (Boris Ostrovsky)
 - Fix for an elevator drain warning with SMR devices, which is triggered when you switch schedulers (Damien)
 - bcache deadlock fix (Guoju Fang)
 - Fix for the block unplug tracepoint, which has had the timer/explicit flag reverted since 4.11 (Ilya)
 - Fix a regression in this series where the blk-mq timeout hook is invoked with the RCU read lock held, hence preventing it from blocking (Keith)
 - NVMe pull from Christoph, with a single multipath fix (Susobhan Dey)"

* tag 'for-linus-20180929' of git://git.kernel.dk/linux-block:
  xen/blkfront: correct purging of persistent grants
  Revert "xen/blkfront: When purging persistent grants, keep them in the buffer"
  blk-mq: I/O and timer unplugs are inverted in blktrace
  bcache: add separate workqueue for journal_write to avoid deadlock
  xen/blkfront: When purging persistent grants, keep them in the buffer
  block: fix deadline elevator drain for zoned block devices
  blk-mq: Allow blocking queue tag iter callbacks
  nvme: properly propagate errors in nvme_mpath_init
2018-09-29  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Greg Kroah-Hartman)
Thomas writes: "A single fix for the AMD memory encryption boot code so it does not read random garbage instead of the cached encryption bit when a kexec kernel is allocated above the 32-bit address limit."

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot: Fix kexec booting failure in the SEV bit detection code
2018-09-29  Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Greg Kroah-Hartman)
Thomas writes: "Three small fixes for clocksource drivers:
 - Proper error handling in the Atmel PIT driver
 - Add CLOCK_SOURCE_SUSPEND_NONSTOP for TI SoCs so suspend works again
 - Fix the next event function for Facebook Backpack-CMM BMC chips so usleep(100) doesn't sleep several milliseconds"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource/drivers/timer-atmel-pit: Properly handle error cases
  clocksource/drivers/fttmr010: Fix set_next_event handler
  clocksource/drivers/ti-32k: Add CLOCK_SOURCE_SUSPEND_NONSTOP flag for non-am43 SoCs
2018-09-29  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Greg Kroah-Hartman)
Thomas writes: "A single fix for a missing sanity check when an attempt is made to read a pinned event on the wrong CPU due to a legit event scheduling failure."

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/core: Add sanity check to deal with pinned event failure
2018-09-29  Merge tag 'pm-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm  (Greg Kroah-Hartman)
Rafael writes: "Power management fix for 4.19-rc6

Fix incorrect __init and __exit annotations in the Qualcomm Kryo cpufreq driver (Nathan Chancellor)."

* tag 'pm-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpufreq: qcom-kryo: Fix section annotations
2018-09-29  cpufreq: qcom-kryo: Fix section annotations  (Nathan Chancellor)
There is currently a warning when building the Kryo cpufreq driver into the kernel image:

	WARNING: vmlinux.o(.text+0x8aa424): Section mismatch in reference
	from the function qcom_cpufreq_kryo_probe() to the function
	.init.text:qcom_cpufreq_kryo_get_msm_id()
	The function qcom_cpufreq_kryo_probe() references the function
	__init qcom_cpufreq_kryo_get_msm_id(). This is often because
	qcom_cpufreq_kryo_probe lacks a __init annotation or the
	annotation of qcom_cpufreq_kryo_get_msm_id is wrong.

Remove the '__init' annotation from qcom_cpufreq_kryo_get_msm_id so that there is no more mismatch warning. Additionally, Nick noticed that the remove function was marked as '__init' when it should really be marked as '__exit'. Fixes: 46e2856b8e18 (cpufreq: Add Kryo CPU scaling driver) Fixes: 5ad7346b4ae2 (cpufreq: kryo: Add module remove and exit) Reported-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: 4.18+ <stable@vger.kernel.org> # 4.18+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
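The general rule, sketched with illustrative names:

	/* __init code is discarded after boot. Anything callable from a
	 * regular (non-__init) path, such as a probe routine, must not
	 * be __init: */
	static enum foo_version foo_get_id(void)	/* was: __init */
	{
		...
	}

	/* Teardown callbacks run at module exit, so they are __exit,
	 * not __init: */
	static void __exit foo_driver_exit(void)	/* was: __init */
	{
		...
	}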
2018-09-29  Merge tag 'dma-mapping-4.19-3' of git://git.infradead.org/users/hch/dma-mapping  (Greg Kroah-Hartman)
Christoph writes: "dma mapping fix for 4.19-rc6 fix a missing Kconfig symbol for commits introduced in 4.19-rc" * tag 'dma-mapping-4.19-3' of git://git.infradead.org/users/hch/dma-mapping: dma-mapping: add the missing ARCH_HAS_SYNC_DMA_FOR_CPU_ALL declaration
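The missing declaration amounts to a one-line Kconfig stub that architectures can then select (a sketch of the shape of the fix):

	config ARCH_HAS_SYNC_DMA_FOR_CPU_ALL
		bool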