2020-07-24  dt-bindings: arm: amazon: add missing alpine-v2 DT binding  (Hanna Hawa)
Amazon Annapurna Labs Alpine family includes: Alpine-v1, Alpine-v2. This patch adds the missing DT binding of Alpine-v2 in amazon,al.yaml. Link: https://lore.kernel.org/r/20200724132654.16549-5-hhhawa@amazon.com Signed-off-by: Hanna Hawa <hhhawa@amazon.com> Reviewed-by: Rob Herring <robh@kernel.org> Acked-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-07-24  dt-bindings: arm: amazon: update maintainers of amazon,al DT bindings  (Hanna Hawa)
Update maintainers of amazon,al DT bindings. Link: https://lore.kernel.org/r/20200724132654.16549-4-hhhawa@amazon.com Signed-off-by: Hanna Hawa <hhhawa@amazon.com> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-07-24  arm64: dts: amazon: rename al folder to be amazon  (Hanna Hawa)
In preparation for adding the device tree binding for Amazon's Annapurna Labs Alpine v3 support, rename the al device tree folder to amazon. Link: https://lore.kernel.org/r/20200724132654.16549-3-hhhawa@amazon.com Signed-off-by: Hanna Hawa <hhhawa@amazon.com> Acked-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-07-24  dt-bindings: arm: amazon: rename al,alpine DT binding to amazon,al  (Hanna Hawa)
In preparation for adding the device tree binding for Amazon's Annapurna Labs Alpine v3 support, rename the al,alpine DT binding to amazon,al. Link: https://lore.kernel.org/r/20200724132654.16549-2-hhhawa@amazon.com Signed-off-by: Hanna Hawa <hhhawa@amazon.com> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2020-07-24  uprobes: Change handle_swbp() to send SIGTRAP with si_code=SI_KERNEL, to fix GDB regression  (Oleg Nesterov)
If a tracee is uprobed and it hits int3 inserted by debugger, handle_swbp() does send_sig(SIGTRAP, current, 0) which means si_code == SI_USER. This used to work when this code was written, but then GDB started to validate si_code and now it simply can't use breakpoints if the tracee has an active uprobe:

    # cat test.c
    void unused_func(void)
    {
    }
    int main(void)
    {
        return 0;
    }

    # gcc -g test.c -o test
    # perf probe -x ./test -a unused_func
    # perf record -e probe_test:unused_func gdb ./test -ex run
    GNU gdb (GDB) 10.0.50.20200714-git
    ...
    Program received signal SIGTRAP, Trace/breakpoint trap.
    0x00007ffff7ddf909 in dl_main () from /lib64/ld-linux-x86-64.so.2
    (gdb)

The tracee hits the internal breakpoint inserted by GDB to monitor shared library events but GDB misinterprets this SIGTRAP and reports a signal.

Change handle_swbp() to use force_sig(SIGTRAP); this matches do_int3_user() and fixes the problem.

This is the minimal fix for -stable. arch/x86/kernel/uprobes.c is equally wrong; it should use send_sigtrap(TRAP_TRACE) instead of send_sig(SIGTRAP), but this doesn't confuse GDB and needs another x86-specific patch.

Reported-by: Aaron Merey <amerey@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20200723154420.GA32043@redhat.com
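For illustration, a minimal sketch of the change described above in handle_swbp() (kernel/events/uprobes.c); this paraphrases the changelog and is not a verbatim diff:

    /* old: delivered with si_code == SI_USER, which modern GDB rejects */
    /* send_sig(SIGTRAP, current, 0); */

    /* new: delivered with si_code == SI_KERNEL, matching do_int3_user() */
    force_sig(SIGTRAP);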
2020-07-24  x86/kvm: Use generic xfer to guest work function  (Thomas Gleixner)
Use the generic infrastructure to check for and handle pending work before transitioning into guest mode. This now handles TIF_NOTIFY_RESUME as well which was ignored so far. Handling it is important as this covers task work and task work will be used to offload the heavy lifting of POSIX CPU timers to thread context. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.979724969@linutronix.de
2020-07-24  x86/entry: Cleanup idtentry_enter/exit  (Thomas Gleixner)
Remove the temporary defines and fixup all references. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.855839271@linutronix.de
2020-07-24  x86/entry: Use generic interrupt entry/exit code  (Thomas Gleixner)
Replace the x86 code with the generic variant. Use temporary defines for idtentry_* which will be cleaned up in the next step. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.711492752@linutronix.de
2020-07-24  x86/entry: Cleanup idtentry_entry/exit_user  (Thomas Gleixner)
Cleanup the temporary defines and use irqentry_ instead of idtentry_. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.602603691@linutronix.de
2020-07-24  x86/entry: Use generic syscall exit functionality  (Thomas Gleixner)
Replace the x86 variant with the generic version. Provide the relevant architecture specific helper functions and defines. Use a temporary define for idtentry_exit_user which will be cleaned up separately. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.494648601@linutronix.de
2020-07-24  x86/entry: Use generic syscall entry function  (Thomas Gleixner)
Replace the syscall entry work handling with the generic version. Provide the necessary helper inlines to handle the real architecture specific parts, e.g. ptrace. Use a temporary define for idtentry_enter_user which will be cleaned up separately. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.376213694@linutronix.de
2020-07-24  x86/ptrace: Provide pt_regs helper for entry/exit  (Thomas Gleixner)
As a preparatory step for moving the syscall and interrupt entry/exit handling into generic code, provide a pt_regs helper which retrieves the interrupt state from pt_regs. This is required to check whether interrupts are reenabled by return from interrupt/exception. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220520.258511584@linutronix.de
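A minimal sketch of what such a pt_regs helper can look like on x86; the helper name follows the entry-code usage described here, and the body is the standard EFLAGS.IF test rather than a verbatim copy of the patch:

    static __always_inline bool regs_irqs_disabled(struct pt_regs *regs)
    {
        /* interrupts are re-enabled on return only if EFLAGS.IF is set in the saved regs */
        return !(regs->flags & X86_EFLAGS_IF);
    }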
2020-07-24  x86/entry: Move user return notifier out of loop  (Thomas Gleixner)
Guests and user space share certain MSRs. KVM sets these MSRs to guest values once and does not set them back to user space values on every VM exit to spare the costly MSR operations. User return notifiers ensure that these MSRs are set back to the correct values before returning to user space in exit_to_usermode_loop(). There is no reason to evaluate the TIF flag indicating that user return notifiers need to be invoked in the loop. The important point is that they are invoked before returning to user space. Move the invocation out of the loop into the section which does the last preparatory steps before returning to user space. That section is not preemptible and runs with interrupts disabled until the actual return. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.159112003@linutronix.de
2020-07-24  x86/entry: Consolidate 32/64 bit syscall entry  (Thomas Gleixner)
64bit and 32bit entry code have the same open coded syscall entry handling after the bitwidth specific bits. Move it to a helper function and share the code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200722220520.051234096@linutronix.de
2020-07-24  x86/entry: Consolidate check_user_regs()  (Thomas Gleixner)
The user register sanity check is sprinkled all over the place. Move it into enter_from_user_mode(). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220519.943016204@linutronix.de
2020-07-24  Merge branch 'core/entry' into x86/entry  (Thomas Gleixner)
Pick up generic entry code to migrate x86 over.
2020-07-24  entry: Provide infrastructure for work before transitioning to guest mode  (Thomas Gleixner)
Entering a guest is similar to exiting to user space. Pending work like handling signals, rescheduling, task work etc. needs to be handled before that.

Provide generic infrastructure to avoid duplication of the same handling code all over the place.

The transfer to guest mode handling is different from the exit to usermode handling, e.g. vs. rseq and live patching, so a separate function is used.

The initial list of work items handled is: TIF_SIGPENDING, TIF_NEED_RESCHED, TIF_NOTIFY_RESUME. Architecture specific TIF flags can be added via defines in the architecture specific include files.

The calling convention is also different from the syscall/interrupt entry functions as KVM invokes this from the outer vcpu_run() loop with interrupts and preemption enabled. To prevent missing a pending work item it invokes a check for pending TIF work from interrupt disabled code right before transitioning to guest mode. The lockdep, RCU and tracing state handling is also done directly around the switch to and from guest mode.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200722220519.833296398@linutronix.de
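For illustration, a hedged sketch of the kind of work loop such infrastructure provides, loosely modelled on the generic entry code; the function name and the exact set of handled TIF flags below are illustrative, not a verbatim copy:

    /* Sketch only: drain pending work before entering guest mode. */
    static int handle_xfer_to_guest_mode_work(unsigned long ti_work)
    {
        do {
            if (ti_work & _TIF_SIGPENDING)
                return -EINTR;              /* let KVM bail out to user space */

            if (ti_work & _TIF_NEED_RESCHED)
                schedule();

            if (ti_work & _TIF_NOTIFY_RESUME) {
                clear_thread_flag(TIF_NOTIFY_RESUME);
                tracehook_notify_resume(NULL);
            }

            ti_work = READ_ONCE(current_thread_info()->flags);
        } while (ti_work & (_TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_NOTIFY_RESUME));

        return 0;
    }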
2020-07-24  entry: Provide generic interrupt entry/exit code  (Thomas Gleixner)
Like the syscall entry/exit code, interrupt/exception entry after the real low level ASM bits should not be different across architectures. Provide a generic version based on the x86 code.

irqentry_enter() is called after the low level entry code and irqentry_exit() must be invoked right before returning to the low level code which just contains the actual return logic. The code before irqentry_enter() and after irqentry_exit() must not be instrumented. Code after irqentry_enter() and before irqentry_exit() can be instrumented.

irqentry_enter() invokes irqentry_enter_from_user_mode() if the interrupt/exception came from user mode. If it entered from kernel mode it handles the kernel mode variant of establishing state for lockdep, RCU and tracing depending on the kernel context it interrupted (idle, non-idle).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20200722220519.723703209@linutronix.de
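A hedged sketch of how an architecture is expected to pair these helpers; the handler name is hypothetical, only the irqentry_enter()/irqentry_exit() bracketing follows the changelog:

    noinstr void arch_handle_some_irq(struct pt_regs *regs)
    {
        irqentry_state_t state = irqentry_enter(regs);  /* establish lockdep/RCU/tracing state */

        instrumentation_begin();
        /* instrumentable interrupt handling goes here */
        instrumentation_end();

        irqentry_exit(regs, state);                     /* undo state, run exit-to-user work */
    }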
2020-07-24  entry: Provide generic syscall exit function  (Thomas Gleixner)
Like syscall entry all architectures have similar and pointlessly different code to handle pending work before returning from a syscall to user space.

1) One-time syscall exit work:
   - rseq syscall exit
   - audit
   - syscall tracing
   - tracehook (single stepping)

2) Preparatory work
   - Exit to user mode loop (common TIF handling).
   - Architecture specific one time work arch_exit_to_user_mode_prepare()
   - Address limit and lockdep checks

3) Final transition (lockdep, tracing, context tracking, RCU). Invokes arch_exit_to_user_mode() to handle e.g. speculation mitigations

Provide a generic version based on the x86 code which has all the RCU and instrumentation protections right.

Provide a variant for interrupt return to user mode as well which shares the above #2 and #3 work items.

After syscall_exit_to_user_mode() and irqentry_exit_to_user_mode() the architecture code just has to return to user space. The code after returning from these functions must not be instrumented.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20200722220519.613977173@linutronix.de
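As a hedged illustration, the architecture-side hooks named in the list above look roughly like this; the hook names come from the changelog, the bodies are purely illustrative:

    /* Per-arch one-time exit work, called from the generic exit-to-user loop. */
    static __always_inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
                                                               unsigned long ti_work)
    {
        /* e.g. restore user FPU state, fire user return notifiers, ... */
    }

    /* Last arch-specific step before the return instruction, e.g. speculation mitigations. */
    static __always_inline void arch_exit_to_user_mode(void)
    {
    }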
2020-07-24  entry: Provide generic syscall entry functionality  (Thomas Gleixner)
On syscall entry certain work needs to be done:

   - Establish state (lockdep, context tracking, tracing)
   - Conditional work (ptrace, seccomp, audit...)

This code is needlessly duplicated and different in all architectures. Provide a generic version based on the x86 implementation which has all the RCU and instrumentation bits right.

As interrupt/exception entry from user space needs parts of the same functionality, provide a function for this as well.

syscall_enter_from_user_mode() and irqentry_enter_from_user_mode() must be called right after the low level ASM entry. The calling code must be non-instrumentable. After the functions return, state is correct and the subsequent functions can be instrumented.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20200722220519.513463269@linutronix.de
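For illustration, a hedged sketch of an architecture syscall path built on the generic helper, loosely modelled on the x86 do_syscall_64() shape; details such as the nospec index clamp are omitted and the body is not a verbatim copy:

    __visible noinstr void do_syscall_64(unsigned long nr, struct pt_regs *regs)
    {
        /* establishes state and runs ptrace/seccomp/audit work; caller is non-instrumentable */
        nr = syscall_enter_from_user_mode(regs, nr);

        instrumentation_begin();
        if (nr < NR_syscalls)
            regs->ax = sys_call_table[nr](regs);
        instrumentation_end();

        syscall_exit_to_user_mode(regs);
    }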
2020-07-24  seccomp: Provide stub for __secure_computing()  (Thomas Gleixner)
To avoid #ifdeffery in the upcoming generic syscall entry code, provide a stub for __secure_computing(), which is preferred over secure_computing() because the TIF flag has already been evaluated. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200722220519.404974280@linutronix.de
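A hedged sketch of what such a stub looks like for the CONFIG_SECCOMP=n case; the placement and exact guards are illustrative:

    #ifndef CONFIG_SECCOMP
    static inline int __secure_computing(const struct seccomp_data *sd)
    {
        return 0;   /* no seccomp configured: nothing to check, allow the syscall */
    }
    #endif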
2020-07-24  x86/defconfigs: Remove CONFIG_CRYPTO_AES_586 from i386_defconfig  (Sedat Dilek)
Initially CONFIG_CRYPTO_AES_586=y was added to the i386_defconfig file in: c1b362e3b4d3: ("x86: update defconfigs") The code and Kconfig for CONFIG_CRYPTO_AES_586 was removed in: 1d2c3279311e: ("crypto: x86/aes - drop scalar assembler implementations") Remove the leftover from the i386_defconfig file as well. Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20200723171119.9881-1-sedat.dilek@gmail.com
2020-07-24  sched: Warn if garbage is passed to default_wake_function()  (Chris Wilson)
Since the default_wake_function() passes its flags onto try_to_wake_up(), warn if those flags collide with internal values. Given that the supplied flags are garbage, no repair can be done but at least alert the user to the damage they are causing. In the belief that these errors should be picked up during testing, the warning is only compiled in under CONFIG_SCHED_DEBUG. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: https://lore.kernel.org/r/20200723201042.18861-1-chris@chris-wilson.co.uk
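For illustration, a hedged sketch of the described warning in default_wake_function(); the internal flag mask used here (WF_SYNC) is an assumption about which wake flags are considered internal:

    int default_wake_function(wait_queue_entry_t *curr, unsigned mode,
                              int wake_flags, void *key)
    {
        /* only checked under CONFIG_SCHED_DEBUG: supplied flags must not collide with WF_* */
        WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
        return try_to_wake_up(curr->private, mode, wake_flags);
    }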
2020-07-24  arm64/vdso: Add time namespace page  (Andrei Vagin)
Allocate the time namespace page among VVAR pages. Provide __arch_get_timens_vdso_data() helper for VDSO code to get the code-relative position of VVARs on that special page.

If a task belongs to a time namespace then the VVAR page which contains the system wide VDSO data is replaced with a namespace specific page which has the same layout as the VVAR page. That page has vdso_data->seq set to 1 to enforce the slow path and vdso_data->clock_mode set to VCLOCK_TIMENS to enforce the time namespace handling path.

The extra check in the case that vdso_data->seq is odd, e.g. a concurrent update of the VDSO data is in progress, is not really affecting regular tasks which are not part of a time namespace as the task is spin waiting for the update to finish and vdso_data->seq to become even again.

If a time namespace task hits that code path, it invokes the corresponding time getter function which retrieves the real VVAR page, reads host time and then adds the offset for the requested clock which is stored in the special VVAR page.

The time-namespace page isn't allocated on !CONFIG_TIME_NAMESPACE, but the vma is the same size, which simplifies criu/vdso migration between different kernel configs.

Signed-off-by: Andrei Vagin <avagin@gmail.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Dmitry Safonov <dima@arista.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200624083321.144975-4-avagin@gmail.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: Zap vvar pages when switching to a time namespace  (Andrei Vagin)
The order of vvar pages depends on whether a task belongs to the root time namespace or not. In the root time namespace, a task doesn't have a per-namespace page. In a non-root namespace, the VVAR page which contains the system-wide VDSO data is replaced with a namespace specific page that contains clock offsets. Whenever a task changes its namespace, the VVAR page tables are cleared and then they will be re-faulted with a corresponding layout. A task can switch its time namespace only if its ->mm isn't shared with another task. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Reviewed-by: Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/20200624083321.144975-3-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  arm64/vdso: use the fault callback to map vvar pages  (Andrei Vagin)
Currently the vdso has no awareness of time namespaces, which may apply distinct offsets to processes in different namespaces. To handle this within the vdso, we'll need to expose a per-namespace data page. As a preparatory step, this patch separates the vdso data page from the code pages, and has it faulted in via its own fault callback. Subsequent patches will extend this to support distinct pages per time namespace. The vvar vma has to be installed with the VM_PFNMAP flag to handle faults via its vma fault callback. Signed-off-by: Andrei Vagin <avagin@gmail.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-2-avagin@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
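For illustration, a hedged sketch of mapping the vvar data page on demand from a special-mapping fault callback; the function and symbol names here are illustrative rather than a verbatim copy of the arm64 code:

    static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
                                 struct vm_area_struct *vma, struct vm_fault *vmf)
    {
        /* back the VM_PFNMAP vvar vma with the vdso data page when it is first touched */
        return vmf_insert_pfn(vma, vmf->address, sym_to_pfn(vdso_data));
    }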
2020-07-24  compiler.h: Move instrumentation_begin()/end() to new <linux/instrumentation.h> header  (Ingo Molnar)
Linus pointed out that compiler.h - which is a key header that gets included in every single one of the 28,000+ kernel files during a kernel build - was bloated in:

  655389666643: ("vmlinux.lds.h: Create section for protection against instrumentation")

Linus noted:

 > I have pulled this, but do we really want to add this to a header file
 > that is _so_ core that it gets included for basically every single
 > file built?
 >
 > I don't even see those instrumentation_begin/end() things used
 > anywhere right now.
 >
 > It seems excessive. That 53 lines is maybe not a lot, but it pushed
 > that header file to over 12kB, and while it's mostly comments, it's
 > extra IO and parsing basically for _every_ single file compiled in
 > the kernel.
 >
 > For what appears to be absolutely zero upside right now, and I really
 > don't see why this should be in such a core header file!

Move these primitives into a new header: <linux/instrumentation.h>, and include that header in the headers that make use of it.

Unfortunately one of these headers is asm-generic/bug.h, which does get included in a lot of places, similarly to compiler.h. So the de-bloating effect isn't as good as we'd like it to be - but at least the interfaces are defined separately.

No change to functionality intended.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200604071921.GA1361070@gmail.com
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
2020-07-24  recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64  (Gregory Herrero)
Currently, if a section has a relocation to the '_mcount' symbol, a new __mcount_loc entry will be added whatever the relocation type is. This is problematic when a relocation to '_mcount' is in the middle of a section and is not a call for ftrace use.

Such a relocation could be generated with the below code for example:

    bool is_mcount(unsigned long addr)
    {
        return (target == (unsigned long) &_mcount);
    }

With this snippet of code, ftrace will try to patch the mcount location generated by this code on module load and fail with:

    Call trace:
      ftrace_bug+0xa0/0x28c
      ftrace_process_locs+0x2f4/0x430
      ftrace_module_init+0x30/0x38
      load_module+0x14f0/0x1e78
      __do_sys_finit_module+0x100/0x11c
      __arm64_sys_finit_module+0x28/0x34
      el0_svc_common+0x88/0x194
      el0_svc_handler+0x38/0x8c
      el0_svc+0x8/0xc
    ---[ end trace d828d06b36ad9d59 ]---
    ftrace failed to modify [<ffffa2dbf3a3a41c>] 0xffffa2dbf3a3a41c
      actual: 66:a9:3c:90
    Initializing ftrace call sites
    ftrace record flags: 2000000 (0)
      expected tramp: ffffa2dc6cf66724

So limit the relocation type to R_AARCH64_CALL26 as in the perl version of recordmcount.

Fixes: af64d2aa872a ("ftrace: Add arm64 support to recordmcount")
Signed-off-by: Gregory Herrero <gregory.herrero@oracle.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20200717143338.19302-1-gregory.herrero@oracle.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
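As a hedged illustration of the fix, scripts/recordmcount.c can simply reject _mcount relocations whose type is not R_AARCH64_CALL26; the helper below mirrors the per-arch hooks of recordmcount but is a simplified sketch (the real code goes through recordmcount's endianness wrappers):

    /* Treat any non branch-and-link relocation against _mcount as a fake mcount site. */
    static int arm64_is_fake_mcount(Elf64_Rel const *rp)
    {
        return ELF64_R_TYPE(rp->r_info) != R_AARCH64_CALL26;
    }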
2020-07-24  arm64: Reserve HWCAP2_MTE as (1 << 18)  (Catalin Marinas)
While MTE is not supported in the upstream kernel yet, add a comment that HWCAP2_MTE as (1 << 18) is reserved. Glibc makes use of it for the resolving (ifunc) of the MTE-safe string routines. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-24  timers: Recalculate next timer interrupt only when necessary  (Frederic Weisbecker)
The nohz tick code recalculates the timer wheel's next expiry on each idle loop iteration. On the other hand, the base next expiry is now always cached and updated upon timer enqueue and execution. Only timer dequeue may leave base->next_expiry out of date (but then its stale value won't ever go past the actual next expiry to be recalculated). Since recalculating the next_expiry isn't a free operation, especially when the last wheel level is reached to find out that no timer has been enqueued at all, reuse the next expiry cache when it is known to be reliable, which it is most of the time. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200723151641.12236-1-frederic@kernel.org
2020-07-24  serial: exar: Fix GPIO configuration for Sealevel cards based on XR17V35X  (Matthew Howell)
Sealevel XR17V35X based devices are inoperable on kernel versions 4.11 and above due to a change in the GPIO preconfiguration introduced in commit 7dea8165f1d. This patch fixes this by preconfiguring the GPIO on Sealevel cards to the value (0x00) used prior to commit 7dea8165f1d. With GPIOs preconfigured as per commit 7dea8165f1d, all ports on Sealevel XR17V35X based devices become stuck in high impedance mode, regardless of dip-switch or software configuration. This causes the device to become effectively unusable. This patch (in various forms) has been distributed to our customers and no issues related to it have been reported. Fixes: 7dea8165f1d6 ("serial: exar: Preconfigure xr17v35x MPIOs as output") Signed-off-by: Matthew Howell <matthew.howell@sealevel.com> Link: https://lore.kernel.org/r/alpine.DEB.2.21.2007221605270.13247@tstest-VirtualBox Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-24  drm/nouveau/fbcon: zero-initialise the mode_cmd2 structure  (Ben Skeggs)
This is tripping up the format modifier patches. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2020-07-24  drm/nouveau/fbcon: fix module unload when fbcon init has failed for some reason  (Ben Skeggs)
Stale pointer was tripping up the unload path. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2020-07-24  drm/nouveau/kms/tu102: wait for core update to complete when assigning windows  (Ben Skeggs)
Fixes a race on Turing between the core cross-channel error checks and the following window update. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2020-07-24  drm/nouveau/kms/gf100: use correct format modifiers  (Ben Skeggs)
The disp015x classes are used by both gt21x and gf1xx (aside from gf119), but page kinds differ between Tesla and Fermi. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2020-07-24  drm/nouveau/disp/gm200-: fix regression from HDA SOR selection changes  (Ben Skeggs)
Fixes: 9b5ca547bb8 ("drm/nouveau/disp/gm200-: detect and potentially disable HDA support on some SORs") Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
2020-07-24  ARM: dts: armada-38x: fix NETA lockup when repeatedly switching speeds  (Russell King)
To support the change in "phy: armada-38x: fix NETA lockup when repeatedly switching speeds" we need to update the DT with the additional register. Fixes: 14dc100b4411 ("phy: armada38x: add common phy support") Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2020-07-24  x86: Correct noinstr qualifiers  (Ira Weiny)
The noinstr qualifier is to be specified before the return type in the same way inline is used. These 2 cases were missed by previous patches. Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Link: https://lkml.kernel.org/r/20200723161405.852613-1-ira.weiny@intel.com
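For illustration (the function name below is hypothetical), the placement the patch enforces:

    /* correct: the qualifier precedes the return type, just like 'inline' */
    noinstr void handle_some_fault(struct pt_regs *regs, unsigned long address);

    /* the form being corrected:
     * void noinstr handle_some_fault(struct pt_regs *regs, unsigned long address);
     */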
2020-07-24  x86/mm: Drop unused MAX_PHYSADDR_BITS  (Arvind Sankar)
The macro is not used anywhere, and has an incorrect value (going by the comment) on x86_64 since commit c898faf91b3e ("x86: 46 bit physical address support on 64 bits"). To avoid confusion, just remove the definition. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200723231544.17274-2-nivedita@alum.mit.edu
2020-07-24  memory: samsung: exynos5422-dmc: Do not ignore return code of regmap_read()  (Krzysztof Kozlowski)
Check the regmap_read() return code before using the read value in the following write in exynos5_switch_timing_regs(), and pass the reading error code to the callers.

This does not introduce proper error handling for such failed reads (and obviously the regmap_write() error is still ignored) because the driver ignored this in all places. Therefore it only fixes the reported issue while matching the current driver coding style:

    drivers/memory/samsung/exynos5422-dmc.c: In function 'exynos5_switch_timing_regs':
    >> drivers/memory/samsung/exynos5422-dmc.c:216:6: warning: variable 'ret' set but not used [-Wunused-but-set-variable]

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
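A hedged sketch of propagating the regmap_read() error as the changelog describes; the register and bit names are recalled from the driver and should be treated as illustrative:

    static int exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set)
    {
        unsigned int reg;
        int ret;

        /* bail out instead of writing back a value that was never read */
        ret = regmap_read(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, &reg);
        if (ret)
            return ret;

        if (set)
            reg |= EXYNOS5_TIMING_SET_SWI;
        else
            reg &= ~EXYNOS5_TIMING_SET_SWI;

        regmap_write(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, reg);
        return 0;
    }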
2020-07-24  tpm: Add support for event log pointer found in TPM2 ACPI table  (Stefan Berger)
In case a TPM2 is attached, search for a TPM2 ACPI table when trying to get the event log from ACPI. If one is found, use it to get the start and length of the log area. This allows non-UEFI systems, such as SeaBIOS, to pass an event log when using a TPM2. Cc: Peter Huewe <peterhuewe@gmx.de> Cc: Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
2020-07-24  acpi: Extend TPM2 ACPI table with missing log fields  (Stefan Berger)
Recent extensions of the TPM2 ACPI table added 3 more fields including 12 bytes of start method specific parameters and Log Area Minimum Length (u32) and Log Area Start Address (u64). So, we define a new structure acpi_tpm2_phy that holds these optional new fields. The new fields allow non-UEFI systems to access the TPM2's log. The specification that has the new fields is the following: TCG ACPI Specification Family "1.2" and "2.0" Version 1.2, Revision 8 https://trustedcomputinggroup.org/wp-content/uploads/TCG_ACPIGeneralSpecification_v1.20_r8.pdf Cc: linux-acpi@vger.kernel.org Cc: Len Brown <lenb@kernel.org> Signed-off-by: Stefan Berger <stefanb@linux.ibm.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
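A hedged sketch of the optional trailing fields described above; the struct name follows the changelog, the field names are illustrative:

    /* Optional fields appended to the TPM2 ACPI table (TCG ACPI spec v1.2 r8). */
    struct acpi_tpm2_phy {
        u8  start_method_specific[12];  /* start method specific parameters */
        u32 log_area_minimum_length;    /* Log Area Minimum Length (LAML)   */
        u64 log_area_start_address;     /* Log Area Start Address  (LASA)   */
    };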
2020-07-24  tpm: Unify the mismatching TPM space buffer sizes  (Jarkko Sakkinen)
The size of the buffers for storing contexts and sessions can vary from arch to arch as PAGE_SIZE can be anything between 4 kB and 256 kB (the maximum for PPC64). Define a fixed buffer size set to 16 kB. This should be enough for most use with three handles (that is how many we allow at the moment). Parametrize the buffer size while doing this, so that it is easier to revisit this later on if required. Cc: stable@vger.kernel.org Reported-by: Stefan Berger <stefanb@linux.ibm.com> Fixes: 745b361e989a ("tpm: infrastructure for TPM spaces") Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Tested-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
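As a hedged illustration of the parametrization, a fixed, PAGE_SIZE-independent buffer size can be expressed as a single define; the macro name below reflects the changelog's intent and is not quoted verbatim from the patch:

    /* Fixed size for TPM space context/session buffers, independent of PAGE_SIZE. */
    #define TPM2_SPACE_BUFFER_SIZE    16384    /* 16 kB */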
2020-07-24  ARM: dts: aspeed: wedge40: Enable pwm_tacho device  (Tao Ren)
Enable pwm_tacho device for fan control and monitoring in Wedge40. Signed-off-by: Tao Ren <rentao.bupt@gmail.com> Signed-off-by: Joel Stanley <joel@jms.id.au>
2020-07-24  ARM: dts: aspeed: wedge40: Enable ADC device  (Tao Ren)
Enable the ADC controller and corresponding voltage sensing channels for Wedge40. Signed-off-by: Tao Ren <rentao.bupt@gmail.com> Signed-off-by: Joel Stanley <joel@jms.id.au>
2020-07-24  ARM: dts: aspeed: wedge40: Disable unused i2c controllers  (Tao Ren)
Disable i2c bus #9, #10 and #13 as these i2c controllers are not used on Wedge40. Signed-off-by: Tao Ren <rentao.bupt@gmail.com> Signed-off-by: Joel Stanley <joel@jms.id.au>
2020-07-24  ARM: dts: aspeed: cmm: Fixup I2C tree  (Tao Ren)
Create all the i2c switches in device tree and use aliases to assign child channels with consistent bus numbers. Besides, "i2c-mux-idle-disconnect" is set for all the i2c switches to avoid potential conflicts when multiple devices (behind the switches) use the same device address. Signed-off-by: Tao Ren <rentao.bupt@gmail.com> Reviewed-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Joel Stanley <joel@jms.id.au>
2020-07-24  ARM: dts: aspeed: tacoma: Add CFAM reset GPIO  (Joel Stanley)
The GPIO on Q0 is used for resetting the CFAM of the processor that the ASPEED master is connected to. Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Signed-off-by: Joel Stanley <joel@jms.id.au>
2020-07-24  ARM: dts: aspeed: rainier: Add CFAM reset GPIO  (Joel Stanley)
The GPIO on Q0 is used for resetting the CFAM of the processor that the ASPEED master is connected to. The signal is wired as active high on the first pass systems. Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Signed-off-by: Joel Stanley <joel@jms.id.au>
2020-07-24  tpm: Require that all digests are present in TCG_PCR_EVENT2 structures  (Tyler Hicks)
Require that the TCG_PCR_EVENT2.digests.count value strictly matches the value of TCG_EfiSpecIdEvent.numberOfAlgorithms in the event field of the TCG_PCClientPCREvent event log header. Also require that TCG_EfiSpecIdEvent.numberOfAlgorithms is non-zero.

The TCG PC Client Platform Firmware Profile Specification section 9.1 (Family "2.0", Level 00 Revision 1.04) states:

    For each Hash algorithm enumerated in the TCG_PCClientPCREvent entry, there SHALL be a corresponding digest in all TCG_PCR_EVENT2 structures. Note: This includes EV_NO_ACTION events which do not extend the PCR.

Section 9.4.5.1 provides this description of TCG_EfiSpecIdEvent.numberOfAlgorithms:

    The number of Hash algorithms in the digestSizes field. This field MUST be set to a value of 0x01 or greater.

Enforce these restrictions, as required by the above specification, in order to better identify and ignore invalid sequences of bytes at the end of an otherwise valid TPM2 event log. Firmware doesn't always have the means necessary to inform the kernel of the actual event log size, so the kernel's event log parsing code should be stringent when parsing the event log for resiliency against firmware bugs.

This is true, for example, when firmware passes the event log to the kernel via a reserved memory region described in device tree. POWER and some ARM systems use the "linux,sml-base" and "linux,sml-size" device tree properties to describe the memory region used to pass the event log from firmware to the kernel. Unfortunately, the "linux,sml-size" property describes the size of the entire reserved memory region rather than the size of the event log within the memory region, and the event log format does not include information describing the size of the event log.

tpm_read_log_of(), in drivers/char/tpm/eventlog/of.c, is where the "linux,sml-size" property is used. At the end of that function, log->bios_event_log_end is pointing at the end of the reserved memory region. That's typically 0x10000 bytes offset from "linux,sml-base", depending on what's defined in the device tree source. The firmware event log only fills a portion of those 0x10000 bytes and the rest of the memory region should be zeroed out by firmware.

Even in the case of properly zeroed bytes in the remainder of the memory region, the only thing allowing the kernel's event log parser to detect the end of the event log is the following conditional in __calc_tpm2_event_size():

    if (event_type == 0 && event_field->event_size == 0)
            size = 0;

If that wasn't there, __calc_tpm2_event_size() would think that a 16 byte sequence of zeroes, following an otherwise valid event log, was a valid event. However, problems can occur if a single bit is set in the offset corresponding to either the TCG_PCR_EVENT2.eventType or TCG_PCR_EVENT2.eventSize fields, after the last valid event log entry. This could confuse the parser into thinking that an additional entry is present in the event log and exposing this invalid entry to userspace in the /sys/kernel/security/tpm0/binary_bios_measurements file. Such problems have been seen if firmware does not fully zero the memory region upon a warm reboot.

This patch significantly raises the bar on how difficult it is for stale/invalid memory to confuse the kernel's event log parser, but there's still, ultimately, a reliance on firmware to properly initialize the remainder of the memory region reserved for the event log, as the parser cannot be expected to detect a stale but otherwise properly formatted firmware event log entry.

Fixes: fd5c78694f3f ("tpm: fix handling of the TPM 2.0 event logs")
Signed-off-by: Tyler Hicks <tyhicks@linux.microsoft.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
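A hedged sketch of the stricter check described above, as it could sit in the __calc_tpm2_event_size() parsing path; the struct and field names follow include/linux/tpm_eventlog.h as best recalled and should be treated as illustrative:

    /* Reject the entry (and thus the rest of the log) unless the digest count
     * matches the non-zero numberOfAlgorithms from the event log header. */
    static bool tpm2_digest_count_valid(u32 count,
                                        const struct tcg_efi_specid_event_head *specid)
    {
        return specid->num_algs != 0 && count == specid->num_algs;
    }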