2020-05-19  sched/fair: Fix enqueue_task_fair() warning some more  (Phil Auld)
sched/fair: Fix enqueue_task_fair warning some more The recent patch, fe61468b2cb (sched/fair: Fix enqueue_task_fair warning) did not fully resolve the issues with the rq->tmp_alone_branch != &rq->leaf_cfs_rq_list warning in enqueue_task_fair. There is a case where the first for_each_sched_entity loop exits due to on_rq, having incompletely updated the list. In this case the second for_each_sched_entity loop can further modify se. The later code to fix up the list management fails to do what is needed because se does not point to the sched_entity which broke out of the first loop. The list is not fixed up because the throttled parent was already added back to the list by a task enqueue in a parallel child hierarchy. Address this by calling list_add_leaf_cfs_rq if there are throttled parents while doing the second for_each_sched_entity loop. Fixes: fe61468b2cb ("sched/fair: Fix enqueue_task_fair warning") Suggested-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Phil Auld <pauld@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20200512135222.GC2201@lorien.usersys.redhat.com
2020-05-19  x86/mmiotrace: Use cpumask_available() for cpumask_var_t variables  (Nathan Chancellor)
When building with Clang + -Wtautological-compare and CONFIG_CPUMASK_OFFSTACK unset:

  arch/x86/mm/mmio-mod.c:375:6: warning: comparison of array 'downed_cpus' equal to a null pointer is always false [-Wtautological-pointer-compare]
          if (downed_cpus == NULL &&
              ^~~~~~~~~~~    ~~~~
  arch/x86/mm/mmio-mod.c:405:6: warning: comparison of array 'downed_cpus' equal to a null pointer is always false [-Wtautological-pointer-compare]
          if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0)
              ^~~~~~~~~~~    ~~~~

  2 warnings generated.

Commit f7e30f01a9e2 ("cpumask: Add helper cpumask_available()") added cpumask_available() to fix warnings of this nature. Use that here so that clang does not warn regardless of CONFIG_CPUMASK_OFFSTACK's value.

Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://github.com/ClangBuiltLinux/linux/issues/982
Link: https://lkml.kernel.org/r/20200408205323.44490-1-natechancellor@gmail.com
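To make the pattern concrete, here is a minimal sketch assuming only the documented cpumask_var_t and cpumask_available() semantics; the function body is illustrative rather than the actual mmiotrace code:

    static cpumask_var_t downed_cpus;

    static void enter_uniprocessor(void)
    {
            /* cpumask_available() is correct for both CONFIG_CPUMASK_OFFSTACK=y
             * (pointer, may be NULL) and =n (array, never NULL), replacing the
             * 'downed_cpus == NULL' test that clang flags as tautological. */
            if (!cpumask_available(downed_cpus) &&
                !zalloc_cpumask_var(&downed_cpus, GFP_KERNEL)) {
                    pr_notice("Failed to allocate mask\n");
                    return;
            }
            /* ... take the other CPUs offline ... */
    }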
2020-05-19  dmaengine: tegra210-adma: Fix an error handling path in 'tegra_adma_probe()'  (Christophe JAILLET)
Commit b53611fb1ce9 ("dmaengine: tegra210-adma: Fix crash during probe") has moved some code in the probe function and reordered the error handling path accordingly. However, a goto has been missed. Fix it and goto the right label if 'dma_async_device_register()' fails, so that all resources are released. Fixes: b53611fb1ce9 ("dmaengine: tegra210-adma: Fix crash during probe") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Jon Hunter <jonathanh@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20200516214205.276266-1-christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-05-19  fscrypt: add support for IV_INO_LBLK_32 policies  (Eric Biggers)
The eMMC inline crypto standard will only specify 32 DUN bits (a.k.a. IV bits), unlike UFS's 64. IV_INO_LBLK_64 is therefore not applicable, but an encryption format which uses one key per policy and permits the moving of encrypted file contents (as f2fs's garbage collector requires) is still desirable. To support such hardware, add a new encryption format IV_INO_LBLK_32 that makes the best use of the 32 bits: the IV is set to 'SipHash-2-4(inode_number) + file_logical_block_number mod 2^32', where the SipHash key is derived from the fscrypt master key. We hash only the inode number and not also the block number, because we need to maintain contiguity of DUNs to merge bios. Unlike with IV_INO_LBLK_64, with this format IV reuse is possible; this is unavoidable given the size of the DUN. This means this format should only be used where the requirements of the first paragraph apply. However, the hash spreads out the IVs in the whole usable range, and the use of a keyed hash makes it difficult for an attacker to determine which files use which IVs. Besides the above differences, this flag works like IV_INO_LBLK_64 in that on ext4 it is only allowed if the stable_inodes feature has been enabled to prevent inode numbers and the filesystem UUID from changing. Link: https://lore.kernel.org/r/20200515204141.251098-1-ebiggers@kernel.org Reviewed-by: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Paul Crowley <paulcrowley@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com>
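As a self-contained sketch of the DUN construction described above (hosted C; siphash_ino stands in for the kernel's SipHash-2-4 of the inode number, keyed with a key derived from the fscrypt master key):

    #include <stdint.h>
    #include <stdio.h>

    /* IV = (SipHash-2-4(inode_number) + file_logical_block_number) mod 2^32 */
    static uint32_t iv_ino_lblk_32_dun(uint32_t siphash_ino, uint64_t lblk_num)
    {
            /* uint32_t arithmetic wraps, giving the "mod 2^32" for free; hashing
             * only the inode number keeps DUNs contiguous within a file so that
             * adjacent blocks can still be merged into a single bio. */
            return siphash_ino + (uint32_t)lblk_num;
    }

    int main(void)
    {
            printf("%u\n", iv_ino_lblk_32_dun(0xdeadbeef, 42));
            return 0;
    }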
2020-05-19  ARM: decompressor: run decompressor in place if loaded via UEFI  (Ard Biesheuvel)
The decompressor can load from anywhere in memory, and the only reason the EFI stub code relocates it is to ensure it appears within the first 128 MiB of memory, so that the uncompressed kernel ends up at the right offset in memory. We can short circuit this, and simply jump into the decompressor startup code at the point where it knows where the base of memory lives. This also means there is no need to disable the MMU and caches, create new page tables and re-enable them. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19  ARM: decompressor: move GOT into .data for EFI enabled builds  (Ard Biesheuvel)
We will be running the decompressor in place after a future patch, instead of copying it around first. This means we no longer have to disable and re-enable the MMU and caches either. However, this means we will be loaded with the restricted permissions set by the UEFI firmware, which means that we have to move the GOT table into the data section in order for the contents to be writable by the code itself. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19  ARM: decompressor: defer loading of the contents of the LC0 structure  (Ard Biesheuvel)
The remaining contents of LC0 are only used after the point in the decompressor startup code where we enter via 'wont_overwrite'. So move the loading of the LC0 structure after it. This will allow us to jump to wont_overwrite directly from the EFI stub, and execute the decompressor in place at the offset it was loaded by the UEFI firmware. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19  ARM: decompressor: split off _edata and stack base into separate object  (Ard Biesheuvel)
In preparation of moving the handling of the LC0 object to a later stage in the decompressor startup code, move out _edata and the initial value of the stack pointer, which are needed earlier than the remaining contents of LC0. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19  x86/audit: Fix a -Wmissing-prototypes warning for ia32_classify_syscall()  (Benjamin Thiel)
Lift the prototype of ia32_classify_syscall() into its own header. Signed-off-by: Benjamin Thiel <b.thiel@posteo.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200516123816.2680-1-b.thiel@posteo.de
2020-05-19  driver core: Fix SYNC_STATE_ONLY device link implementation  (Saravana Kannan)
When SYNC_STATE_ONLY support was added in commit 05ef983e0d65 ("driver core: Add device link support for SYNC_STATE_ONLY flag"), device_link_add() incorrectly skipped adding the new SYNC_STATE_ONLY device link to the supplier's and consumer's "device link" list.

This causes multiple issues:
- The device link is lost forever from driver core if the caller didn't keep track of it (the caller typically isn't expected to). This is a memory leak.
- The device link is also never visible to any other code path after device_link_add() returns.

If we fix the "device link" list handling, that exposes a bunch of issues.

1. The device link "status" state management code rightfully doesn't handle the case where a DL_FLAG_MANAGED device link exists between a supplier and consumer, but the consumer manages to probe successfully before the supplier. The addition of DL_FLAG_SYNC_STATE_ONLY links breaks this assumption. This causes device_links_driver_bound() to throw a warning when this happens.

   Since DL_FLAG_SYNC_STATE_ONLY device links are mainly used for creating proxy device links for child device dependencies and aren't useful once the consumer device probes successfully, this patch just deletes DL_FLAG_SYNC_STATE_ONLY device links once their consumer device probes. This way, we avoid the warning, free up some memory and avoid complicating the device links "status" state management code.

2. Creating a DL_FLAG_STATELESS device link between two devices that already have a DL_FLAG_SYNC_STATE_ONLY device link will result in the DL_FLAG_STATELESS flag not getting set correctly. This patch also fixes this.

Lastly, this patch also fixes minor whitespace issues.

Cc: stable@vger.kernel.org
Fixes: 05ef983e0d65 ("driver core: Add device link support for SYNC_STATE_ONLY flag")
Signed-off-by: Saravana Kannan <saravanak@google.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20200519063000.128819-1-saravanak@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
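For context, a hedged sketch of how such a proxy link is created; device_link_add() and DL_FLAG_SYNC_STATE_ONLY are the interfaces named above, while the wrapper function and its callers are purely illustrative:

    static int create_proxy_link(struct device *consumer, struct device *supplier)
    {
            struct device_link *link;

            /* With this fix the link is kept on both devices' lists and is
             * deleted automatically once the consumer device probes. */
            link = device_link_add(consumer, supplier, DL_FLAG_SYNC_STATE_ONLY);
            return link ? 0 : -ENODEV;
    }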
2020-05-19  ARM: decompressor: move headroom variable out of LC0  (Ard Biesheuvel)
Before breaking up LC0 into different pieces, move out the variable that is already place-relative (given that it subtracts 'restart' in the expression) and so its value does not need to be added to the runtime address of the LC0 symbol itself. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
2020-05-19  kprobes: Prevent probes in .noinstr.text section  (Thomas Gleixner)
Instrumentation is forbidden in the .noinstr.text section. Make kprobes respect this. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20200505134100.179862032@linutronix.de
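A sketch of the likely shape, assuming the existing kprobe blacklist helper and the __noinstr_text_start/__noinstr_text_end bounds exported by the noinstr linker-script change; the exact hook point in the patch may differ:

    static int __init blacklist_noinstr_text(void)
    {
            /* Reject register_kprobe() for any address inside .noinstr.text. */
            return kprobe_add_area_blacklist((unsigned long)__noinstr_text_start,
                                             (unsigned long)__noinstr_text_end);
    }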
2020-05-19  Merge tag 'noinstr-lds-2020-05-19' into core/kprobes  (Thomas Gleixner)
Get the noinstr section and markers to base the kprobe changes on.
2020-05-19  rcu: Provide __rcu_is_watching()  (Thomas Gleixner)
Same as rcu_is_watching() but without the preempt_disable/enable() pair inside the function. It is marked noinstr so it ends up in the non-instrumentable text section. This is useful for non-preemptible code, especially in the low level entry section. Using rcu_is_watching() there results in a call to the preempt_schedule_notrace() thunk, which triggers noinstr section warnings in objtool. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200512213810.518709291@linutronix.de
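One plausible shape for the helper, shown only as a sketch (the patch may differ in detail); the preempt_disable()/enable() pair is dropped because the low level entry callers already cannot migrate:

    /* noinstr: lives in .noinstr.text, so no tracing or kprobes. */
    noinstr bool __rcu_is_watching(void)
    {
            return !rcu_dynticks_curr_cpu_in_eqs();
    }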
2020-05-19  rcu: Provide rcu_irq_exit_preempt()  (Thomas Gleixner)
Interrupts and exceptions invoke rcu_irq_enter() on entry and need to invoke rcu_irq_exit() before they either return to the interrupted code or invoke the scheduler due to preemption. The general assumption is that RCU idle code has to have preemption disabled so that a return from interrupt cannot schedule. So the return from interrupt code invokes rcu_irq_exit() and preempt_schedule_irq(). If there is any imbalance in the rcu_irq/nmi* invocations or RCU idle code had preemption enabled then this goes unnoticed until the CPU goes idle or some other RCU check is executed. Provide rcu_irq_exit_preempt() which can be invoked from the interrupt/exception return code in case that preemption is enabled. It invokes rcu_irq_exit() and contains a few sanity checks in case that CONFIG_PROVE_RCU is enabled to catch such issues directly. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200505134904.364456424@linutronix.de
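A hedged sketch of the intended shape; the concrete CONFIG_PROVE_RCU checks in the patch may differ:

    void rcu_irq_exit_preempt(void)
    {
            lockdep_assert_irqs_disabled();
            rcu_irq_exit();

            /* Returning to preemptible kernel code while RCU thinks this CPU is
             * idle means rcu_irq_enter()/rcu_irq_exit() got imbalanced, or some
             * RCU-idle code ran with preemption enabled. */
            RCU_LOCKDEP_WARN(!rcu_is_watching(),
                             "rcu_irq_exit_preempt() with RCU not watching");
    }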
2020-05-19  rcu: Make RCU IRQ enter/exit functions rely on in_nmi()  (Paul E. McKenney)
The rcu_nmi_enter_common() and rcu_nmi_exit_common() functions take an "irq" parameter that indicates whether these functions have been invoked from an irq handler (irq==true) or an NMI handler (irq==false). However, recent changes have applied notrace to a few critical functions such that rcu_nmi_enter_common() and rcu_nmi_exit_common() may now rely on in_nmi(). Note that in_nmi() works no differently than before, but rather that tracing is now prohibited in code regions where in_nmi() would incorrectly report NMI state. Therefore remove the "irq" parameter and inline rcu_nmi_enter_common() and rcu_nmi_exit_common() into rcu_nmi_enter() and rcu_nmi_exit(), respectively. Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/20200505134101.617130349@linutronix.de
2020-05-19  rcu/tree: Mark the idle relevant functions noinstr  (Thomas Gleixner)
These functions are invoked from context tracking and other places in the low level entry code. Move them into the .noinstr.text section to exclude them from instrumentation. Mark the places which are safe to invoke traceable functions with instrumentation_begin/end() so objtool won't complain. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Paul E. McKenney <paulmck@kernel.org> Link: https://lkml.kernel.org/r/20200505134100.575356107@linutronix.de
2020-05-19  x86: Replace ist_enter() with nmi_enter()  (Peter Zijlstra)
A few exceptions (like #DB and #BP) can happen at any location in the code, this then means that tracers should treat events from these exceptions as NMI-like. The interrupted context could be holding locks with interrupts disabled for instance. Similarly, #MC is an actual NMI-like exception. All of them use ist_enter() which only concerns itself with RCU, but does not do any of the other setup that NMIs need. This means things like:

    printk()
      raw_spin_lock_irq(&logbuf_lock);
      <#DB/#BP/#MC>
         printk()
           raw_spin_lock_irq(&logbuf_lock);

are entirely possible (well, not really since printk tries hard to play nice, but the concept stands). So replace ist_enter() with nmi_enter(). Also observe that any nmi_enter() caller must be both notrace and NOKPROBE, or in the noinstr text section.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Link: https://lkml.kernel.org/r/20200505134101.525508608@linutronix.de
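The resulting handler pattern, sketched with an illustrative handler name rather than the actual x86 code:

    dotraplinkage void notrace do_example_trap(struct pt_regs *regs, long error_code)
    {
            nmi_enter();    /* full NMI-like bookkeeping: preempt count, RCU, lockdep, ftrace */

            /* ... #DB/#BP/#MC style work, restricted to NMI-safe operations ... */

            nmi_exit();
    }
    NOKPROBE_SYMBOL(do_example_trap);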
2020-05-19  x86/mce: Send #MC signal from task work  (Peter Zijlstra)
Convert #MC over to using task_work_add(); it will run the same code slightly later, on the return to user path of the same exception. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/20200505134100.957390899@linutronix.de
2020-05-19  x86/entry: Get rid of ist_begin/end_non_atomic()  (Thomas Gleixner)
This is completely overengineered and definitely not an interface which should be made available to anything else than this particular MCE case. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200505134059.462640294@linutronix.de
2020-05-19  sched,rcu,tracing: Avoid tracing before in_nmi() is correct  (Peter Zijlstra)
If a tracer is invoked before in_nmi() becomes true, the tracer can no longer detect it is called from NMI context and behave correctly. Therefore change nmi_{enter,exit}() to use __preempt_count_{add,sub}() as the normal preempt_count_{add,sub}() have a (desired) function trace entry. This fixes a potential issue with the current code; when the function-tracer has stack-tracing enabled __trace_stack() will malfunction when it hits the preempt_count_add() function entry from NMI context. Suggested-by: Steven Rostedt (VMware) <rosted@goodmis.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/20200505134101.434193525@linutronix.de
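The heart of the change, as a fragment sketch (the surrounding nmi_enter()/nmi_exit() bookkeeping is omitted); NMI_OFFSET and HARDIRQ_OFFSET are the existing preempt-count constants:

    /* Enter: raw, non-traced increment. The traced preempt_count_add() would
     * call into the function tracer before in_nmi() reads true. */
    __preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET);

    /* ... NMI-like work; tracers now see in_nmi() == true ... */

    /* Exit: matching raw decrement. */
    __preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET);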
2020-05-19  sh/ftrace: Move arch_ftrace_nmi_{enter,exit} into nmi exception  (Peter Zijlstra)
SuperH is the last remaining user of arch_ftrace_nmi_{enter,exit}(), remove it from the generic code and into the SuperH code. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Cc: Rich Felker <dalias@libc.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200505134101.248881738@linutronix.de
2020-05-19  lockdep: Always inline lockdep_{off,on}()  (Peter Zijlstra)
These functions are called {early,late} in nmi_{enter,exit} and should not be traced or probed. They are also puny, so 'inline' them. Reported-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/20200505134101.048523500@linutronix.de
2020-05-19  hardirq/nmi: Allow nested nmi_enter()  (Peter Zijlstra)
Since there are already a number of sites (ARM64, PowerPC) that effectively nest nmi_enter(), make the primitive support this before adding even more. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Marc Zyngier <maz@kernel.org> Acked-by: Will Deacon <will@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Link: https://lkml.kernel.org/r/20200505134100.864179229@linutronix.de
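A sketch of why nesting can work: the NMI part of preempt_count is a counter rather than a single flag, so entry only has to guard against overflow (the BUG_ON shown here is the check referred to in the printk change further down):

    BUG_ON(in_nmi() == NMI_MASK);           /* all NMI bits set: about to overflow */
    __preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET);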
2020-05-19  arm64: Prepare arch_nmi_enter() for recursion  (Frederic Weisbecker)
When using nmi_enter() recursively, arch_nmi_enter() must also be recursion safe. In particular, it must be ensured that HCR_TGE is always set while in NMI context when in HYP mode, and be restored to its former state when done. The current code fails this when interleaved wrong. Notably it overwrites the original hcr state on nesting. Introduce a nesting counter to make sure to store the original value. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lkml.kernel.org/r/20200505134100.771491291@linutronix.de
2020-05-19  printk: Disallow instrumenting printk_nmi_enter()  (Peter Zijlstra)
It happens early in nmi_enter(), no tracing, probing or other funnies allowed. Specifically as nmi_enter() will be used in do_debug(), which would cause recursive exceptions when kprobed. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/20200505134101.139720912@linutronix.de
2020-05-19  printk: Prepare for nested printk_nmi_enter()  (Petr Mladek)
There is plenty of space in the printk_context variable. Reserve one byte there for the NMI context to be on the safe side. It should never overflow. The BUG_ON(in_nmi() == NMI_MASK) in nmi_enter() will trigger much earlier. Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/20200505134100.681374113@linutronix.de
2020-05-19  Merge tag 'noinstr-lds-2020-05-19' into core/rcu  (Thomas Gleixner)
Get the noinstr section and annotation markers to base the RCU parts on.
2020-05-19  vmlinux.lds.h: Create section for protection against instrumentation  (Thomas Gleixner)
Some code paths, especially the low level entry code, must be protected against instrumentation for various reasons:

- Low level entry code can be a fragile beast, especially on x86.

- With NO_HZ_FULL RCU state needs to be established before using it.

Having a dedicated section for such code allows validating with tooling that no unsafe functions are invoked.

Add the .noinstr.text section and the noinstr attribute to mark functions. noinstr implies notrace. Kprobes will gain a section check later.

Provide also a set of markers: instrumentation_begin()/end()

These are used to mark code inside a noinstr function which calls into regular instrumentable text section as safe.

The instrumentation markers are only active when CONFIG_DEBUG_ENTRY is enabled as the end marker emits a NOP to prevent the compiler from merging the annotation points. This means the objtool verification requires a kernel compiled with this option.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200505134100.075416272@linutronix.de
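A usage sketch with an illustrative function name; only the noinstr attribute and the instrumentation_begin()/end() markers come from the description above:

    noinstr void example_entry_work(struct pt_regs *regs)
    {
            /* Fragile part: no tracing, no kprobes; RCU may not be watching yet. */

            instrumentation_begin();
            /* Between the markers, calls into ordinary instrumentable text are
             * annotated as safe, so objtool will not flag them. */
            trace_hardirqs_off();
            instrumentation_end();
    }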
2020-05-19  spi: ti_qspi: fix unit address  (Kangmin Park)
Fix unit address to match the first address specified in the reg property of the node in example. Signed-off-by: Kangmin Park <l4stpr0gr4m@gmail.com> Link: https://lore.kernel.org/r/1589873541-5587-1-git-send-email-l4stpr0gr4m@gmail.com Signed-off-by: Mark Brown <broonie@kernel.org>
2020-05-19  iommu: Fix deferred domain attachment  (Joerg Roedel)
The IOMMU core code has support for deferring the attachment of a domain to a device. This is needed in kdump kernels where the new domain must not be attached to a device before the device driver takes it over. When the AMD IOMMU driver got converted to use the dma-iommu implementation, the deferred attaching got lost. The code in dma-iommu.c has support for deferred attaching, but it calls into iommu_attach_device() to actually do it. But iommu_attach_device() will check if the device should be deferred in its code path and do nothing, breaking deferred attachment. Move the is_deferred_attach() check out of the attach_device path and into iommu_group_add_device() to make deferred attaching work from the dma-iommu code. Fixes: 795bbbb9b6f8 ("iommu/dma-iommu: Handle deferred devices") Reported-by: Jerry Snitselaar <jsnitsel@redhat.com> Suggested-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Jerry Snitselaar <jsnitsel@redhat.com> Cc: Jerry Snitselaar <jsnitsel@redhat.com> Cc: Tom Murphy <murphyt7@tcd.ie> Cc: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200519130340.14564-1-joro@8bytes.org
2020-05-19  x86/boot: Correct relocation destination on old linkers  (Arvind Sankar)
For the 32-bit kernel, as described in 6d92bc9d483a ("x86/build: Build compressed x86 kernels as PIE"), pre-2.26 binutils generates R_386_32 relocations in PIE mode. Since the startup code does not perform relocation, any reloc entry with R_386_32 will remain as 0 in the executing code.

Commit 974f221c84b0 ("x86/boot: Move compressed kernel to the end of the decompression buffer") added a new symbol _end but did not mark it hidden, which doesn't give the correct offset on older linkers. This causes the compressed kernel to be copied beyond the end of the decompression buffer, rather than flush against it. This region of memory may be reserved or already allocated for other purposes by the bootloader.

Mark _end as hidden to fix. This changes the relocation from R_386_32 to R_386_RELATIVE even on the pre-2.26 binutils.

For 64-bit, this is not strictly necessary, as the 64-bit kernel is only built as PIE if the linker supports -z noreloc-overflow, which implies binutils-2.27+, but for consistency, mark _end as hidden here too.

The below illustrates the before/after impact of the patch using binutils-2.25 and gcc-4.6.4 (locally compiled from source) and QEMU.

Disassembly before patch:
      48:   8b 86 60 02 00 00       mov    0x260(%esi),%eax
      4e:   2d 00 00 00 00          sub    $0x0,%eax
                      4f: R_386_32    _end

Disassembly after patch:
      48:   8b 86 60 02 00 00       mov    0x260(%esi),%eax
      4e:   2d 00 f0 76 00          sub    $0x76f000,%eax
                      4f: R_386_RELATIVE      *ABS*

Dump from extract_kernel before patch:
    early console in extract_kernel
    input_data: 0x0207c098  <--- this is at output + init_size
    input_len: 0x0074fef1
    output: 0x01000000
    output_len: 0x00fa63d0
    kernel_total_size: 0x0107c000
    needed_size: 0x0107c000

Dump from extract_kernel after patch:
    early console in extract_kernel
    input_data: 0x0190d098  <--- this is at output + init_size - _end
    input_len: 0x0074fef1
    output: 0x01000000
    output_len: 0x00fa63d0
    kernel_total_size: 0x0107c000
    needed_size: 0x0107c000

Fixes: 974f221c84b0 ("x86/boot: Move compressed kernel to the end of the decompression buffer")
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200207214926.3564079-1-nivedita@alum.mit.edu
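The underlying technique, sketched from the C side (the patch itself marks the symbol hidden where the boot code references it; this declaration form expresses the same visibility):

    /* Hidden visibility lets the linker resolve the reference place-relatively
     * (R_386_RELATIVE) instead of emitting an absolute R_386_32 relocation that
     * the compressed-kernel startup code never processes. */
    extern char _end[] __attribute__((visibility("hidden")));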
2020-05-19  ARM: 8976/1: module: allow arch overrides for .init section names  (Vincent Whitchurch)
ARM stores unwind information for .init.text in sections named .ARM.extab.init.text and .ARM.exidx.init.text. Since those aren't currently recognized as init sections, they're allocated along with the core section, and relocation fails if the core and the init section are allocated from different regions and can't reach each other.

    final section addresses:
        ...
        0x7f800000 .init.text
        ..
        0xcbb54078 .ARM.exidx.init.text
        ..

    section 16 reloc 0 sym '': relocation 42 out of range (0xcbb54078 -> 0x7f800000)

Allow architectures to override the section name so that ARM can fix this.

Acked-by: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
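A heavily hedged sketch of the override mechanism; the hook name module_init_section() is an assumption about this patch, and both bodies are illustrative:

    /* Generic default (module core): keep the old ".init*" behaviour. */
    bool __weak module_init_section(const char *name)
    {
            return strstarts(name, ".init");
    }

    /* ARM override: the unwind tables for init text are init sections too, so
     * they get placed together with the rest of the init memory. */
    bool module_init_section(const char *name)
    {
            return strstarts(name, ".init") ||
                   strstarts(name, ".ARM.extab.init") ||
                   strstarts(name, ".ARM.exidx.init");
    }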
2020-05-19  ARM: 8975/1: module: fix handling of unwind init sections  (Vincent Whitchurch)
Unwind information for init sections is placed in .ARM.exidx.init.text and .ARM.extab.init.text. The module core doesn't know that these are init sections, so they are allocated along with the core sections, and if the core and init sections get allocated in different memory regions (which is possible with CONFIG_ARM_MODULE_PLTS=y) and they can't reach each other, relocation fails:

    final section addresses:
        ...
        0x7f800000 .init.text
        ..
        0xcbb54078 .ARM.exidx.init.text
        ..

    section 16 reloc 0 sym '': relocation 42 out of range (0xcbb54078 -> 0x7f800000)

Fix this by informing the module core that these sections are init sections, and by removing the init unwind tables before the module core frees the init sections.

Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-19  ARM: 8977/1: ptrace: Fix mask for thumb breakpoint hook  (Fredrik Strupe)
call_undef_hook() in traps.c applies the same instr_mask for both 16-bit and 32-bit thumb instructions. If instr_mask then is only 16 bits wide (0xffff as opposed to 0xffffffff), the first half-word of 32-bit thumb instructions will be masked out. This makes the function match 32-bit thumb instructions where the second half-word is equal to instr_val, regardless of the first half-word. The result in this case is that all undefined 32-bit thumb instructions with the second half-word equal to 0xde01 (udf #1) work as breakpoints and will raise a SIGTRAP instead of a SIGILL, instead of just the one intended 16-bit instruction. An example of such an instruction is 0xeaa0de01, which is unallocated according to Arm ARM and should raise a SIGILL, but instead raises a SIGTRAP. This patch fixes the issue by setting all the bits in instr_mask, which will still match the intended 16-bit thumb instruction (where the upper half is always 0), but not any 32-bit thumb instructions. Cc: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Fredrik Strupe <fredrik@strupe.net> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
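A sketch of the corrected hook, assuming the usual struct undef_hook fields consumed by call_undef_hook(); the handler is illustrative and the instruction values are the ones discussed above:

    static int break_trap(struct pt_regs *regs, unsigned int instr)
    {
            /* deliver SIGTRAP for the ptrace breakpoint instruction */
            return 0;
    }

    static struct undef_hook thumb_break_hook = {
            /* A 16-bit wide mask (0xffff) would also match every 32-bit Thumb
             * encoding whose low half-word happens to be 0xde01. */
            .instr_mask     = 0xffffffff,
            .instr_val      = 0x0000de01,   /* 16-bit Thumb "udf #1" */
            .fn             = break_trap,
    };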
2020-05-19  ARM: 8974/1: use SPARSEMEM_STATIC when SPARSEMEM is enabled  (Mike Rapoport)
The commit 3e347261a80b5 ("[PATCH] sparsemem extreme implementation") made SPARSEMEM_EXTREME the default option for configurations that enable SPARSEMEM. For ARM systems with a handful of memory banks SPARSEMEM_EXTREME is overkill. Ensure that SPARSEMEM_STATIC is enabled in the configurations that use SPARSEMEM. Fixes: 3e347261a80b5 ("[PATCH] sparsemem extreme implementation") Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-19  drm/etnaviv: Fix a leak in submit_pin_objects()  (Dan Carpenter)
If the mapping address is wrong then we have to release the reference to it before returning -EINVAL. Fixes: 088880ddc0b2 ("drm/etnaviv: implement softpin") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
2020-05-19  drm/etnaviv: fix perfmon domain iteration  (Christian Gmeiner)
The GC860 has one GPU device which has a 2d and 3d core. In this case we want to expose perfmon information for both cores.

The driver has one array which contains all possible perfmon domains with some meta data - doms_meta. Here we can see that for the GC860 two elements of that array are relevant:

    doms_3d: is at index 0 in the doms_meta array with 8 perfmon domains
    doms_2d: is at index 1 in the doms_meta array with 1 perfmon domain

The userspace driver wants to get a list of all perfmon domains and their perfmon signals. This is done by iterating over all domains and their signals. If the userspace driver wants to access the domain with id 8, the kernel driver fails and returns invalid data from doms_3d with an invalid offset. This results in:

    Unable to handle kernel paging request at virtual address 00000000

On such a device it is not possible to use the userspace driver at all. The fix for this off-by-one error is quite simple.

Reported-by: Paul Cercueil <paul@crapouillou.net>
Tested-by: Paul Cercueil <paul@crapouillou.net>
Fixes: ed1dd899baa3 ("drm/etnaviv: rework perfmon query infrastructure")
Cc: stable@vger.kernel.org
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
2020-05-19  efi/printf: Turn vsprintf into vsnprintf  (Arvind Sankar)
Implement vsnprintf instead of vsprintf to avoid the possibility of a buffer overflow. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-17-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
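A caller-side sketch of why the bounded variant matters, in plain hosted C rather than the stub's code:

    #include <stdarg.h>
    #include <stdio.h>

    static int log_fmt(char *buf, size_t size, const char *fmt, ...)
    {
            va_list args;
            int len;

            va_start(args, fmt);
            /* Writes at most size-1 characters plus the terminating NUL, yet
             * still returns the length the fully formatted string would have. */
            len = vsnprintf(buf, size, fmt, args);
            va_end(args);
            return len;
    }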
2020-05-19  efi/printf: Abort on invalid format  (Arvind Sankar)
If we get an invalid conversion specifier, bail out instead of trying to fix it up. The format string likely has a typo or assumed we support something that we don't; in either case, the remaining arguments won't match up with the remaining format string. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-16-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Refactor code to consolidate padding and output  (Arvind Sankar)
Consolidate the actual output of the formatted text into one place. Fix a couple of edge cases:

1. If 0 is printed with a precision of 0, the printf specification says that nothing should be output, with one exception (2b).

2. The specification for octal alternate format (%#o) adds the leading zero not as a prefix, as the 0x for hexadecimal is, but by increasing the precision if necessary to add the zero. This means that
   a. %#.2o turns 8 into "010", but 1 into "01" rather than "001".
   b. %#.o prints 0 as "0" rather than "", unlike the situation for decimal, hexadecimal and regular octal format, which all output an empty string.

Reduce the space allocated for printing a number to the maximum actually required (22 bytes for a 64-bit number in octal), instead of the 66 bytes previously allocated.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200518190716.751506-15-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
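The listed edge cases can be checked against a hosted printf, which follows the same specification:

    #include <stdio.h>

    int main(void)
    {
            printf("[%.0d]\n", 0);   /* "[]"    : zero with precision 0 prints no digits */
            printf("[%#.2o]\n", 8);  /* "[010]" : '#' widens the precision to add the 0  */
            printf("[%#.2o]\n", 1);  /* "[01]"  : precision 2 already yields a leading 0 */
            printf("[%#.o]\n", 0);   /* "[0]"   : alternate octal still emits a single 0 */
            return 0;
    }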
2020-05-19  efi/printf: Handle null string input  (Arvind Sankar)
Print "(null)" for 's' if the input is a NULL pointer. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-14-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Factor out integer argument retrieval  (Arvind Sankar)
Factor out the code to get the correct type of numeric argument into a helper function. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-13-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Factor out width/precision parsing  (Arvind Sankar)
Factor out the width/precision parsing into a helper function. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-12-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Merge 'p' with the integer formats  (Arvind Sankar)
Treat 'p' as a hexadecimal integer with precision equal to the number of digits in void *. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-11-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
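An illustration of the idea in hosted C; this mimics the resulting output rather than the stub's code path:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
            void *p = (void *)0x1234;
            int digits = 2 * (int)sizeof(void *);   /* 8 on 32-bit, 16 on 64-bit */

            /* 'p' handled as hexadecimal with a fixed precision of 'digits'. */
            printf("0x%.*" PRIxPTR "\n", digits, (uintptr_t)p);
            return 0;
    }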
2020-05-19  efi/printf: Fix minor bug in precision handling  (Arvind Sankar)
A negative precision should be ignored completely, and the presence of a valid precision should turn off the 0 flag. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-10-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Factor out flags parsing and handle '%' earlier  (Arvind Sankar)
Move flags parsing code out into a helper function. The '%%' case can be handled up front: it is not allowed to have flags, width etc. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-9-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Add 64-bit and 8-bit integer support  (Arvind Sankar)
Support 'll' qualifier for long long by copying the decimal printing code from lib/vsprintf.c. For simplicity, the 32-bit code is used on 64-bit architectures as well. Support 'hh' qualifier for signed/unsigned char type integers. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-8-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/printf: Drop %n format and L qualifier  (Arvind Sankar)
%n is unused and deprecated. The L qualifier is parsed but not actually implemented. Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu> Link: https://lore.kernel.org/r/20200518190716.751506-7-nivedita@alum.mit.edu Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-05-19  efi/libstub: Optimize for size instead of speed  (Arvind Sankar)
Reclaim the bloat from the addition of printf by optimizing the stub for size. With gcc 9, the text size of the stub is:

    ARCH      before   +printf     -Os
    arm        35197     37889   34638
    arm64      34883     38159   34479
    i386       18571     21657   17025
    x86_64     25677     29328   22144

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Link: https://lore.kernel.org/r/20200518190716.751506-6-nivedita@alum.mit.edu
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>