path: root/arch
2017-07-01  Merge tag 'perf-core-for-mingo-4.13-20170630' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core  (Ingo Molnar)

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

Intel PT enhancements:

- Support the "ptwrite" instruction, a way to stuff 32 or 64 bit values into the Intel PT trace (Adrian Hunter)

- Support power events in Intel PT to report changes to C-state (Adrian Hunter)

- Synthesize Intel PT events as PERF_RECORD_SAMPLE records with a perf_event_attr.type (PERF_TYPE_SYNTH) just after the range used by the kernel, i.e. right after what is allocated for PMUs, at INT_MAX + 1U. attr.config will have the identification for the synthesized event and the PERF_SAMPLE_RAW payload will have its fields (Adrian Hunter)

Infrastructure changes:

- Remove warning() and error(), using pr_warning() and pr_error() instead, consolidating error reporting (Arnaldo Carvalho de Melo)

- Add platform dependency to 'perf test 15' (Thomas Richter)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-06-30  x86/intel_rdt: Fix memory leak on mount failure  (Vikas Shivappa)

If mount fails, the kn_info directory is not freed, causing a memory leak. Add the missing error handling path.

Fixes: 4e978d06dedb ("x86/intel_rdt: Add "info" files to resctrl file system")
Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: ravi.v.shankar@intel.com
Cc: tony.luck@intel.com
Cc: fenghua.yu@intel.com
Cc: peterz@infradead.org
Cc: vikas.shivappa@intel.com
Cc: andi.kleen@intel.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1498503368-20173-3-git-send-email-vikas.shivappa@linux.intel.com

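The fix follows the kernel's usual goto-based unwind pattern. A minimal sketch of that pattern (the helper names here are hypothetical, not the actual resctrl code):

    int create_info_dir(void);   /* hypothetical helpers for illustration */
    int do_mount_step(void);
    void free_info_dir(void);

    static int example_mount(void)
    {
        int ret;

        ret = create_info_dir();        /* allocates kn_info */
        if (ret)
            return ret;

        ret = do_mount_step();
        if (ret)
            goto out_info;              /* previously missing: free on failure */

        return 0;

    out_info:
        free_info_dir();                /* releases kn_info */
        return ret;
    }
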
2017-06-30  randstruct: opt-out externally exposed function pointer structs  (Kees Cook)

Some function pointer structures are used externally to the kernel, like the paravirt structures. These should never be randomized, so mark them as such, in preparation for enabling randstruct's automatic selection of all-function-pointer structures.

These markings are verbatim from Brad Spengler/PaX Team's code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code.

Signed-off-by: Kees Cook <keescook@chromium.org>

2017-06-30  randstruct: Mark various structs for randomization  (Kees Cook)

This marks many critical kernel structures for randomization. These are structures that have been targeted in the past in security exploits, or contain function pointers, pointers to function pointer tables, lists, workqueues, ref-counters, credentials, permissions, or are otherwise sensitive.

This initial list was extracted from Brad Spengler/PaX Team's code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code.

Left out of this list is task_struct, which requires special handling and will be covered in a subsequent patch.

Signed-off-by: Kees Cook <keescook@chromium.org>

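Concretely, randstruct is driven by per-struct annotations. A sketch of how marking works (__randomize_layout and __no_randomize_layout are the real kernel annotations; the structs shown are invented for illustration):

    /* Opt a sensitive function-pointer struct in to layout randomization: */
    struct example_ops {
        int  (*open)(void);
        void (*close)(void);
    } __randomize_layout;

    /* Opt out a struct whose layout is visible outside the kernel,
     * e.g. the paravirt tables from the previous patch: */
    struct example_exposed_ops {
        void (*hypercall)(void);
    } __no_randomize_layout;
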
2017-06-30  ARM: Prepare for randomized task_struct  (Arnd Bergmann)

With the new task struct randomization, we can run into a build failure for certain random seeds, which will place fields beyond the allowed immediate size in the assembly:

    arch/arm/kernel/entry-armv.S: Assembler messages:
    arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offsets.h are affected, and I'm changing both of them here to work correctly in all configurations. One more macro has the problem, but is currently unused, so this removes it instead of adding complexity.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[kees: Adjust commit log slightly]
Signed-off-by: Kees Cook <keescook@chromium.org>

2017-06-30  Merge tag 'powerpc-4.12-8' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)

Pull powerpc fixes from Michael Ellerman:

"Hopefully the last two powerpc fixes for 4.12.

The CXL one is larger than I'd usually send at rc7, but it fixes new code this cycle, so better to have it working for the release. It was actually sent a few weeks back but got blocked in testing behind another fix that was causing issues.

We are still tracking one crash in v4.12-rc7, but only one person has reproduced it and the commit identified by bisect doesn't touch any of the relevant code, so I think it's 50/50 whether that commit is actually the problem or it's some code layout / toolchain issue.

Two fixes for code we merged this cycle:

- cxl: Fixes for Coherent Accelerator Interface Architecture 2.0

- Avoid miscompilation w/GCC 4.6.3 - don't inline copy_to/from_user()

Thanks to Al Viro, Larry Finger, Christophe Lombard"

* tag 'powerpc-4.12-8' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/32: Avoid miscompilation w/GCC 4.6.3 - don't inline copy_to/from_user()
  cxl: Fixes for Coherent Accelerator Interface Architecture 2.0

2017-06-30  Merge tag 'iommu-fixes-v4.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu  (Linus Torvalds)

Pull IOMMU fixes from Joerg Roedel:

"Two fixes:

- A fix for AMD IOMMU interrupt remapping code when IRQs are forwarded directly to KVM guests

- Fixed check in the recently merged code to allow tboot with Intel VT-d disabled"

* tag 'iommu-fixes-v4.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/amd: Fix interrupt remapping when disable guest_mode
  iommu/vt-d: Correctly disable Intel IOMMU force on

2017-06-30  ARM: dma-mapping: Remove traces of NOMMU code  (Vladimir Murzin)

DMA operations for the NOMMU case have just been factored out into a separate compilation unit, so don't keep the dead code.

Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>

2017-06-30  ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus  (Vladimir Murzin)

Now we have a dedicated non-cacheable region for consistent DMA operations. However, that region can still be marked as bufferable by the MPU, so it'd be safer to have barriers by default. M-class machines that didn't need them until now also likely won't need them in the future; therefore, we offer this as an option.

Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>

2017-06-30  ARM: NOMMU: Introduce dma operations for noMMU  (Vladimir Murzin)

R/M classes of CPUs can have memory covered by an MPU which in turn might configure RAM as Normal, i.e. bufferable and cacheable. This breaks dma_alloc_coherent() and friends, since data can now get stuck in caches or be buffered.

This patch factors out DMA support for the NOMMU configuration into a separate entity which provides dedicated dma_ops. We have to handle several cases there:

- configurations with MMU/MPU setup
- configurations without MMU/MPU setup
- the special case of M-class, since caches and the MPU there are optional

In general we rely on the default DMA area for coherent allocations and/or per-device memory reserves suitable for coherent DMA, so if such regions are set, coherent allocations come from there.

In case the MMU/MPU was not set up, we fall back to the normal page allocator for DMA memory allocation.

In case we run M-class CPUs, for configurations without cache support (like Cortex-M3/M4) DMA operations are forced to be coherent and wired up with dma-noop (this decision is made based on the cacheid global variable); however, if caches are detected there and no DMA coherent region is given (either default or per-device), DMA is disallowed even if the MPU is not set up - this is because M-class CPUs implement a system memory map which defines part of the address space as Normal memory.

Reported-by: Alexandre Torgue <alexandre.torgue@st.com>
Reported-by: Andras Szemzo <sza@esh.hu>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
[hch: removed the dma_supported() implementation that isn't required anymore]
Signed-off-by: Christoph Hellwig <hch@lst.de>

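The general shape of such a dedicated ops table, as a rough sketch (callback names follow struct dma_map_ops; the allocation strategy is a simplification of what the commit describes, and the ops structure name is invented):

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static void *nommu_dma_alloc(struct device *dev, size_t size,
                                 dma_addr_t *dma_handle, gfp_t gfp,
                                 unsigned long attrs)
    {
        void *ret;

        /* First try a per-device coherent region, if one was declared. */
        if (dma_alloc_from_coherent(dev, size, dma_handle, &ret))
            return ret;

        /* Otherwise fall back to the normal page allocator - per the
         * commit message this is only valid when no MMU/MPU was set up. */
        ret = (void *)__get_free_pages(gfp, get_order(size));
        if (ret)
            *dma_handle = virt_to_phys(ret);
        return ret;
    }

    static const struct dma_map_ops nommu_dma_ops_sketch = {
        .alloc = nommu_dma_alloc,
        /* .free, .mmap, .map_page, .sync_* ... */
    };
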
2017-06-30  drivers: dma-mapping: allow dma_common_mmap() for NOMMU  (Vladimir Murzin)

Currently, the internals of dma_common_mmap() are compiled out if the build is done for either NOMMU or a target which explicitly says it does not have/want coherent DMA mmap. It turned out that dma_common_mmap() can be handy in a NOMMU setup (at least for ARM).

This patch converts the existing NOMMU targets to use ARCH_NO_COHERENT_DMA_MMAP, so that when the CONFIG_MMU check is gone from dma_common_mmap() their behaviour stays unchanged.

ARM is not converted to ARCH_NO_COHERENT_DMA_MMAP because it 1) already has an mmap callback which can handle (to some extent) NOMMU, and 2) already defines a dummy pgprot_noncached() for NOMMU builds.

c6x and frv stay untouched since they already have ARCH_NO_COHERENT_DMA_MMAP.

Cc: Steven Miao <realmz6@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>

2017-06-30  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)

A set of overlapping changes in macvlan and the rocker driver, nothing serious.

Signed-off-by: David S. Miller <davem@davemloft.net>

2017-06-30  x86/PCI: Avoid AMD SB7xx EHCI USB wakeup defect  (Kai-Heng Feng)

On an AMD Carrizo laptop, when EHCI runtime PM is enabled, EHCI ports do not assert PME# for device plug/unplug events while in D3.

As Alan Stern points out [1], the PME signal is not enabled when the controller is in D3, therefore it's not being woken up when new devices get plugged in. Testing shows the PME signal works when the EHCI power state is D2.

Clear the PCI_PM_CAP_PME_D3 and PCI_PM_CAP_PME_D3cold bits in dev->pme_support to indicate the device will not assert PME# from those states.

[1] http://lkml.kernel.org/r/Pine.LNX.4.44L0.1706121010010.2092-100000@iolanthe.rowland.org

Link: https://bugzilla.kernel.org/show_bug.cgi?id=196091
Link: https://support.amd.com/TechDocs/46837.pdf (Section 23)
Link: https://support.amd.com/TechDocs/42413.pdf (Appendix A2)
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
[bhelgaas: changelog, add parens in quirk]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

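Mechanically, a quirk like this boils down to masking bits out of dev->pme_support via a PCI fixup. A hedged sketch of that pattern (the device ID below is a placeholder, not the actual SB7xx EHCI ID):

    #include <linux/pci.h>

    static void ehci_pme_quirk_sketch(struct pci_dev *dev)
    {
        /* Tell the PM core this device cannot signal PME# from
         * D3hot/D3cold; runtime PM will then stop at D2 instead. */
        dev->pme_support &= ~((PCI_PM_CAP_PME_D3 | PCI_PM_CAP_PME_D3cold)
                              >> PCI_PM_CAP_PME_SHIFT);
    }
    DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x1234 /* placeholder */,
                            ehci_pme_quirk_sketch);
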
2017-06-30  arm64: fix endianness annotation for 'struct jit_ctx' and friends  (Luc Van Oostenryck)

struct jit_ctx::image is used to store a pointer to the jitted instructions, which are always little-endian. These instructions are thus correctly converted from native order to little-endian before being stored, but the pointer 'image' is declared as if it pointed to native-order values.

Fix this by declaring the field as __le32* instead of u32*. Same for the pointer used in jit_fill_hole() to initialize the image.

Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

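In sparse-annotation terms, the change looks roughly like this (struct and function names simplified for illustration):

    #include <linux/types.h>
    #include <asm/byteorder.h>

    struct jit_ctx_sketch {
        __le32 *image;          /* was: u32 *image */
    };

    static void emit_sketch(struct jit_ctx_sketch *ctx, int idx, u32 insn)
    {
        /* The stored value is explicitly converted to little-endian,
         * so the pointee type should say so too - sparse can then
         * catch any unconverted store. */
        ctx->image[idx] = cpu_to_le32(insn);
    }
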
2017-06-30  arm64: cpuinfo: constify attribute_group structures  (Arvind Yadav)

attribute_groups are not supposed to change at runtime. All functions working with attribute_groups provided by <linux/sysfs.h> work with const attribute_group. So mark the non-const structs as const.

Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

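The pattern, in brief (struct and attribute names invented for illustration):

    #include <linux/sysfs.h>

    static struct attribute *cpuregs_attrs_sketch[] = {
        /* &dev_attr_foo.attr, ... */
        NULL,
    };

    /* 'const' added: sysfs only ever reads the group definition,
     * so it can live in read-only data. */
    static const struct attribute_group cpuregs_group_sketch = {
        .attrs = cpuregs_attrs_sketch,
    };
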
2017-06-30  KVM: x86: remove ignored type attribute  (Nick Desaulniers)

The macro insn_fetch marks the 'type' argument as having a specified alignment. Type attributes can only be applied to structs, unions, or enums, but insn_fetch is only ever invoked with integral types, so Clang produces 19 -Wignored-attributes warnings for this source file.

Signed-off-by: Nick Desaulniers <nick.desaulniers@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2017-06-30  Merge tag 'kvmarm-for-4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini)

KVM/ARM updates for 4.13

- vcpu request overhaul
- allow timer and PMU to have their interrupt number selected from userspace
- workaround for Cavium erratum 30115
- handling of memory poisoning
- the usual crop of fixes and cleanups

Conflicts:
    arch/s390/include/asm/kvm_host.h

2017-06-30  ARM: OMAP4: Fix legacy code clean-up regression  (Tony Lindgren)

Commit 2a26d31b1bae ("ARM: OMAP2+: Remove unused legacy code for PRM") removed PRM platform init code that I thought was unused. It turns out omap4 still needs this code, so let's do a partial revert to add it back.

I probably missed this earlier as the comments used to say "OMAP4+ is DT only now" for the !of_have_populated_dt() early exit, and I missed the negative test. Let's not add those lines back as they are confusing and no longer needed now that we only boot in device tree mode.

Without this code, things can mysteriously fail; for example, an LM75 I2C temperature sensor can stop working because the PRM interrupts won't work.

Fixes: 2a26d31b1bae ("ARM: OMAP2+: Remove unused legacy code for PRM")
Signed-off-by: Tony Lindgren <tony@atomide.com>

2017-06-30  objtool, x86: Add several functions and files to the objtool whitelist  (Josh Poimboeuf)

In preparation for an objtool rewrite which will have broader checks, whitelist functions and files which cause problems because they do unusual things with the stack.

These whitelists serve as a TODO list for which functions and files don't yet have undwarf unwinder coverage. Eventually most of the whitelists can be removed in favor of manual CFI hint annotations or objtool improvements.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: live-patching@vger.kernel.org
Link: http://lkml.kernel.org/r/7f934a5d707a574bda33ea282e9478e627fb1829.1498659915.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-06-30  x86/mm: Delete a big outdated comment about TLB flushing  (Andy Lutomirski)

The comment describes the old explicit IPI-based flush logic, which is long gone.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/55e44997e56086528140c5180f8337dc53fb7ffc.1498751203.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-06-30  x86/mm: Don't reenter flush_tlb_func_common()  (Andy Lutomirski)

It was historically possible to have two concurrent TLB flushes targeting the same CPU: one initiated locally and one initiated remotely. This can now cause an OOPS in leave_mm() at arch/x86/mm/tlb.c:47:

    if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
        BUG();

with this call trace:

    flush_tlb_func_local arch/x86/mm/tlb.c:239 [inline]
    flush_tlb_mm_range+0x26d/0x370 arch/x86/mm/tlb.c:317

Without reentrancy, this OOPS is impossible: leave_mm() is only called if we're not in TLBSTATE_OK, but then we're unexpectedly in TLBSTATE_OK in leave_mm(). This can be caused by flush_tlb_func_remote() happening between the two checks and calling leave_mm(), resulting in two consecutive leave_mm() calls on the same CPU with no intervening switch_mm() calls.

We never saw this OOPS before because the old leave_mm() implementation didn't put us back in TLBSTATE_OK, so the assertion didn't fire.

Nadav noticed the reentrancy issue in a different context, but neither of us realized that it caused a problem yet.

Reported-by: Levin, Alexander (Sasha Levin) <alexander.levin@verizon.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Nadav Amit <nadav.amit@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: linux-mm@kvack.org
Fixes: 3d28ebceaffa ("x86/mm: Rework lazy TLB to track the actual loaded mm")
Link: http://lkml.kernel.org/r/855acf733268d521c9f2e191faee2dcc23a29729.1498751203.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

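A stripped-down illustration of the race described above, using the state names from the commit message (this is a sketch of the problem, not the fix itself):

    static void flush_tlb_func_common_sketch(void)
    {
        if (this_cpu_read(cpu_tlbstate.state) != TLBSTATE_OK) {
            /* Lazy TLB mode: abandon the mm.
             *
             * But if a remote flush IPI interleaves right before this
             * call and itself calls leave_mm(), the new leave_mm()
             * implementation puts this CPU back in TLBSTATE_OK, and
             * this second leave_mm() then trips the BUG() quoted
             * above. */
            leave_mm(smp_processor_id());
            return;
        }
        /* ... perform the actual local flush ... */
    }
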
2017-06-30  x86/uaccess: Optimize copy_user_enhanced_fast_string() for short strings  (Paolo Abeni)

According to the Intel datasheet, the REP MOVSB instruction exposes a pretty heavy setup cost (50 ticks), which hurts short string copy operations. This change tries to avoid this cost by calling the explicit loop available in the unrolled code for strings shorter than 64 bytes.

The 64 byte cutoff value is arbitrary from the code logic point of view - it has been selected based on measurements, as the largest value that still ensures a measurable gain.

Micro benchmarks of the __copy_from_user() function with lengths in the [0-63] range show this performance gain (the shorter the string, the larger the gain):

- in the [55%-4%] range on Intel Xeon(R) CPU E5-2690 v4
- in the [72%-9%] range on Intel Core i7-4810MQ

Other tested CPUs - namely Intel Atom S1260 and AMD Opteron 8216 - show no difference, because they do not expose the ERMS feature bit.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/4533a1d101fd460f80e21329a34928fad521c1d4.1498744345.git.pabeni@redhat.com
[ Clarified the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>

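The real change lives in x86 assembly, but the dispatch it adds looks like this when sketched in C (both helper functions are hypothetical stand-ins):

    unsigned long unrolled_copy_sketch(void *to, const void *from,
                                       unsigned long len);
    unsigned long rep_movsb_sketch(void *to, const void *from,
                                   unsigned long len);

    #define ERMS_CUTOFF 64   /* chosen by measurement, see above */

    static unsigned long copy_user_sketch(void *to, const void *from,
                                          unsigned long len)
    {
        if (len < ERMS_CUTOFF)
            /* Short copies: the explicit move loop wins because
             * REP MOVSB pays its ~50 cycle setup cost up front. */
            return unrolled_copy_sketch(to, from, len);

        /* Long copies: ERMS REP MOVSB amortizes its setup cost. */
        return rep_movsb_sketch(to, from, len);
    }
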
2017-06-30  perf/x86/intel: Constify the 'lbr_desc[]' array and make a function static  (Colin Ian King)

A few minor clean-ups: constify the lbr_desc[] array and make the local function lbr_from_signext_quirk_rd() static to fix a sparse warning:

    "symbol 'lbr_from_signext_quirk_rd' was not declared. Should it be static?"

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-janitors@vger.kernel.org
Link: http://lkml.kernel.org/r/20170629091406.9870-1-colin.king@canonical.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-06-30  x86/KASLR: Fix detection of 32/64 bit bootloaders for 5-level paging  (Kirill A. Shutemov)

KASLR uses a hack to detect whether we booted via startup_32() or startup_64(): it checks what is loaded into cr3 and compares it to _pgtables. _pgtables is the array of page tables where early code allocates page tables from.

KASLR expects cr3 to point to _pgtables if we booted via startup_32(), but that's not true if we booted with 5-level paging enabled. In this case the top-level page table is allocated separately and only the first p4d page table is allocated from the array.

Let's modify the check to cover both 4- and 5-level paging cases.

The patch also renames 'level4p' to 'top_level_pgt' as it can now hold the page table for the 4th or 5th level, depending on the configuration.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170628121730.43079-1-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-06-30  x86/boot/KASLR: Fix kexec crash due to 'virt_addr' calculation bug  (Baoquan He)

Kernel text KASLR is separated into physical address and virtual address randomization. For virtual address randomization, we only randomize to get an offset between 16M and KERNEL_IMAGE_SIZE. So the initial value of 'virt_addr' should be LOAD_PHYSICAL_ADDR, not the original kernel loading address 'output'.

The bug will cause a kernel boot failure if the kernel is loaded at a position different from the address, 16M, which is decided at compile time. Kexec/kdump is one such practical case.

To fix it, just assign LOAD_PHYSICAL_ADDR to virt_addr as the initial value.

Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 8391c73 ("x86/KASLR: Randomize virtual address separately")
Link: http://lkml.kernel.org/r/1498567146-11990-3-git-send-email-bhe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

2017-06-30  x86/boot/KASLR: Add checking for the offset of kernel virtual address randomization  (Baoquan He)

For kernel text KASLR, the virtual address is confined to an area of 1G, [0xffffffff80000000, 0xffffffffc0000000). For the implementation of virtual address randomization, we only randomize to get an offset between 16M and 1G, then add this offset to the starting address, 0xffffffff80000000. Here 16M is the offset which is decided at linking stage. So the value of the local variable 'virt_addr', which represents the offset, plus the kernel output size cannot exceed KERNEL_IMAGE_SIZE.

Add a debug check for the offset. If out of bounds, print an error message and hang there.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1498567146-11990-2-git-send-email-bhe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

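The check described amounts to something like the following (a sketch; the name of the size variable and the exact error string are assumptions, not the actual patch):

    /* virt_addr holds the randomized offset from the 1G area's base;
     * it plus the decompressed kernel size must stay inside the
     * mappable image area. */
    if (virt_addr + output_size > KERNEL_IMAGE_SIZE)
        error("Virtual address offset out of bounds");  /* error() hangs */
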
2017-06-29  ARM: OMAP2+: Fix omap3 prm shared irq  (Tony Lindgren)

Shared interrupts with IRQ_NOAUTOEN got a warning added with commit 04c848d39879 ("genirq: Warn when IRQ_NOAUTOEN is used with shared interrupts").

Let's just drop the IRQ_NOAUTOEN use for the omap3 PRM shared interrupt as it does not seem to cause any other issues based on my testing. We have moved a lot of the code to initialize later, and whatever problems the legacy booting had seem to be gone now with the pinctrl driver and device tree based booting.

Otherwise we will get:

    WARNING: CPU: 0 PID: 1 at kernel/irq/manage.c:1348 __setup_irq+0x5d0/0x64c
    [<c01b0260>] (__setup_irq) from [<c01b0480>] (request_threaded_irq+0xdc/0x188)
    [<c01b0480>] (request_threaded_irq) from [<c051c780>] (pcs_probe+0x6ec/0x8a4)
    [<c051c780>] (pcs_probe) from [<c05a84b8>] (platform_drv_probe+0x50/0xb0)
    [<c05a84b8>] (platform_drv_probe) from [<c05a6288>] (driver_probe_device+0x33c/0x478)

Note that we also need to remove the related enable_irq() to avoid getting the following:

    WARNING: CPU: 0 PID: 1 at kernel/irq/manage.c:529 enable_irq+0x34/0x70
    [<c01afa04>] (enable_irq) from [<c0c0f1fc>] (omap3_pm_init+0x118/0x3f8)
    [<c0c0f1fc>] (omap3_pm_init) from [<c0c0ae7c>] (am35xx_init_late+0x10/0x18)

Cc: Kevin Hilman <khilman@baylibre.com>
Cc: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>

2017-06-30  MIPS: Avoid accidental raw backtrace  (James Hogan)

Since commit 81a76d7119f6 ("MIPS: Avoid using unwind_stack() with usermode") show_backtrace() invokes the raw backtracer when cp0_status & ST0_KSU indicates user mode, to fix issues on EVA kernels where user and kernel address spaces overlap.

However this is used by show_stack(), which creates its own pt_regs on the stack and leaves cp0_status uninitialised in most of the code paths. This results in non-deterministic use of the raw backtracer depending on the previous stack content.

show_stack() deals exclusively with kernel mode stacks anyway, so explicitly initialise regs.cp0_status to KSU_KERNEL (i.e. 0) to ensure we get a useful backtrace.

Fixes: 81a76d7119f6 ("MIPS: Avoid using unwind_stack() with usermode")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/16656/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

2017-06-30  MIPS: Perform post-DMA cache flushes on systems with MAARs  (Paul Burton)

Recent CPUs from Imagination Technologies such as the I6400 or P6600 are able to speculatively fetch data from memory into caches. This means that if used in a system with non-coherent DMA they require that caches be invalidated after a device performs DMA, and before the CPU reads the DMA'd data, in order to ensure that stale values weren't speculatively prefetched.

Such CPUs also introduced Memory Accessibility Attribute Registers (MAARs) in order to control the regions in which they are allowed to speculate. Thus we can use the presence of MAARs as a good indication that the CPU requires the above cache maintenance. Use the presence of MAARs to determine the result of cpu_needs_post_dma_flush() in the default case, in order to handle these recent CPUs correctly.

Note that the return type of cpu_needs_post_dma_flush() is changed to bool, such that it's clearer what's happening when cpu_has_maar is cast to bool for the return value. If this patch were backported to a pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is currently 1ull<<30, so when truncated to an int it gives a non-zero value anyway, but even so the implicit conversion from long long int to bool makes it clearer to understand what will happen than the implicit conversion from long long int to int would. The bool return type also fits this usage better semantically, so seems like an all-round win.

Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting the return type change.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Ed Blake <ed.blake@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

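Simplified, the default case of the changed helper looks roughly like this (a sketch; the real version also keeps the existing special cases for particular CPU types):

    static inline bool cpu_needs_post_dma_flush(struct device *dev)
    {
        if (plat_device_is_coherent(dev))
            return false;   /* coherent DMA: no cache maintenance needed */

        /* CPUs that can speculatively fill their caches implement MAARs,
         * so use their presence as the indicator - and note the bool
         * return type, so the 1ull<<30 feature bit can't be truncated. */
        return cpu_has_maar;
    }
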
2017-06-30  MIPS: Fix IRQ tracing & lockdep when rescheduling  (Paul Burton)

When the scheduler sets TIF_NEED_RESCHED and we call into the scheduler from arch/mips/kernel/entry.S, we disable interrupts. This is true regardless of whether we reach work_resched from syscall_exit_work, resume_userspace or by looping after calling schedule(). Although we disable interrupts in these paths we don't call trace_hardirqs_off() before calling into C code which may acquire locks, and we therefore leave lockdep with an inconsistent view of whether interrupts are disabled or not when CONFIG_PROVE_LOCKING & CONFIG_DEBUG_LOCKDEP are both enabled.

Without tracing this interrupt state, lockdep will print warnings such as the following once a task returns from a syscall via syscall_exit_partial with TIF_NEED_RESCHED set:

    [   49.927678] ------------[ cut here ]------------
    [   49.934445] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:3687 check_flags.part.41+0x1dc/0x1e8
    [   49.946031] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
    [   49.946355] CPU: 0 PID: 1 Comm: init Not tainted 4.10.0-00439-gc9fd5d362289-dirty #197
    [   49.963505] Stack : 0000000000000000 ffffffff81bb5d6a 0000000000000006 ffffffff801ce9c4
    [   49.974431]         0000000000000000 0000000000000000 0000000000000000 000000000000004a
    [   49.985300]         ffffffff80b7e487 ffffffff80a24498 a8000000ff160000 ffffffff80ede8b8
    [   49.996194]         0000000000000001 0000000000000000 0000000000000000 0000000077c8030c
    [   50.007063]         000000007fd8a510 ffffffff801cd45c 0000000000000000 a8000000ff127c88
    [   50.017945]         0000000000000000 ffffffff801cf928 0000000000000001 ffffffff80a24498
    [   50.028827]         0000000000000000 0000000000000001 0000000000000000 0000000000000000
    [   50.039688]         0000000000000000 a8000000ff127bd0 0000000000000000 ffffffff805509bc
    [   50.050575]         00000000140084e0 0000000000000000 0000000000000000 0000000000040a00
    [   50.061448]         0000000000000000 ffffffff8010e1b0 0000000000000000 ffffffff805509bc
    [   50.072327]         ...
    [   50.076087] Call Trace:
    [   50.079869] [<ffffffff8010e1b0>] show_stack+0x80/0xa8
    [   50.086577] [<ffffffff805509bc>] dump_stack+0x10c/0x190
    [   50.093498] [<ffffffff8015dde0>] __warn+0xf0/0x108
    [   50.099889] [<ffffffff8015de34>] warn_slowpath_fmt+0x3c/0x48
    [   50.107241] [<ffffffff801c15b4>] check_flags.part.41+0x1dc/0x1e8
    [   50.114961] [<ffffffff801c239c>] lock_is_held_type+0x8c/0xb0
    [   50.122291] [<ffffffff809461b8>] __schedule+0x8c0/0x10f8
    [   50.129221] [<ffffffff80946a60>] schedule+0x30/0x98
    [   50.135659] [<ffffffff80106278>] work_resched+0x8/0x34
    [   50.142397] ---[ end trace 0cb4f6ef5b99fe21 ]---
    [   50.148405] possible reason: unannotated irqs-off.
    [   50.154600] irq event stamp: 400463
    [   50.159566] hardirqs last  enabled at (400463): [<ffffffff8094edc8>] _raw_spin_unlock_irqrestore+0x40/0xa8
    [   50.171981] hardirqs last disabled at (400462): [<ffffffff8094eb98>] _raw_spin_lock_irqsave+0x30/0xb0
    [   50.183897] softirqs last  enabled at (400450): [<ffffffff8016580c>] __do_softirq+0x4ac/0x6a8
    [   50.195015] softirqs last disabled at (400425): [<ffffffff80165e78>] irq_exit+0x110/0x128

Fix this by using the TRACE_IRQS_OFF macro to call trace_hardirqs_off() when CONFIG_TRACE_IRQFLAGS is enabled. This is done before invoking schedule() following the work_resched label because:

1) Interrupts are disabled regardless of the path we take to reach work_resched & schedule().

2) Performing the tracing here avoids the need to do it in paths which disable interrupts but don't call out to C code before hitting a path which uses the RESTORE_SOME macro that will call trace_hardirqs_on() or trace_hardirqs_off() as appropriate.

We call trace_hardirqs_on() using the TRACE_IRQS_ON macro before calling syscall_trace_leave() for similar reasons, ensuring that lockdep has a consistent view of state after we re-enable interrupts.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/15385/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

2017-06-30  MIPS: pm-cps: Drop manual cache-line alignment of ready_count  (Paul Burton)

We allocate memory for a ready_count variable per-CPU, which is accessed via a cached non-coherent TLB mapping to perform synchronisation between threads within the core using LL/SC instructions. In order to ensure that the variable is contained within its own data cache line we allocate 2 lines worth of memory & align the resulting pointer to a line boundary. This is however unnecessary, since kmalloc is guaranteed to return memory which is at least cache-line aligned (see ARCH_DMA_MINALIGN). Stop the redundant manual alignment.

Besides cleaning up the code & avoiding needless work, this has the side effect of avoiding an arithmetic error found by Bryan on 64 bit systems due to the 32 bit size of the former dlinesz. This led the ready_count variable to have its upper 32b cleared erroneously for MIPS64 kernels, causing problems when ready_count was later used on MIPS64 via cpuidle.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 3179d37ee1ed ("MIPS: pm-cps: add PM state entry code for CPS systems")
Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v3.16+
Patchwork: https://patchwork.linux-mips.org/patch/15383/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

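The removed pattern versus the simplified one, roughly (illustrative of the pattern, not the exact pm-cps code):

    /* Before: over-allocate and align to a cache line by hand. */
    unsigned int dlinesz = cpu_dcache_line_size();   /* note: 32-bit */
    void *buf = kmalloc(2 * dlinesz, GFP_KERNEL);
    u32 *ready_count = (u32 *)(((unsigned long)buf + dlinesz - 1)
                               & ~(dlinesz - 1));
    /* On 64-bit, ~(dlinesz - 1) is a 32-bit mask that zero-extends,
     * so the AND clears the pointer's upper 32 bits - the arithmetic
     * error described above. */

    /* After: kmalloc() already returns memory aligned to at least
     * ARCH_DMA_MINALIGN (a cache line), so just allocate the variable. */
    u32 *ready = kmalloc(sizeof(*ready), GFP_KERNEL);
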
2017-06-30  tile: remove unneeded extra-y in Makefile  (Masahiro Yamada)

This extra-y is unneeded because vdso.lds is generated according to the dependency.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  kbuild: thin archives make default for all archs  (Nicholas Piggin)

Make thin archive builds the default, but keep the config option to allow exemptions if any breakage can't be quickly solved.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  x86/um: thin archives build fix  (Nicholas Piggin)

The linker does not like vdso-syms.lds in input archive files. Make it an extra-y instead.

Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: user-mode-linux-devel@lists.sourceforge.net
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  tile: thin archives fix linking  (Nicholas Piggin)

The VDSO symbols can't be linked into built-in.o when building with thin archives, so change this to linking a new object file that is included into the built-in.o.

Cc: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  ia64: thin archives fix linking  (Nicholas Piggin)

The VDSO symbols can't be linked into built-in.o when building with thin archives, so change this to linking a new object file that is included into the built-in.o.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  sh: thin archives fix linking  (Nicholas Piggin)

The VDSO symbols can't be linked into built-in.o when building with thin archives, so change this to linking a new object file that is included into the built-in.o.

Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  ia64: remove unneeded extra-y in Makefile.gate  (Masahiro Yamada)

All the files listed in "extra-y" are generated according to the dependency. They are still needed in "targets" to include .*.cmd files for incremental building.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  tile: fix dependency and .*.cmd inclusion for incremental build  (Masahiro Yamada)

Build targets using if_changed(_rule) must depend on FORCE so that they are evaluated every time.

In order to include .*.cmd files correctly, build targets added to "targets" must not be prefixed with $(obj)/ because that is done by scripts/Makefile.lib.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-30  sparc64: Use indirect calls in hamming weight stubs  (David S. Miller)

Otherwise, depending upon link order, the branch relocation limits could be exceeded.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-06-29  ARM: 8685/1: ensure memblock-limit is pmd-aligned  (Doug Berger)

The pmd containing memblock_limit is cleared by prepare_page_table(), which creates the opportunity for early_alloc() to allocate unmapped memory if memblock_limit is not pmd aligned, causing a boot-time hang.

Commit 965278dcb8ab ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM") attempted to resolve this problem, but there is a path through the adjust_lowmem_bounds() routine where, if all memory regions start and end on pmd-aligned addresses, the memblock_limit will be set to arm_lowmem_limit.

Since arm_lowmem_limit can be affected by the vmalloc early parameter, the value of arm_lowmem_limit may not be pmd-aligned. This commit corrects this oversight such that memblock_limit is always rounded down to pmd-alignment.

Fixes: 965278dcb8ab ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

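The essence of the fix, schematically (a sketch based on the description; the surrounding logic of adjust_lowmem_bounds() is elided):

    /* Whatever value memblock_limit ended up with - including
     * arm_lowmem_limit, which the vmalloc= parameter can make
     * non-pmd-aligned - force it down to a PMD boundary before
     * memblock starts allocating against it. */
    if (!memblock_limit)
        memblock_limit = arm_lowmem_limit;
    memblock_limit = round_down(memblock_limit, PMD_SIZE);

    memblock_set_current_limit(memblock_limit);
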
2017-06-29  x86/ftrace: Exclude functions in head64.c from function-tracing  (Kirill A. Shutemov)

A recent commit moved most of the early boot logic from startup_64(), written in assembly, to __startup_64(), written in C.

Fengguang reported breakage due to the change. It was tracked down to CONFIG_FUNCTION_TRACER being enabled.

Tracing this function is not possible because it's invoked from the earliest boot stage, before the relocation fixups have been done. It is the function doing the relocation. Exclude it from being built with tracer stubs.

Fixes: c88d71508e36 ("x86/boot/64: Rewrite startup_64() in C")
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: lkp@01.org
Link: http://lkml.kernel.org/r/20170627115948.17938-1-kirill.shutemov@linux.intel.com

2017-06-29  perf/x86/intel/uncore: Fix wrong box pointer check  (Kan Liang)

We should not init a NULL box; it will cause a system crash. The issue looks like it was caused by a typo.

This was not noticed because there is normally no NULL box. Also, most boxes are enabled by default, so the init code is not critical.

Fixes: fff4b87e594a ("perf/x86/intel/uncore: Make package handling more robust")
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20170629190926.2456-1-kan.liang@intel.com

2017-06-29  arm64: ptrace: Fix incorrect get_user() use in compat_vfp_set()  (Dave Martin)

Now that compat_vfp_get() uses the regset API to copy the FPSCR value out to userspace, compat_vfp_set() looks inconsistent. In particular, compat_vfp_set() will fail if called with kbuf != NULL && ubuf == NULL (which is valid usage according to the regset API).

This patch fixes compat_vfp_set() to use user_regset_copyin(), similarly to compat_vfp_get().

This also squashes a sparse warning triggered by the cast that drops __user when calling get_user().

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

2017-06-29  arm64: ptrace: Remove redundant overrun check from compat_vfp_set()  (Dave Martin)

compat_vfp_set() checks for userspace trying to write an excessive amount of data to the regset. However this check is conspicuous for its absence from every other _set() in the arm64 ptrace implementation. In fact, the core ptrace_regset() already clamps userspace's iov_len to the regset size before the individual regset .{get,set}() methods get called.

This patch removes the redundant check.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

2017-06-29  arm64: ptrace: Avoid setting compat FP[SC]R to garbage if get_user fails  (Dave Martin)

If get_user() fails when reading the new FPSCR value from userspace in compat_vfp_set(), then garbage* will be written to the task's FPSR and FPCR registers.

This patch prevents this by checking the return from get_user() first.

[*] Actually, zero, due to the behaviour of get_user() on error, but that's still not what userspace expects.

Fixes: 478fcb2cdb23 ("arm64: Debugging support")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

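The general shape of the fix (a sketch; the mask macro names below are assumptions standing in for however the real code splits FPSCR into its status and control halves):

    u32 fpscr;
    int ret;

    ret = get_user(fpscr, (u32 __user *)ubuf);
    if (ret)
        return ret;     /* don't touch task state on a faulted read */

    /* Only a successfully-read value reaches the task's registers. */
    uregs->fpsr = fpscr & FPSCR_STATUS_MASK;   /* assumed mask name */
    uregs->fpcr = fpscr & FPSCR_CONTROL_MASK;  /* assumed mask name */
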
2017-06-29  arm: sun8i: orangepi-2: use internal phy-mode  (LABBE Corentin)

Since the PHY used is internal, simply set phy-mode as internal.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-06-29  arm: sun8i: nanopi-neo: use internal phy-mode  (LABBE Corentin)

Since the PHY used is internal, simply set phy-mode as internal.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-06-29  arm: sun8i: orangepi-one: use internal phy-mode  (LABBE Corentin)

Since the PHY used is internal, simply set phy-mode as internal.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

2017-06-29  arm: sun8i: orangepi-zero: use internal phy-mode  (LABBE Corentin)

Since the PHY used is internal, simply set phy-mode as internal.

Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>