path: root/arch
2019-08-29  arm64: dts: ti: k3-j721e-main: Add hwspinlock node  (Suman Anna)
The Main NavSS block on J721E SoCs contains a HwSpinlock IP instance that is the same as the IP on AM65x SoCs and similar to the IP on some OMAP SoCs. Add the DT node for this on J721E SoCs. The node is present within the Main NavSS block, and is added as a child node under the cbass_main_navss interconnect node. Signed-off-by: Suman Anna <s-anna@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: ti: k3-am65-main: Add hwspinlock node  (Suman Anna)
The Main NavSS block on AM65x SoCs contains a HwSpinlock IP instance that is similar to the IP on some OMAP SoCs. Add the DT node for this on AM65x SoCs. The node is present within the NavSS block, and is added as a child node under the cbass_main_navss interconnect node. Signed-off-by: Suman Anna <s-anna@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: k3-j721e: Add gpio-keys on common processor board  (Nikhil Devshatwar)
The common processor board for the K3 J721E platform has two push buttons, SW10 and SW11. Add a gpio-keys device node to model them as input keys in Linux. Add the required pinmux nodes to set the GPIO pins as input. Signed-off-by: Nikhil Devshatwar <nikhil.nd@ti.com> Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Reviewed-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: ti: k3-j721e-common-proc-board: Disable unused gpio modules  (Lokesh Vutla)
There are 10 gpio instances inside the SoC, arranged in 3 groups as below:
- Group1: main_gpio0, main_gpio2, main_gpio4, main_gpio6
- Group2: main_gpio1, main_gpio3, main_gpio5, main_gpio7
- Group3: wkup_gpio0, wkup_gpio1
Only one instance in each group can be used at a time. So use main_gpio0, main_gpio1 and wkup_gpio0 for the current Linux context and mark the other gpio nodes as disabled. Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Reviewed-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: ti: k3-j721e: Add gpio nodes in wakeup domain  (Lokesh Vutla)
Similar to the gpio groups in the main domain, there is one gpio group in the wakeup domain with 2 module instances in it. This gpio group pins out 84 lines (6 banks). Add DT nodes for these 2 gpio module instances. Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Reviewed-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: ti: k3-j721e: Add gpio nodes in main domain  (Lokesh Vutla)
There are 8 instances of gpio modules in the main domain, divided into 2 groups:
- Group1: gpio0, gpio2, gpio4, gpio6
- Group2: gpio1, gpio3, gpio5, gpio7
Groups are created to provide protection between two different processor virtual worlds. Each group pins out a fixed number of gpio lines, and every module in a group has the same lines pinned out. There is a top-level mux that selects, for each pin coming out of a group, which module instance drives it; exactly one module can be selected to control the corresponding pin. This muxing is controlled via the pad mux configuration registers. Group1 pins out 128 lines (8 banks) and Group2 pins out 36 lines (2 banks). Add DT nodes for each module instance in the main domain. Users should make sure that the correct gpio instance is selected in their pad configuration. Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Reviewed-by: Keerthy <j-keerthy@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: ti: k3-j721e: Update the power domain cells  (Lokesh Vutla)
Update the power-domain cells to 2 and mark all devices as exclusive. Main UART 0 is the debug console for the processor boards and is used simultaneously by different software entities like u-boot, atf and linux. So just mark main_uart0 as a shared device for the common processor board. Reviewed-by: Nishanth Menon <nm@ti.com> Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  arm64: dts: ti: k3-am654: Update the power domain cells  (Lokesh Vutla)
Update the power-domain cells to 2 and mark all devices as exclusive. Main UART 0 is the debug console for the base boards and is used by different software entities like u-boot, atf and linux. So just mark main_uart0 as a shared device for the base board. Reviewed-by: Nishanth Menon <nm@ti.com> Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com> Signed-off-by: Tero Kristo <t-kristo@ti.com>
2019-08-29  powerpc/of/pci: Rewrite pci_parse_of_flags  (Alexey Kardashevskiy)
The existing code uses a bunch of hardcoded values from the PCI Bus Binding to IEEE Std 1275 spec, and it does so in a quite non-obvious way. Define the fields of cell #0 of the "reg" property of a PCI device and use them for parsing. This should cause no behavioral change. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> [mpe: Unsplit some 80/81 char lines, space the code with some newlines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190829084417.71873-1-aik@ozlabs.ru
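For illustration, a standalone sketch of decoding cell #0 ("phys.hi") of a PCI device's "reg" property per the PCI Bus Binding to IEEE Std 1275, which is the value the rewritten parser works from; the struct, helper name and example value below are invented for the sketch and are not the kernel's implementation:

    #include <stdint.h>
    #include <stdio.h>

    struct pci_reg_hi {
        unsigned relocatable, prefetchable, aliased;
        unsigned space;                        /* 0=config, 1=I/O, 2=mem32, 3=mem64 */
        unsigned bus, dev, fn, reg;
    };

    /* phys.hi layout: npt000ss bbbbbbbb dddddfff rrrrrrrr */
    static struct pci_reg_hi decode_phys_hi(uint32_t hi)
    {
        struct pci_reg_hi r;
        r.relocatable  = !((hi >> 31) & 1);    /* n bit: 0 means relocatable */
        r.prefetchable = (hi >> 30) & 1;       /* p bit */
        r.aliased      = (hi >> 29) & 1;       /* t bit */
        r.space        = (hi >> 24) & 3;       /* ss bits */
        r.bus          = (hi >> 16) & 0xff;
        r.dev          = (hi >> 11) & 0x1f;
        r.fn           = (hi >> 8)  & 0x7;
        r.reg          = hi & 0xff;
        return r;
    }

    int main(void)
    {
        struct pci_reg_hi r = decode_phys_hi(0x02000810); /* mem32 BAR 0x10 at 00:01.0 */
        printf("space=%u bus=%02x dev=%02x fn=%u reg=0x%02x\n",
               r.space, r.bus, r.dev, r.fn, r.reg);
        return 0;
    }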
2019-08-29  ARM: 8890/1: l2x0: add marvell,ecc-enable property for aurora  (Chris Packham)
The aurora cache on the Marvell Armada-XP SoC supports ECC protection for the L2 data arrays. Add a "marvell,ecc-enable" device tree property which can be used to enable this. [jlu@pengutronix.de: use aurora specific define AURORA_ACR_ECC_EN] Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-08-29  ARM: 8886/1: l2x0: support parity-enable/disable on aurora  (Chris Packham)
The aurora cache on the Marvell Armada-XP SoC supports the same tag parity features as the other l2x0 cache implementations. [jlu@pengutronix.de: use aurora specific define AURORA_ACR_PARITY_EN] Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-08-29  ARM: 8885/1: aurora-l2: add defines for parity and ECC registers  (Jan Luebbe)
These defines will be used by subsequent patches to add support for the parity check and error correction functionality in the Aurora L2 cache controller. Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-08-29  ARM: 8887/1: aurora-l2: add prefix to MAX_RANGE_SIZE  (Jan Luebbe)
The macro name is too generic, so add an AURORA_ prefix. Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-08-29  ARM: 8902/1: l2c: move cache-aurora-l2.h to asm/hardware  (Jan Luebbe)
This include file will be used by the AURORA EDAC code. Signed-off-by: Jan Luebbe <jlu@pengutronix.de> Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-08-29  ARM: 8900/1: UNWINDER_FRAME_POINTER implementation for Clang  (Nathan Huckleberry)
The stack frame setup produced by clang is different. Since the stack unwinder expects the gcc stack frame setup, it fails to print backtraces. This patch adds support for the clang stack frame setup. Link: https://github.com/ClangBuiltLinux/linux/issues/35 Cc: clang-built-linux@googlegroups.com Suggested-by: Tri Vo <trong@google.com> Signed-off-by: Nathan Huckleberry <nhuck@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-08-28  ARM: 8901/1: add a criteria for pfn_valid of arm  (zhaoyang)
pfn_valid can be wrong when parsing an invalid pfn whose physical address needs more than BITS_PER_LONG bits, as the MSBs are trimmed when shifted. The issue originally arises from the call stack below, which corresponds to an access of /proc/kpageflags from userspace with an invalid pfn parameter and leads to a kernel panic.
[46886.723249] c7 [<c031ff98>] (stable_page_flags) from [<c03203f8>]
[46886.723264] c7 [<c0320368>] (kpageflags_read) from [<c0312030>]
[46886.723280] c7 [<c0311fb0>] (proc_reg_read) from [<c02a6e6c>]
[46886.723290] c7 [<c02a6e24>] (__vfs_read) from [<c02a7018>]
[46886.723301] c7 [<c02a6f74>] (vfs_read) from [<c02a778c>]
[46886.723315] c7 [<c02a770c>] (SyS_pread64) from [<c0108620>] (ret_fast_syscall+0x0/0x28)
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
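As a standalone illustration of the truncation described above (PAGE_SHIFT and the pfn value are assumed for the example):

    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        unsigned long pfn  = 0x100005UL;           /* needs more than 32 bits once shifted */
        unsigned long phys = pfn << PAGE_SHIFT;    /* on a 32-bit build the MSBs are trimmed */
        printf("pfn 0x%lx -> phys 0x%lx\n", pfn, phys);
        /* On 32-bit this prints phys 0x5000, i.e. the bogus pfn aliases a low,
         * seemingly valid physical address, which is why pfn_valid() needs an
         * explicit upper-bound check. */
        return 0;
    }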
2019-08-28  RISC-V: Fix FIXMAP area corruption on RV32 systems  (Anup Patel)
Currently, the various virtual memory areas of Linux RISC-V are organized in increasing order of their virtual addresses as follows:
1. User space area (this is the lowest area and starts at 0x0)
2. FIXMAP area
3. VMALLOC area
4. Kernel area (this is the highest area and starts at PAGE_OFFSET)
The maximum size of the user space area is represented by TASK_SIZE. On RV32 systems, TASK_SIZE is defined as VMALLOC_START, which causes the user space area to overlap the FIXMAP area. This allows user space apps to potentially corrupt the FIXMAP area, and kernel OF APIs will crash whenever they access a corrupted FDT in the FIXMAP area. On RV64 systems, TASK_SIZE is set to a fixed 256GB and no other areas happen to overlap, so we don't see any FIXMAP area corruption. This patch fixes FIXMAP area corruption on RV32 systems by setting TASK_SIZE to FIXADDR_START. We also move the FIXADDR_TOP, FIXADDR_SIZE, and FIXADDR_START defines to asm/pgtable.h so that we can avoid cyclic header includes. Signed-off-by: Anup Patel <anup.patel@wdc.com> Tested-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
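The intended ordering can be sketched as a standalone check; all addresses and sizes below are made up for the example and are not the real RV32 values:

    #include <assert.h>
    #include <stdio.h>

    #define PAGE_OFFSET    0xC0000000UL                      /* assumed kernel base */
    #define VMALLOC_SIZE   0x08000000UL
    #define VMALLOC_START  (PAGE_OFFSET - VMALLOC_SIZE)
    #define FIXADDR_SIZE   0x00100000UL
    #define FIXADDR_START  (VMALLOC_START - FIXADDR_SIZE)
    #define TASK_SIZE      FIXADDR_START                     /* the fix: user space ends below FIXMAP */

    int main(void)
    {
        static_assert(TASK_SIZE <= FIXADDR_START, "user space overlaps FIXMAP");
        printf("user: 0..0x%lx  fixmap: 0x%lx..  vmalloc: 0x%lx..  kernel: 0x%lx..\n",
               TASK_SIZE, FIXADDR_START, VMALLOC_START, PAGE_OFFSET);
        return 0;
    }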
2019-08-28  x86/build: Add -Wno-address-of-packed-member to REALMODE_CFLAGS, to silence GCC9 build warning  (Linus Torvalds)
One of the very few warnings I have in the current build comes from arch/x86/boot/edd.c, where I get the following with a gcc9 build:
arch/x86/boot/edd.c: In function ‘query_edd’:
arch/x86/boot/edd.c:148:11: warning: taking address of packed member of ‘struct boot_params’ may result in an unaligned pointer value [-Waddress-of-packed-member]
  148 |  mbrptr = boot_params.edd_mbr_sig_buffer;
      |           ^~~~~~~~~~~
This warning triggers because we throw away all the CFLAGS and then make a new set for REALMODE_CFLAGS, so the -Wno-address-of-packed-member we added in the following commit is not present: 6f303d60534c ("gcc-9: silence 'address-of-packed-member' warning") The simplest solution for now is to adjust the warning for this version of CFLAGS as well, but it would definitely make sense to examine whether REALMODE_CFLAGS could be derived from CFLAGS, so that it picks up changes in the compiler flags environment automatically. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-08-28  ARM: dts: ux500: Update thermal zone  (Linus Walleij)
After moving the DB8500 thermal driver to use device tree we define the default thermal zone for the Ux500 in the device tree, replacing the old-style hardcoded trigger points. This default thermal zone utilizes the cpufreq driver (using the generic OF cpufreq back-end) as a passive cooling device, and defines a critical trip point when the temperature goes above 85 degrees celsius, which will (hopefully) make the system shut down if the temperature cannot be controlled. This default policy can later be augmented for specific subdevices if these have tighter temperature conditions. After this patch we get: /sys/class/thermal/thermal_zone0 (CPU thermal zone) This reports the rough temperature and trip points from the thermal zone in the device tree. By executing two yes > /dev/null & jobs fully utilizing the two CPU cores we can notice the temperature climbing in the thermal zone in response and falling when we kill the jobs. /sys/class/thermal/cooling_device0 (cpufreq cooling) This reports all 4 available cpufreq frequencies as states. Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-08-28  powerpc: use the generic dma coherent remap allocator  (Christoph Hellwig)
This switches to using common code for the DMA allocations, including potential use of the CMA allocator if configured. Switching to the generic code enables DMA allocations from atomic context, which is required by the DMA API documentation, and also adds various other minor features drivers start relying upon. It also makes sure we have one tested code base for all architectures that require uncached pte bits for coherent DMA allocations. Another advantage is that consistent memory allocations now share the general vmalloc pool instead of needing an explicit carve-out from it. Signed-off-by: Christoph Hellwig <hch@lst.de> Tested-by: Christophe Leroy <christophe.leroy@c-s.fr> # tested on 8xx Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190814132230.31874-2-hch@lst.de
2019-08-28  powerpc/64: remove support for kernel-mode syscalls  (Nicholas Piggin)
There is support for the kernel to execute the 'sc 0' instruction and make a system call to itself. This is a relic that is unused in the tree, therefore untested. It's also highly questionable for modules to be doing this. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190827033010.28090-3-npiggin@gmail.com
2019-08-28  powerpc: convert to copy_thread_tls  (Nicholas Piggin)
Commit 3033f14ab78c3 ("clone: support passing tls argument via C rather than pt_regs magic") introduced the HAVE_COPY_THREAD_TLS option. Use it to avoid a subtle assumption about the argument ordering of clone type syscalls. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190827033010.28090-2-npiggin@gmail.com
2019-08-28  powerpc/32: don't use CPU_FTR_COHERENT_ICACHE  (Christophe Leroy)
Only 601 and E200 have CPU_FTR_COHERENT_ICACHE. Just use #ifdefs instead of feature fixup. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5f3e92ccd64d06477b27626f6007a9da3b8da157.1566834712.git.christophe.leroy@c-s.fr
2019-08-28  powerpc/32: drop CPU_FTR_UNIFIED_ID_CACHE  (Christophe Leroy)
Only 601 and e200 have unified I/D cache. Drop the feature and use CONFIG_PPC_BOOK3S_601 and CONFIG_E200. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b5902144266d2f4eed1ffea53915bd0245841e02.1566834712.git.christophe.leroy@c-s.fr
2019-08-28  powerpc/32s: use CONFIG_PPC_BOOK3S_601 instead of reading PVR  (Christophe Leroy)
Use CONFIG_PPC_BOOK3S_601 instead of reading PVR to know if it is a 601 or not. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/909c26db9facd7fe454695b303f952e019dd9eda.1566834712.git.christophe.leroy@c-s.fr
2019-08-28  powerpc/32s: drop CPU_FTR_USE_RTC feature  (Christophe Leroy)
CPU_FTR_USE_RTC feature only applies to powerpc601. Drop this feature and replace it with tests on CONFIG_PPC_BOOK3S_601. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/170411e2360861f4a95c21faad43519a08bc4040.1566834712.git.christophe.leroy@c-s.fr
2019-08-28  powerpc/32s: get rid of CPU_FTR_601 feature  (Christophe Leroy)
Now that 601 is exclusive from other 6xx, CPU_FTR_601 and associated fixups are useless. Drop this feature and use #ifdefs instead. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ecdb7194a17dbfa01865df6a82979533adc2c70b.1566834712.git.christophe.leroy@c-s.fr
2019-08-28  powerpc/32s: add an option to exclusively select powerpc 601  (Christophe Leroy)
The PowerPC 601 is a rather old powerpc core which has some important limitations compared to other book3s/32 powerpcs:
- No Timebase.
- Common BATs for instruction and data.
- No execution protection in segment registers.
- No RI bit in MSR
- ...
It is starting to be difficult and cumbersome to maintain kernels that are compatible both with the 601 and other 6xx cores. Create a compile-time option to exclusively select either the powerpc 601 or other 6xx cores. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d644eaf7dff8cc149260066802af230bdf34fded.1566834712.git.christophe.leroy@c-s.fr
2019-08-28x86/apic/vector: Warn when vector space exhaustion breaks affinityNeil Horman
On x86, CPUs are limited in the number of interrupts they can have affined to them as they only support 256 interrupt vectors per CPU. 32 vectors are reserved for the CPU and the kernel reserves another 22 for internal purposes. That leaves 202 vectors for assignement to devices. When an interrupt is set up or the affinity is changed by the kernel or the administrator, the vector assignment code attempts to honor the requested affinity mask. If the vector space on the CPUs in that affinity mask is exhausted the code falls back to a wider set of CPUs and assigns a vector on a CPU outside of the requested affinity mask silently. While the effective affinity is reflected in the corresponding /proc/irq/$N/effective_affinity* files the silent breakage of the requested affinity can lead to unexpected behaviour for administrators. Add a pr_warn() when this happens so that adminstrators get at least informed about it in the syslog. [ tglx: Massaged changelog and made the pr_warn() more informative ] Reported-by: djuran@redhat.com Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: djuran@redhat.com Link: https://lkml.kernel.org/r/20190822143421.9535-1-nhorman@tuxdriver.com
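The vector budget mentioned above can be checked with a trivial standalone calculation (the three counts are taken from the text, not from kernel headers):

    #include <stdio.h>

    int main(void)
    {
        int total           = 256;   /* interrupt vectors per CPU */
        int cpu_reserved    = 32;    /* reserved for the CPU */
        int kernel_reserved = 22;    /* reserved by the kernel for internal purposes */
        printf("vectors available for devices: %d\n",
               total - cpu_reserved - kernel_reserved);      /* prints 202 */
        return 0;
    }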
2019-08-28  arm64: kpti: ensure patched kernel text is fetched from PoU  (Mark Rutland)
While the MMU is disabled, I-cache speculation can result in instructions being fetched from the PoC. During boot we may patch instructions (e.g. for alternatives and jump labels), and these may be dirty at the PoU (and stale at the PoC). Thus, while the MMU is disabled in the KPTI pagetable fixup code we may load stale instructions into the I-cache, potentially leading to subsequent crashes when executing regions of code which have been modified at runtime. Similarly to commit: 8ec41987436d566f ("arm64: mm: ensure patched kernel text is fetched from PoU") ... we can invalidate the I-cache after enabling the MMU to prevent such issues. The KPTI pagetable fixup code itself should be clean to the PoC per the boot protocol, so no maintenance is required for this code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2019-08-28  x86/vmware: Add a header file for hypercall definitions  (Thomas Hellstrom)
The new header is intended to be used by drivers using the backdoor. Follow the KVM example, using alternatives self-patching to choose between the vmcall, vmmcall and io instructions. Also define two new CPU feature flags to indicate hypervisor support for the vmcall and vmmcall instructions. The new X86_FEATURE_VMW_VMMCALL flag is needed because using X86_FEATURE_VMMCALL might break QEMU/KVM setups using the vmmouse driver. They rely on X86_FEATURE_VMMCALL on AMD to get the kvm_hypercall() right. But they do not yet implement vmmcall for the VMware hypercall used by the vmmouse driver. [ bp: reflow hypercall %edx usage explanation comment. ] Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Doug Covelli <dcovelli@vmware.com> Cc: Aaron Lewis <aaronlewis@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: linux-graphics-maintainer@vmware.com Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Cc: Nicolas Ferre <nicolas.ferre@microchip.com> Cc: Robert Hoo <robert.hu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Cc: <pv-drivers@vmware.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190828080353.12658-3-thomas_os@shipmail.org
2019-08-28  arm64: fix fixmap copy for 16K pages and 48-bit VA  (Mark Rutland)
With 16K pages and 48-bit VAs, the PGD level of table has two entries, and so the fixmap shares a PGD with the kernel image. Since commit: f9040773b7bbbd9e ("arm64: move kernel image to base of vmalloc area") ... we copy the existing fixmap to the new fine-grained page tables at the PUD level in this case. When walking to the new PUD, we forgot to offset the PGD entry and always used the PGD entry at index 0, but this worked as the kernel image and fixmap were in the low half of the TTBR1 address space. As of commit: 14c127c957c1c607 ("arm64: mm: Flip kernel VA space") ... the kernel image and fixmap are in the high half of the TTBR1 address space, and hence use the PGD at index 1, but we didn't update the fixmap copying code to account for this. Thus, we'll erroneously try to copy the fixmap slots into a PUD under the PGD entry at index 0. At the point we do so this PGD entry has not been initialised, and thus we'll try to write a value to a small offset from physical address 0, causing a number of potential problems. Fix this by correctly offsetting the PGD. This is split over a few steps for legibility. Fixes: 14c127c957c1c607 ("arm64: mm: Flip kernel VA space") Reported-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Tested-by: Marc Zyngier <maz@kernel.org> Acked-by: Steve Capper <Steve.Capper@arm.com> Tested-by: Steve Capper <Steve.Capper@arm.com> Tested-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
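The indexing mistake is easier to see in a minimal standalone sketch; the shift and table size below are illustrative stand-ins, not the real arm64 16K/48-bit constants:

    #include <stdint.h>
    #include <stdio.h>

    #define PGDIR_SHIFT   47    /* assumed: each PGD entry covers 2^47 bytes of VA */
    #define PTRS_PER_PGD  2     /* two top-level entries, as in the 16K/48-bit case */

    static unsigned long pgd_index(uint64_t vaddr)
    {
        return (vaddr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
    }

    int main(void)
    {
        /* A kernel-half address, truncated to the 48 translated bits. */
        uint64_t kernel_va = 0xffff800000000000ULL & ((1ULL << 48) - 1);
        /* Walking from pgd[0] instead of pgd[pgd_index(kernel_va)] picks the
         * wrong, uninitialised entry once the kernel lives in the high half. */
        printf("correct pgd slot = %lu\n", pgd_index(kernel_va));   /* prints 1 */
        return 0;
    }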
2019-08-28  x86/hyperv: Hide pv_ops access for CONFIG_PARAVIRT=n  (Tianyu Lan)
hv_setup_sched_clock() references pv_ops, which is only available when CONFIG_PARAVIRT=y. Wrap it in an #ifdef. Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190828080747.204419-1-Tianyu.Lan@microsoft.com
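The guard itself is plain conditional compilation; a minimal standalone sketch of the pattern (build with and without -DCONFIG_PARAVIRT; the function name is illustrative, not the kernel's):

    #include <stdio.h>

    #ifdef CONFIG_PARAVIRT
    static void setup_sched_clock(void) { puts("registering pv_ops-based sched clock"); }
    #else
    static void setup_sched_clock(void) { puts("pv_ops unavailable, skipping"); }
    #endif

    int main(void)
    {
        setup_sched_clock();
        return 0;
    }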
2019-08-28  perf/x86/intel: Support PEBS output to PT  (Alexander Shishkin)
If PEBS declares the ability to output its data to the Intel PT stream, use the aux_output attribute bit to enable PEBS data output to PT. This requires a PT event to be present and scheduled in the same context. Unlike the DS area, the kernel does not extract PEBS records from the PT stream to generate corresponding records in the perf stream, because that would require real-time in-kernel PT decoding, which is not feasible. The PMI, however, can still be used. The output setting is per-CPU, so all PEBS events must be either writing to PT or to the DS area; therefore, in case of conflict, the conflicting event will fail to schedule, allowing the rotation logic to alternate between the PEBS->PT and PEBS->DS events. Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: kan.liang@linux.intel.com Link: https://lkml.kernel.org/r/20190806084606.4021-3-alexander.shishkin@linux.intel.com
2019-08-28  x86/intel: Add common OPTDIFFs  (Peter Zijlstra)
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Cc: x86@kernel.org Cc: Dave Hansen <dave.hansen@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Link: https://lkml.kernel.org/r/20190827195122.731530141@infradead.org
2019-08-28  x86/intel: Aggregate microserver naming  (Peter Zijlstra)
Currently big microservers have _XEON_D while small microservers have _X. Make it uniformly: _D.
for i in `git grep -l "\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*_\(X\|XEON_D\)"`
do
    sed -i -e 's/\(\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*ATOM.*\)_X/\1_D/g' \
           -e 's/\(\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*\)_XEON_D/\1_D/g' ${i}
done
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Cc: x86@kernel.org Cc: Dave Hansen <dave.hansen@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Link: https://lkml.kernel.org/r/20190827195122.731530141@infradead.org
2019-08-28  x86/intel: Aggregate big core graphics naming  (Peter Zijlstra)
Currently big core clients with extra graphics have:
- _G
- _GT3E
Make it uniformly: _G.
for i in `git grep -l "\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*_GT3E"`
do
    sed -i -e 's/\(\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*\)_GT3E/\1_G/g' ${i}
done
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Cc: x86@kernel.org Cc: Dave Hansen <dave.hansen@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Link: https://lkml.kernel.org/r/20190827195122.622802314@infradead.org
2019-08-28  x86/intel: Aggregate big core mobile naming  (Peter Zijlstra)
Currently big core mobile chips have either:
- _L
- _ULT
- _MOBILE
Make it uniformly: _L.
for i in `git grep -l "\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*_\(MOBILE\|ULT\)"`
do
    sed -i -e 's/\(\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*\)_\(MOBILE\|ULT\)/\1_L/g' ${i}
done
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Cc: x86@kernel.org Cc: Dave Hansen <dave.hansen@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190827195122.568978530@infradead.org
2019-08-28  x86/intel: Aggregate big core client naming  (Peter Zijlstra)
Currently the big core client models either have:
- no OPTDIFF
- _CORE
- _DESKTOP
Make it uniformly: 'no OPTDIFF'.
for i in `git grep -l "\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*_\(CORE\|DESKTOP\)"`
do
    sed -i -e 's/\(\(INTEL_FAM6_\|VULNWL_INTEL\|INTEL_CPU_FAM6\).*\)_\(CORE\|DESKTOP\)/\1/g' ${i}
done
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Tony Luck <tony.luck@intel.com> Cc: x86@kernel.org Cc: Dave Hansen <dave.hansen@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190827195122.513945586@infradead.org
2019-08-28  x86/vmware: Update platform detection code for VMCALL/VMMCALL hypercalls  (Thomas Hellstrom)
VMware has historically used an INL instruction for this, but recent hardware versions support using VMCALL/VMMCALL instead, so use this method if supported at platform detection time. Explicitly code separate macro versions, since the alternatives self-patching has not been performed at platform detection time. Also put tighter constraints on the assembly input parameters. Co-developed-by: Doug Covelli <dcovelli@vmware.com> Signed-off-by: Doug Covelli <dcovelli@vmware.com> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Doug Covelli <dcovelli@vmware.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: linux-graphics-maintainer@vmware.com Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Cc: <pv-drivers@vmware.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190828080353.12658-2-thomas_os@shipmail.org
2019-08-28  x86/cpufeature: Explain the macro duplication  (Cao Jin)
Explain the intent behind the duplication of the BUILD_BUG_ON_ZERO(NCAPINTS != n) check in *_MASK_CHECK and its immediate use in the *MASK_BIT_SET macros too. [ bp: Massage. ] Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Cao Jin <caoj.fnst@cn.fujitsu.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nadav Amit <namit@vmware.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190828061100.27032-1-caoj.fnst@cn.fujitsu.com
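The pattern being documented can be reproduced outside the kernel; the following standalone sketch uses an assumed NCAPINTS value and a simplified bit-test macro, and is not the kernel's cpufeature code:

    #include <stdio.h>

    /* Evaluates to 0, but fails to compile if e is true (negative bitfield width). */
    #define BUILD_BUG_ON_ZERO(e)  ((int)(sizeof(struct { int : (-!!(e)); })))

    #define NCAPINTS  19          /* assumed number of capability words */

    /* The same compile-time check is embedded in each user macro "for free". */
    #define CAP_BIT_SET(word, bit, caps) \
        (BUILD_BUG_ON_ZERO(NCAPINTS != 19) | (((caps)[(word)] >> (bit)) & 1u))

    int main(void)
    {
        unsigned int caps[NCAPINTS] = { [0] = 0x4 };
        printf("bit 2 of word 0: %u\n", CAP_BIT_SET(0, 2, caps));   /* prints 1 */
        return 0;
    }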
2019-08-27  dt-bindings: net: ethernet: Update mt7622 docs and dts to reflect the new phylink API  (René van Dorst)
This patch removes the recently added mediatek,physpeed property. Use the fixed-link property speed = <2500> to put the PHY in 2.5Gbit mode. See mt7622-bananapi-bpi-r64.dts for a working example. Signed-off-by: René van Dorst <opensource@vdorst.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-08-28  powerpc/8xx: set STACK_END_MAGIC earlier on the init_stack  (Christophe Leroy)
Today, STACK_END_MAGIC is set on init_stack in start_kernel(). To avoid a false 'Thread overran stack, or stack corrupted' message on early Oopses, set up STACK_END_MAGIC as soon as possible. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/54f67bb7ac486c1350f2fa8905cd279f94b9dfb1.1566382841.git.christophe.leroy@c-s.fr
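A standalone sketch of the canary idea referred to above; the array stands in for a kernel stack, and the magic constant mirrors the kernel's STACK_END_MAGIC purely as an example:

    #include <stdio.h>

    #define STACK_END_MAGIC 0x57AC6E9DUL

    static unsigned long fake_stack[256];

    int main(void)
    {
        fake_stack[0] = STACK_END_MAGIC;     /* written as early as possible */
        /* ... much later, e.g. from an oops handler ... */
        if (fake_stack[0] != STACK_END_MAGIC)
            puts("Thread overran stack, or stack corrupted");
        else
            puts("stack end magic intact");
        return 0;
    }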
2019-08-28  powerpc/8xx: drop unused self-modifying code alternative to FixupDAR.  (Christophe Leroy)
The code which fixups the DAR on TLB errors for dbcX instructions has a self-modifying code alternative that has never been used. Drop it. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Joakim Tjernlund <joakim.tjernlund@infinera.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b095e12c82fcba1ac4c09fc3b85d969f36614746.1566417610.git.christophe.leroy@c-s.fr
2019-08-28  powerpc/prom: convert PROM_BUG() to standard trap  (Christophe Leroy)
Prior to commit 1bd98d7fbaf5 ("ppc64: Update BUG handling based on ppc32"), BUG() family was using BUG_ILLEGAL_INSTRUCTION which was an invalid instruction opcode to trap into program check exception. That commit converted them to using standard trap instructions, but prom/prom_init and their PROM_BUG() macro were left over. head_64.S and exception-64s.S were left aside as well. Convert them to using the standard BUG infrastructure. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/cdaf4bbbb64c288a077845846f04b12683f8875a.1566817807.git.christophe.leroy@c-s.fr
2019-08-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
Minor conflict in r8169, bug fix had two versions in net and net-next, take the net-next hunks. Signed-off-by: David S. Miller <davem@davemloft.net>
2019-08-27  KVM: x86: Don't update RIP or do single-step on faulting emulation  (Sean Christopherson)
Don't advance RIP or inject a single-step #DB if emulation signals a fault. This logic applies to all state updates that are conditional on clean retirement of the emulation instruction, e.g. updating RFLAGS was previously handled by commit 38827dbd3fb85 ("KVM: x86: Do not update EFLAGS on faulting emulation"). Not advancing RIP is likely a nop, i.e. ctxt->eip isn't updated with ctxt->_eip until emulation "retires" anyways. Skipping #DB injection fixes a bug reported by Andy Lutomirski where a #UD on SYSCALL due to invalid state with EFLAGS.TF=1 would loop indefinitely due to emulation overwriting the #UD with #DB and thus restarting the bad SYSCALL over and over. Cc: Nadav Amit <nadav.amit@gmail.com> Cc: stable@vger.kernel.org Reported-by: Andy Lutomirski <luto@kernel.org> Fixes: 663f4c61b803 ("KVM: x86: handle singlestep during emulation") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
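Not KVM's actual code, but a standalone sketch of the ordering the commit describes: retirement-conditional state (the RIP advance and the single-step #DB) is only committed when the emulator did not signal a fault:

    #include <stdbool.h>
    #include <stdio.h>

    struct emu_ctxt { unsigned long eip, _eip; bool fault, tf_set; };

    static void retire_instruction(struct emu_ctxt *c)
    {
        if (c->fault)
            return;                  /* no RIP update, no single-step #DB injection */
        c->eip = c->_eip;            /* advance RIP */
        if (c->tf_set)
            puts("inject single-step #DB");
    }

    int main(void)
    {
        struct emu_ctxt c = { .eip = 0x100, ._eip = 0x104, .fault = true, .tf_set = true };
        retire_instruction(&c);
        printf("rip = 0x%lx\n", c.eip);   /* stays 0x100 because emulation faulted */
        return 0;
    }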
2019-08-27  KVM: x86: hyper-v: don't crash on KVM_GET_SUPPORTED_HV_CPUID when kvm_intel.nested is disabled  (Vitaly Kuznetsov)
If kvm_intel is loaded with the nested=0 parameter, an attempt to perform KVM_GET_SUPPORTED_HV_CPUID results in an OOPS, as the nested_get_evmcs_version hook in kvm_x86_ops is NULL (we assign it in nested_vmx_hardware_setup() and this only happens when nested is enabled). Check that kvm_x86_ops->nested_get_evmcs_version is not NULL before calling it. With this, we can remove the stub from svm as it is no longer needed. Cc: <stable@vger.kernel.org> Fixes: e2e871ab2f02 ("x86/kvm/hyper-v: Introduce nested_get_evmcs_version() helper") Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
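Not the kernel's implementation: the NULL-check pattern the fix describes can be shown standalone; the struct and helper below are invented for the sketch:

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    struct x86_ops { uint16_t (*nested_get_evmcs_version)(void); };

    static uint16_t get_evmcs_version(const struct x86_ops *ops)
    {
        /* The hook is NULL when nesting is disabled; report "unsupported" (0)
         * instead of dereferencing a NULL function pointer. */
        return ops->nested_get_evmcs_version ? ops->nested_get_evmcs_version() : 0;
    }

    int main(void)
    {
        struct x86_ops nested_off = { .nested_get_evmcs_version = NULL };
        printf("evmcs version: %u\n", get_evmcs_version(&nested_off));   /* 0 */
        return 0;
    }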
2019-08-27  Merge tag 'arc-5.3-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc  (Linus Torvalds)
Pull ARC updates from Vineet Gupta:
- support for Edge Triggered IRQs in ARC IDU intc
- other fixes here and there
* tag 'arc-5.3-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  arc: prefer __section from compiler_attributes.h
  dt-bindings: IDU-intc: Add support for edge-triggered interrupts
  dt-bindings: IDU-intc: Clean up documentation
  ARCv2: IDU-intc: Add support for edge-triggered interrupts
  ARC: unwind: Mark expected switch fall-throughs
  ARC: [plat-hsdk]: allow to switch between AXI DMAC port configurations
  ARC: fix typo in setup_dma_ops log message
  ARCv2: entry: early return from exception need not clear U & DE bits
2019-08-27  arm64: KVM: Device mappings should be execute-never  (James Morse)
Since commit 2f6ea23f63cca ("arm64: KVM: Avoid marking pages as XN in Stage-2 if CTR_EL0.DIC is set"), KVM has stopped marking normal memory as execute-never at stage2 when the system supports D->I Coherency at the PoU. This avoids KVM taking a trap when the page is first executed, in order to clean it to PoU. The patch that added this change also wrapped PAGE_S2_DEVICE mappings up in this too. The upshot is, if your CPU caches support DIC ... you can execute devices. Revert the PAGE_S2_DEVICE change so PTE_S2_XN is always used directly. Fixes: 2f6ea23f63cca ("arm64: KVM: Avoid marking pages as XN in Stage-2 if CTR_EL0.DIC is set") Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>