path: root/arch
2017-06-23  arm64: defconfig: enable EFI_CAPSULE_LOADER  (Timur Tabi)
CONFIG_EFI_CAPSULE_LOADER allows the user to update the EFI firmware, which is useful on ARM64 server platforms.
Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2017-06-23  arm64: defconfig: enable BLK_DEV_NVME  (Timur Tabi)
NVMe is non-volatile storage media attached via PCIe. NVMe devices typically have much higher potential throughput than other block devices such as SATA, so NVMe support is a must-have requirement for ARM64-based servers.
Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2017-06-23  arm64: defconfig: enable ACPI_CPPC_CPUFREQ  (Timur Tabi)
The CPPC CPUFreq driver is used on many ACPI-based ARM64 server systems.
Signed-off-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
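
Taken together, the three defconfig enables above amount to a fragment along these lines (a sketch; whether each symbol ends up =y or =m is per the actual commits):

    CONFIG_EFI_CAPSULE_LOADER=y
    CONFIG_BLK_DEV_NVME=y
    CONFIG_ACPI_CPPC_CPUFREQ=y
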
2017-06-23  Merge tag 'samsung-defconfig-4.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux into next/defconfig  (Arnd Bergmann)
Pull "ARM defconfig cleanup" from Krzysztof Kozłowski:
1. Remove old Kconfig options from all ARM configs,
2. Update Samsung defconfigs to bring back options that got disabled over time (the configs were not updated along with the code),
3. Save defconfigs for Samsung.

* tag 'samsung-defconfig-4.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux:
  ARM: tct_hammer_defconfig: Save defconfig
  ARM: s5pv210_defconfig: Save defconfig
  ARM: s3c6400_defconfig: Save defconfig
  ARM: mini2440_defconfig: Save defconfig
  ARM: s3c2410_defconfig: Save defconfig
  ARM: exynos_defconfig: Save defconfig
  ARM: s5pv210_defconfig: Bring back lost (but wanted) options
  ARM: s3c6400_defconfig: Bring back lost (but wanted) options
  ARM: s3c2410_defconfig: Bring back lost (but wanted) options
  ARM: tct_hammer_defconfig: Bring back lost (but wanted) options
  ARM: mini2440_defconfig: Bring back lost (but wanted) options
  ARM: defconfig: samsung: Re-order entries to match savedefconfig
  ARM: defconfig: Cleanup from old Kconfig options
2017-06-23  powerpc: Only obtain cpu_hotplug_lock if called by rtasd  (Thiago Jung Bauermann)
Calling arch_update_cpu_topology from a CPU hotplug state machine callback hits a deadlock because the function tries to get a read lock on cpu_hotplug_lock while the state machine still holds a write lock on it. Since all callers of arch_update_cpu_topology except rtasd already hold cpu_hotplug_lock, this patch changes the function to use stop_machine_cpuslocked and creates a separate function for rtasd which still tries to obtain the lock.
Michael Bringmann investigated the bug and provided a detailed analysis of the deadlock in a previous RFC for an alternate solution: https://patchwork.ozlabs.org/patch/771293/
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: John Allen <jallen@linux.vnet.ibm.com>
Cc: Michael Bringmann <mwb@linux.vnet.ibm.com>
Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Link: http://lkml.kernel.org/r/1497996510-4032-1-git-send-email-bauerman@linux.vnet.ibm.com
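
A minimal sketch of the split described above, assuming illustrative names for the rtasd entry point and the stop_machine callback (only stop_machine_cpuslocked() and the cpus_read_lock()/cpus_read_unlock() helpers are taken from the commit's context):

    /* Callers other than rtasd already hold cpu_hotplug_lock, so use
     * the cpuslocked variant directly. */
    int arch_update_cpu_topology(void)
    {
            return stop_machine_cpuslocked(update_topology_fn, NULL,
                                           cpu_online_mask);
    }

    /* Hypothetical rtasd-only wrapper: it does not hold the lock yet. */
    int rtas_update_cpu_topology(void)
    {
            int rc;

            cpus_read_lock();
            rc = arch_update_cpu_topology();
            cpus_read_unlock();
            return rc;
    }
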
2017-06-22  Merge commit '8e8320c9315c' into for-4.13/block  (Jens Axboe)
Pull in the fix for shared tags, as it conflicts with the pending changes in for-4.13/block. We already pulled in v4.12-rc5 to solve other conflicts or get fixes that went into 4.12, so there are not a lot of changes in this merge.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-23  powerpc/64: Initialise thread_info for emergency stacks  (Nicholas Piggin)
Emergency stacks have their thread_info mostly uninitialised, which in particular means garbage preempt_count values. Emergency stack code runs with interrupts disabled entirely, and is used very rarely, so this has gone unnoticed so far. It was found by a proposed new powerpc watchdog that takes a soft-NMI directly from the masked_interrupt handler and uses the emergency stack. That crashed at BUG_ON(in_nmi()) in nmi_enter(), and the preempt_count()s were found to be garbage. To fix this, zero the entire THREAD_SIZE allocation and initialise the thread_info.
Cc: stable@vger.kernel.org
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Move it all into setup_64.c, use a function not a macro. Fix crashes on Cell by setting preempt_count to 0 not HARDIRQ_OFFSET]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
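
A hedged sketch of the fix: zero the whole THREAD_SIZE allocation so the embedded thread_info (including preempt_count) starts in a known state. The helper name and the exact allocator call are illustrative:

    static void *__init alloc_emergency_stack(unsigned long limit)
    {
            void *ptr = __va(memblock_alloc_base(THREAD_SIZE,
                                                 THREAD_SIZE, limit));

            memset(ptr, 0, THREAD_SIZE);    /* no garbage preempt_count */
            return ptr + THREAD_SIZE;       /* stack grows down */
    }
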
2017-06-22  gcc-plugins: Add the randstruct plugin  (Kees Cook)
This randstruct plugin is modified from Brad Spengler/PaX Team's code in the last public patch of grsecurity/PaX, based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code.
The randstruct GCC plugin randomizes the layout of selected structures at compile time, as a probabilistic defense against attacks that need to know the layout of structures within the kernel. This is most useful for "in-house" kernel builds where neither the randomization seed nor other build artifacts are made available to an attacker. While less useful for distribution kernels (where the randomization seed must be exposed for third-party kernel module builds), it still has some value there, since now all kernel builds would need to be tracked by an attacker.
In more performance-sensitive scenarios, GCC_PLUGIN_RANDSTRUCT_PERFORMANCE can be selected to make a best effort to restrict randomization to cacheline-sized groups of elements, and to not randomize bitfields. This comes at the cost of reduced randomization.
Two annotations are defined, __randomize_layout and __no_randomize_layout, which respectively tell the plugin either to randomize or not to randomize instances of the struct in question. Follow-on patches enable the auto-detection logic for selecting structures for randomization that contain only function pointers. It is disabled here to assist with bisection.
Since any randomized structs must be initialized using designated initializers, __randomize_layout includes the __designated_init annotation even when the plugin is disabled, so that all builds will require the needed initialization. (With the plugin enabled, annotations for automatically chosen structures are marked as well.)
The main differences between this implementation and grsecurity are:
- disable automatic struct selection (to be enabled in a follow-up patch)
- add the designated_init attribute at runtime and for manual marking
- clarify debugging output to differentiate bad cast warnings
- add whitelisting infrastructure
- support gcc 7's DECL_ALIGN and DECL_MODE changes (Laura Abbott)
- raise the minimum required GCC version to 4.7
Earlier versions of this patch series were ported by Michael Leibowitz.
Signed-off-by: Kees Cook <keescook@chromium.org>
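
Illustrative use of the two annotations named above (the struct and its members are hypothetical); note that a randomized struct must be initialized with designated initializers:

    struct file_hooks {
            int  (*open)(void);
            void (*close)(void);
    } __randomize_layout;                   /* layout shuffled at build time */

    static const struct file_hooks default_hooks = {
            .open  = default_open,          /* designated init is mandatory */
            .close = default_close,
    };

    struct wire_header {
            u32 magic;
            u32 version;
    } __no_randomize_layout;                /* on-wire layout must stay fixed */
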
2017-06-22  ARM64: dts: meson-gxl: Add Libre Technology CC support  (Jerome Brunet)
Add support for the CC board from Shenzhen Libre Technology. More information about the board is available here: https://libre.computer/blog/
Cc: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
2017-06-22  arm/arm64: KVM: add guest SEA support  (Tyler Baicar)
Currently external aborts are unsupported by the guest abort handling. Add handling for SEAs so that the host kernel reports SEAs which occur in the guest kernel. When an SEA occurs in the guest kernel, the guest exits and is routed to kvm_handle_guest_abort(). Prior to this patch, a message about an unsupported FSC would be printed and nothing else would happen. With this patch, the code is routed to the APEI handling of SEAs in the host kernel to report the SEA information.
Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-06-22  acpi: apei: handle SEA notification type for ARMv8  (Tyler Baicar)
The ARM APEI extension proposal added the SEA (Synchronous External Abort) notification type for ARMv8. Add a new GHES error source handling function for SEA. If an error source's notification type is SEA, then this function can be registered into the SEA exception handler. That way GHES will parse and report SEA exceptions when they occur.
An SEA can interrupt code that had interrupts masked and is treated as an NMI. To aid this, the page of address space for mapping APEI buffers while in_nmi() is always reserved, and ghes_ioremap_pfn_nmi() is changed to use the helper methods to find the prot_t to map with, in the same way as ghes_ioremap_pfn_irq().
Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org>
CC: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-06-22  arm64: exception: handle Synchronous External Abort  (Tyler Baicar)
SEA exceptions are often caused by an uncorrected hardware error, and are handled when the data abort and instruction abort exception classes have specific values for their Fault Status Code. When an SEA occurs, report the error in the kernel logs before killing the process. Update fault_info[] with the specific SEA faults so that the new SEA handler is used.
Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org>
CC: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
[will: use NULL instead of 0 when assigning si_addr]
Signed-off-by: Will Deacon <will.deacon@arm.com>
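
A sketch of the fault_info[] update described above; the FSC slots and handler wiring follow the commit text, but the exact entry shown is illustrative:

    static const struct fault_info fault_info[] = {
            /* ... */
            { do_sea, SIGBUS, 0, "synchronous external abort" },
            /* ... */
    };
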
2017-06-22  arm64: Remove a redundancy in sysreg.h  (Stefan Traby)
This is really trivial; there is a duplicated (1 << 16) in the code.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stefan Traby <stefan@hello-penguin.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2017-06-22  x86/apic: Mark single target interrupts  (Thomas Gleixner)
If the interrupt destination mode of the APIC is physical then the effective affinity is restricted to a single CPU. Mark the interrupt accordingly in the domain allocation code, so the core code can avoid pointless affinity setting attempts.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235447.508846202@linutronix.de
2017-06-22  x86/apic: Implement effective irq mask update  (Thomas Gleixner)
Add the effective irq mask update to the apic implementations and enable effective irq masks for x86.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235446.878370703@linutronix.de
2017-06-22  x86/apic: Add irq_data argument to apic->cpu_mask_to_apicid()  (Thomas Gleixner)
The decision as to which CPUs an interrupt is effectively routed happens in the various apic->cpu_mask_to_apicid() implementations. To support effective affinity masks, this information needs to be updated in irq_data. Add a pointer to irq_data to the callbacks and feed it through the call chain.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235446.720739075@linutronix.de
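
The reworked callback shape this implies, per the description (the parameter names are assumptions):

    int (*cpu_mask_to_apicid)(const struct cpumask *cpumask,
                              struct irq_data *irqdata,
                              unsigned int *apicid);
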
2017-06-22  x86/apic: Move cpumask and to core code  (Thomas Gleixner)
All implementations of apic->cpu_mask_to_apicid_and() AND the two incoming cpumasks to search for the target. Move that operation to the call site and rename the callback to cpu_mask_to_apicid().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235446.641575516@linutronix.de
2017-06-22  x86/apic: Move online masking to core code  (Thomas Gleixner)
All implementations of apic->cpu_mask_to_apicid_and() mask out the offline cpus. The callsite already has a mask available, which has the offline CPUs removed. Use that and remove the extra bits.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235446.560868224@linutronix.de
2017-06-22  x86/uv: Use default_cpu_mask_to_apicid_and()  (Thomas Gleixner)
Same functionality, except for the extra bits ORed onto the apicid.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235446.482841015@linutronix.de
2017-06-22  x86/apic: Move flat_cpu_mask_to_apicid_and() into C source  (Thomas Gleixner)
No point in having inlines assigned to function pointers at multiple places. Just bloats the text.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235446.405975721@linutronix.de
2017-06-22  x86/irq: Use irq_migrate_all_off_this_cpu()  (Thomas Gleixner)
The generic migration code supports all the required features already. Remove the x86 specific implementation and use the generic one.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235445.851311033@linutronix.de
2017-06-22  x86/irq: Restructure fixup_irqs()  (Thomas Gleixner)
Reorder fixup_irqs() so it matches the flow in the generic migration code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235445.774272454@linutronix.de
2017-06-22  genirq/cpuhotplug: Add support for cleaning up move in progress  (Thomas Gleixner)
In order to move x86 to the generic hotplug migration code, add support for cleaning up move-in-progress bits. On architectures which do not have this x86-specific (mis)feature enabled, this is optimized out by the compiler.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235445.525817311@linutronix.de
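
A sketch of the compile-out pattern the text alludes to, using the helper name from this series (treat the exact signature as an assumption):

    #ifdef CONFIG_GENERIC_PENDING_IRQ
    bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
    #else
    static inline bool irq_fixup_move_pending(struct irq_desc *desc,
                                              bool force_clear)
    {
            return false;   /* no move-in-progress handling needed */
    }
    #endif
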
2017-06-22  x86/irq: Cleanup pending irq move in fixup_irqs()  (Thomas Gleixner)
If a CPU goes offline, the interrupts are migrated away, but a pending interrupt move which has not yet been made effective is kept pending, even if the outgoing CPU is the sole target of the pending affinity mask. Worse, the pending affinity mask is discarded even if it would contain a valid subset of the online CPUs.
Use the newly introduced helper to:
- Discard a pending move when the outgoing CPU is the only target in the pending mask.
- Use the pending mask instead of the affinity mask to find a valid target for the CPU if the pending mask intersects with the online CPUs.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235444.774068557@linutronix.de
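
A hedged sketch of the resulting fixup flow (helper names per the series, details illustrative):

    const struct cpumask *affinity;

    if (irq_fixup_move_pending(desc, false))
            affinity = irq_desc_get_pending_mask(desc); /* prefer pending */
    else
            affinity = irq_data_get_affinity_mask(d);

    /* ... then pick a target from affinity & cpu_online_mask ... */
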
2017-06-22  x86/msi: Create named irq domains  (Thomas Gleixner)
Use the fwnode to create named irq domains so diagnosis works.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235444.299024560@linutronix.de
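
A minimal sketch of named-domain creation via a fwnode, the pattern this and the following entries apply (irq_domain_alloc_named_fwnode() is the helper the series relies on; the MSI specifics and the parent variable are illustrative):

    struct fwnode_handle *fn;

    fn = irq_domain_alloc_named_fwnode("PCI-MSI");
    if (fn)
            msi_default_domain =
                    pci_msi_create_irq_domain(fn, &pci_msi_domain_info,
                                              parent);
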
2017-06-22  x86/msi: Remove unused remap irq domain interface  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235444.221049665@linutronix.de
2017-06-22  x86/msi: Provide new iommu irqdomain interface  (Thomas Gleixner)
Provide a new interface for creating the iommu remapping domains, so that the caller can supply a name and an id in order to create named irqdomains.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: iommu@lists.linux-foundation.org
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235443.986661206@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-22  x86/uv: Create named irq domain  (Thomas Gleixner)
Use the fwnode to create a named domain so diagnosis works.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235443.907511074@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-22  x86/htirq: Create named domain  (Thomas Gleixner)
Use the fwnode to create a named domain so diagnosis works. Mark the init function __init while at it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235443.829047007@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-22  x86/ioapic: Create named irq domain  (Thomas Gleixner)
Use the fwnode to create a named domain so diagnosis works, but only when the ioapic is not device-tree based.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235443.752782603@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-22  x86/vector: Create named irq domain  (Thomas Gleixner)
Use the fwnode to create a named domain so diagnosis works.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235443.673635238@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-22  x86/apic: Add name to irq chip  (Thomas Gleixner)
Add the missing name, so debugging will work properly.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/20170619235443.266561988@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-22  arm64: dump cpu_hwcaps at panic time  (Mark Rutland)
When debugging a kernel panic(), it can be useful to know which CPU features have been detected by the kernel, as some code paths can depend on these (and may have been patched at runtime). This patch adds a notifier to dump the detected CPU caps (as a hex string) at panic(), when we log other information useful for debugging. On a Juno R1 system running v4.12-rc5, this looks like:
[  615.431249] Kernel panic - not syncing: Fatal exception in interrupt
[  615.437609] SMP: stopping secondary CPUs
[  615.441872] Kernel Offset: disabled
[  615.445372] CPU features: 0x02086
[  615.448522] Memory Limit: none
A developer can decode this by looking at the corresponding <asm/cpucaps.h> bits. For example, the above decodes as:
* bit 1: ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
* bit 2: ARM64_WORKAROUND_845719
* bit 7: ARM64_WORKAROUND_834220
* bit 13: ARM64_HAS_32BIT_EL0
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
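
A sketch of the panic notifier this adds; the printk format follows the sample log above, the other names are illustrative:

    static int dump_cpu_hwcaps(struct notifier_block *self,
                               unsigned long v, void *p)
    {
            pr_emerg("CPU features: 0x%05lx\n", cpu_hwcaps[0]);
            return 0;
    }

    static struct notifier_block cpu_hwcaps_notifier = {
            .notifier_call = dump_cpu_hwcaps,
    };

    static int __init register_cpu_hwcaps_dumper(void)
    {
            atomic_notifier_chain_register(&panic_notifier_list,
                                           &cpu_hwcaps_notifier);
            return 0;
    }
    core_initcall(register_cpu_hwcaps_dumper);
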
2017-06-22  arm64: ptrace: Flush user-RW TLS reg to thread_struct before reading  (Dave Martin)
When reading current's user-writable TLS register (which occurs when dumping core for native tasks), it is possible that userspace has modified it since the time the task was last scheduled out. The new TLS register value is not guaranteed to have been written immediately back to thread_struct in this case. As a result, a coredump can capture stale data for this register. Reading the register for a stopped task via ptrace is unaffected.
For native tasks, this patch explicitly flushes the TPIDR_EL0 register back to thread_struct before dumping when operating on current, thus ensuring that coredump contents are up to date. For compat tasks, the TLS register is not user-writable and so cannot be out of sync, so no flush is required in compat_tls_get().
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-06-22  arm64: ptrace: Flush FPSIMD regs back to thread_struct before reading  (Dave Martin)
When reading the FPSIMD state of current (which occurs when dumping core), it is possible that userspace has modified the FPSIMD registers since the time the task was last scheduled out. Such changes are not guaranteed to be reflected immediately in thread_struct. As a result, a coredump can contain stale values for these registers. Reading the registers of a stopped task via ptrace is unaffected.
This patch explicitly flushes the CPU state back to thread_struct before dumping when operating on current, thus ensuring that coredump contents are up to date.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
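
The flush-before-read pattern described in the last two entries, sketched for the FPSIMD regset (fpsimd_preserve_current_state() is the existing arm64 helper that writes the live register state back to thread_struct; the rest is illustrative):

    static int fpr_get(struct task_struct *target,
                       const struct user_regset *regset,
                       unsigned int pos, unsigned int count,
                       void *kbuf, void __user *ubuf)
    {
            if (target == current)
                    fpsimd_preserve_current_state();  /* flush live regs */

            return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
                                       &target->thread.fpsimd_state.user_fpsimd,
                                       0, -1);
    }
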
2017-06-22  arm64: ptrace: Fix VFP register dumping in compat coredumps  (Dave Martin)
Currently, VFP registers are omitted from coredumps for compat processes, due to a bug in the REGSET_COMPAT_VFP regset implementation.
compat_vfp_get() needs to transfer non-contiguous data from thread_struct.fpsimd_state, and uses put_user() to handle the offending trailing word (FPSCR). This fails when copying to a kernel address (i.e., kbuf && !ubuf), which is what happens when dumping core. As a result, the ELF coredump core code silently omits the NT_ARM_VFP note from the dump.
It would be possible to work around this with additional special case code for the put_user(), but since user_regset_copyout() is explicitly designed to handle this scenario it is cleaner to port the put_user() to a user_regset_copyout() call, which this patch does.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
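
A sketch of the port described above: put_user() only works for user destinations, while user_regset_copyout() handles both the ubuf (ptrace) and kbuf (coredump) cases. The "before" line is a reconstruction and the offsets are illustrative:

    /* before (broken for coredumps, where kbuf && !ubuf): */
    if (count && !ret)
            ret = put_user(fpscr, (compat_ulong_t __user *)ubuf);

    /* after: */
    ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &fpscr,
                              VFP_STATE_SIZE - sizeof(compat_ulong_t),
                              VFP_STATE_SIZE);
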
2017-06-22  KVM: x86: fix singlestepping over syscall  (Paolo Bonzini)
TF is handled a bit differently for syscall and sysret, compared to the other instructions: TF is checked after the instruction completes, so that the OS can disable #DB at a syscall by adding TF to FMASK. When the sysret is executed the #DB is taken "as if" the syscall insn just completed.
KVM emulates syscall so that it can trap 32-bit syscall on Intel processors. Fix the behavior, otherwise you could get #DB on a user stack which is not nice. This does not affect Linux guests, as they use an IST or task gate for #DB.
This fixes CVE-2017-7518.
Cc: stable@vger.kernel.org
Reported-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-06-22  Merge tag 'kvm-s390-master-4.12-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux  (Radim Krčmář)
KVM: s390: fix shadow table handling for nested guests
Some odd-ball cases (real-space designation ASCEs) are handled wrong for the shadow page tables. Fix it.
2017-06-22  x86/tsc: Call check_system_tsc_reliable() before unsynchronized_tsc()  (Zhenzhong Duan)
tsc_clocksource_reliable is initialized in check_system_tsc_reliable(), but it is checked in unsynchronized_tsc() which is called before the initialization. In practice that's not an issue because systems which mark the TSC reliable have X86_FEATURE_CONSTANT_TSC set as well, which is evaluated in unsynchronized_tsc() before tsc_clocksource_reliable. Reorder the calls so initialization happens before usage.
[ tglx: Massaged changelog ]
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/b1532ef7-cd9f-45f7-9f49-48dd2a5c2495@default
2017-06-22  microblaze: Fix MSR flags when returning from exception  (Michal Simek)
The issue was that the service routine was sometimes returning with the wrong flags set in the MSR. In this case, the EIP bit was set while returning to user mode, which is an illegal combination since exceptions are always handled in privileged mode. In order for MicroBlaze to take an interrupt, the MSR must have IE=1, BIP=0 and EIP=0.
Signed-off-by: Stefan Asserhall <stefana@xilinx.com>
Signed-off-by: Goran Bilski <goran@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2017-06-22  microblaze: Separate GP registers from MSR handling  (Michal Simek)
Separate general purpose register restoring from MSR handling.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2017-06-22  microblaze: Enabling CONFIG_BRIDGE in mmu_defconfig  (Vineeth Chowdary Karumanchi)
This patch enables CONFIG_BRIDGE=m by default to be aligned with Xilinx defaults.
Signed-off-by: Vineeth Chowdary Karumanchi <vineethchowz.chowdary@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2017-06-22  microblaze: Enabling CONFIGS related to MTD  (Vineeth Chowdary Karumanchi)
Add support for Intel and AMD flash devices by default for the mmu configuration.
Signed-off-by: Vineeth Chowdary Karumanchi <vineethchowz.chowdary@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2017-06-22  microblaze: Update defconfigs  (Michal Simek)
Run "make savedefconfig" to bring the defconfigs up to date.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2017-06-22  microblaze: mm: Flush TLB to ensure correct mapping when highmem ON  (Michal Simek)
The MMU contains an invalid mapping which wasn't flushed, and a new mapping uses the same addresses as the previous one. Because of this, no TLB miss occurs that would load a new, correct TLB entry, and the MMU points to the incorrect area. This is reproducible when large files (256MB and more) are copied and checked.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2017-06-22  x86/hyperv: Read TSC frequency from a synthetic MSR  (Vitaly Kuznetsov)
It was found that an SMI_TRESHOLD of 50000 is not enough for Hyper-V guests in a nested environment, and falling back to counting jiffies is not an option for Gen2 guests as they don't have a PIT. As Hyper-V provides the TSC frequency in a synthetic MSR, we can just use this information instead of doing an error-prone calibration.
Reported-and-tested-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Jork Loeser <jloeser@microsoft.com>
Cc: devel@linuxdriverproject.org
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Link: http://lkml.kernel.org/r/20170622100730.18112-3-vkuznets@redhat.com
2017-06-22  x86/hyperv: Check frequency MSRs presence according to the specification  (Vitaly Kuznetsov)
The Hyper-V TLFS specifies two bits which should be checked before accessing frequency MSRs:
- AccessFrequencyMsrs (BIT(11) in EAX), which indicates if we have access to frequency MSRs.
- FrequencyMsrsAvailable (BIT(8) in EDX), which indicates if these MSRs are present.
Rename and specify these bits accordingly.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Ladi Prosek <lprosek@redhat.com>
Cc: Jork Loeser <jloeser@microsoft.com>
Cc: devel@linuxdriverproject.org
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Link: http://lkml.kernel.org/r/20170622100730.18112-2-vkuznets@redhat.com
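
A combined sketch of the two Hyper-V entries above: gate on the two feature bits, then read the TSC frequency from the synthetic MSR instead of calibrating. The constant names follow the commit text and TLFS wording but should be treated as assumptions:

    static unsigned long hv_get_tsc_khz(void)
    {
            unsigned long long freq;

            rdmsrl(HV_X64_MSR_TSC_FREQUENCY, freq);
            return freq / 1000;
    }

    /* during platform setup: */
    if ((ms_hyperv.features & HV_X64_ACCESS_FREQUENCY_MSRS) &&
        (ms_hyperv.misc_features & HV_FEATURE_FREQUENCY_MSRS_AVAILABLE))
            x86_platform.calibrate_tsc = hv_get_tsc_khz;
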
2017-06-22  powerpc/powernv/npu-dma: Add explicit flush when sending an ATSD  (Alistair Popple)
NPU2 requires an extra explicit flush to an active GPU PID when sending address translation shoot downs (ATSDs) to reliably flush the GPU TLB. This patch adds just such a flush at the end of each sequence of ATSDs.
We can safely use PID 0 which is always reserved and active on the GPU. PID 0 is only used for init_mm which will never be a user mm on the GPU. To enforce this we add a check in pnv_npu2_init_context() just in case someone tries to use PID 0 on the GPU.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
[mpe: Use true/false for bool literals]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-06-22  KVM: s390: gaccess: fix real-space designation asce handling for gmap shadows  (Heiko Carstens)
For real-space designation asces the asce origin part is only a token. The asce token origin must not be used to generate an effective address for storage references. This however is erroneously done within kvm_s390_shadow_tables(). Furthermore within the same function the wrong parts of virtual addresses are used to generate a corresponding real address (e.g. the region second index is used as region first index).
Both of the above can result in incorrect address translations. Only for real space designations with a token origin of zero and addresses below one megabyte the translation was correct.
Furthermore replace a "!asce.r" statement with a "!*fake" statement to make it more obvious that a specific condition has nothing to do with the architecture, but with the fake handling of real space designations.
Fixes: 3218f7094b6b ("s390/mm: support real-space for gmap shadows")
Cc: David Hildenbrand <david@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2017-06-22  KVM: s390: avoid packed attribute  (Martin Schwidefsky)
For naturally aligned and sized data structures avoid superfluous packed and aligned attributes.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>