path: root/arch
2019-08-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)

Pull networking fixes from David Miller:

 1) Use 32-bit index for tail calls in s390 bpf JIT, from Ilya Leoshkevich.

 2) Fix missed EPOLLOUT events in TCP, from Eric Dumazet. Same fix for SMC from Jason Baron.

 3) ipv6_mc_may_pull() should return 0 for malformed packets, not -EINVAL. From Stefano Brivio.

 4) Don't forget to unpin umem xdp pages in error path of xdp_umem_reg(). From Ivan Khoronzhuk.

 5) Fix sta object leak in mac80211, from Johannes Berg.

 6) Fix regression by not configuring PHYLINK on CPU port of bcm_sf2 switches. From Florian Fainelli.

 7) Revert DMA sync removal from r8169 which was causing regressions on some MIPS Loongson platforms. From Heiner Kallweit.

 8) Use after free in flow dissector, from Jakub Sitnicki.

 9) Fix NULL derefs of net devices during ICMP processing across collect_md tunnels, from Hangbin Liu.

10) proto_register() memory leaks, from Zhang Lin.

11) Set NLM_F_MULTI flag in multipart netlink messages consistently, from John Fastabend.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (66 commits)
  r8152: Set memory to all 0xFFs on failed reg reads
  openvswitch: Fix conntrack cache with timeout
  ipv4: mpls: fix mpls_xmit for iptunnel
  nexthop: Fix nexthop_num_path for blackhole nexthops
  net: rds: add service level support in rds-info
  net: route dump netlink NLM_F_MULTI flag missing
  s390/qeth: reject oversized SNMP requests
  sock: fix potential memory leak in proto_register()
  MAINTAINERS: Add phylink keyword to SFF/SFP/SFP+ MODULE SUPPORT
  xfrm/xfrm_policy: fix dst dev null pointer dereference in collect_md mode
  ipv4/icmp: fix rt dst dev null pointer dereference
  openvswitch: Fix log message in ovs conntrack
  bpf: allow narrow loads of some sk_reuseport_md fields with offset > 0
  bpf: fix use after free in prog symbol exposure
  bpf: fix precision tracking in presence of bpf2bpf calls
  flow_dissector: Fix potential use-after-free on BPF_PROG_DETACH
  Revert "r8169: remove not needed call to dma_sync_single_for_device"
  ipv6: propagate ipv6_add_dev's error returns out of ipv6_find_idev
  net/ncsi: Fix the payload copying for the request coming from Netlink
  qed: Add cleanup in qed_slowpath_start()
  ...
2019-08-27  arm64: kvm: Replace hardcoded '1' with SYS_PAR_EL1_F  (Will Deacon)

Now that we have a definition for the 'F' field of PAR_EL1, use that instead of coding the immediate directly.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-27  arm64: mm: Ignore spurious translation faults taken from the kernel  (Will Deacon)

Thanks to address translation being performed out of order with respect to loads and stores, it is possible for a CPU to take a translation fault when accessing a page that was mapped by a different CPU.

For example, in the case that one CPU maps a page and then sets a flag to tell another CPU:

	CPU 0
	-----
	MOV	X0, <valid pte>
	STR	X0, [Xptep]	// Store new PTE to page table
	DSB	ISHST
	ISB
	MOV	X1, #1
	STR	X1, [Xflag]	// Set the flag

	CPU 1
	-----
loop:	LDAR	X0, [Xflag]	// Poll flag with Acquire semantics
	CBZ	X0, loop
	LDR	X1, [X2]	// Translates using the new PTE

then the final load on CPU 1 can raise a translation fault because the translation can be performed speculatively before the read of the flag and marked as "faulting" by the CPU. This isn't quite as bad as it sounds since, in reality, code such as:

	CPU 0				CPU 1
	-----				-----
	spin_lock(&lock);		spin_lock(&lock);
	*ptr = vmalloc(size);		if (*ptr)
	spin_unlock(&lock);			foo = **ptr;
					spin_unlock(&lock);

will not trigger the fault because there is an address dependency on CPU 1 which prevents the speculative translation. However, more exotic code where the virtual address is known ahead of time, such as:

	CPU 0				CPU 1
	-----				-----
	spin_lock(&lock);		spin_lock(&lock);
	set_fixmap(0, paddr, prot);	if (mapped)
	mapped = true;				foo = *fix_to_virt(0);
	spin_unlock(&lock);		spin_unlock(&lock);

could fault. This can be avoided by any of:

  * Introducing broadcast TLB maintenance on the map path

  * Adding a DSB;ISB sequence after checking a flag which indicates that a virtual address is now mapped

  * Handling the spurious fault

Given that we have never observed a problem due to this under Linux and future revisions of the architecture are being tightened so that translation table walks are effectively ordered in the same way as explicit memory accesses, we no longer treat spurious kernel faults as fatal if an AT instruction indicates that the access does not trigger a translation fault.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
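A minimal sketch of the fault-handler check this describes, assuming the SYS_PAR_EL1_F definition added by the sysreg patch below; the real handler wiring and the extra ESR filtering are omitted:

	/* Re-translate the faulting address with AT S1E1R and inspect
	 * PAR_EL1: if the translation now succeeds, the original fault
	 * was spurious and the access can simply be retried. */
	static bool is_spurious_el1_translation_fault(unsigned long addr)
	{
		unsigned long flags;
		u64 par;

		local_irq_save(flags);
		asm volatile("at s1e1r, %0" :: "r" (addr));
		isb();				/* PAR_EL1 is valid after an ISB */
		par = read_sysreg(par_el1);
		local_irq_restore(flags);

		return !(par & SYS_PAR_EL1_F);	/* F clear => spurious */
	}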
2019-08-27  arm64: sysreg: Add some field definitions for PAR_EL1  (Will Deacon)

PAR_EL1 is a mysterious creature, but sometimes it's necessary to read it when translating addresses in situations where we cannot walk the page table directly.

Add a couple of system register definitions for the fault indication field ('F') and the fault status code ('FST').

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-27  arm64: mm: Add ISB instruction to set_pgd()  (Will Deacon)

Commit 6a4cbd63c25a ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"") reintroduced ISB instructions to some of our page table setter functions in light of a recent clarification to the Armv8 architecture. Although 'set_pgd()' isn't currently used to update a live page table, add the ISB instruction there too for consistency with the other macros and to provide some future-proofing if we use it on live tables in the future.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
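A sketch of the resulting helper, following the same DSB/ISB pattern as set_pte() and friends (the swapper-pgdir special case is omitted):

	static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
	{
		WRITE_ONCE(*pgdp, pgd);
		dsb(ishst);	/* make the new entry visible to the table walker */
		isb();		/* synchronize the local context with the update */
	}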
2019-08-27  arm64: tlb: Ensure we execute an ISB following walk cache invalidation  (Will Deacon)

Commit 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable") added a new TLB invalidation helper, which is used when freeing intermediate levels of page table used for kernel mappings, but is missing the required ISB instruction after completion of the TLBI instruction.

Add the missing barrier.

Cc: <stable@vger.kernel.org>
Fixes: 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
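A sketch of the helper with the barrier restored, in the style of the arm64 tlbflush.h helpers (exact encoding macros assumed):

	static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
	{
		unsigned long addr = __TLBI_VADDR(kaddr, 0);

		dsb(ishst);		/* complete prior page-table updates */
		__tlbi(vaae1is, addr);	/* invalidate walk cache entries */
		dsb(ish);		/* wait for the invalidation to finish */
		isb();			/* the barrier this commit adds */
	}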
2019-08-27  Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"  (Will Deacon)

This reverts commit 24fe1b0efad4fcdd32ce46cffeab297f22581707.

Commit 24fe1b0efad4fcdd ("arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}") removed ISB instructions immediately following updates to the page table, on the grounds that they are not required by the architecture and a DSB alone is sufficient to ensure that subsequent data accesses use the new translation:

  DDI0487E_a, B2-128:

  | ... no instruction that appears in program order after the DSB
  | instruction can alter any state of the system or perform any part of
  | its functionality until the DSB completes other than:
  |
  | * Being fetched from memory and decoded
  | * Reading the general-purpose, SIMD and floating-point,
  |   Special-purpose, or System registers that are directly or indirectly
  |   read without causing side-effects.

However, the same document also states the following:

  DDI0487E_a, B2-125:

  | DMB and DSB instructions affect reads and writes to the memory system
  | generated by Load/Store instructions and data or unified cache
  | maintenance instructions being executed by the PE. Instruction fetches
  | or accesses caused by a hardware translation table access are not
  | explicit accesses.

which appears to claim that the DSB alone is insufficient. Unfortunately, some CPU designers have followed the second clause above, whereas in Linux we've been relying on the first. This means that our mapping sequence:

	MOV	X0, <valid pte>
	STR	X0, [Xptep]	// Store new PTE to page table
	DSB	ISHST
	LDR	X1, [X2]	// Translates using the new PTE

can actually raise a translation fault on the load instruction because the translation can be performed speculatively before the page table update and then marked as "faulting" by the CPU. For user PTEs, this is ok because we can handle the spurious fault, but for kernel PTEs and intermediate table entries this results in a panic().

Revert the offending commit to reintroduce the missing barriers.

Cc: <stable@vger.kernel.org>
Fixes: 24fe1b0efad4fcdd ("arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-27  arm64: smp: Treat unknown boot failures as being 'stuck in kernel'  (Will Deacon)

When we fail to bring a secondary CPU online and it fails in an unknown state, we should assume the worst and increment 'cpus_stuck_in_kernel' so that things like kexec() are disabled.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-27  arm64: smp: Don't enter kernel with NULL stack pointer or task struct  (Will Deacon)

Although SMP bringup is inherently racy, we can significantly reduce the window during which secondary CPUs can unexpectedly enter the kernel by sanity checking the 'stack' and 'task' fields of the 'secondary_data' structure. If the booting CPU gave up waiting for us, then they will have been cleared to NULL and we should spin in a WFE; WFI loop instead.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-27  arm64: smp: Increase secondary CPU boot timeout value  (Will Deacon)

When many debug options are enabled simultaneously (e.g. PROVE_LOCKING, KMEMLEAK, DEBUG_PAGE_ALLOC, KASAN etc), it is possible for us to timeout when attempting to boot a secondary CPU and give up. Unfortunately, the CPU will /eventually/ appear, and sit in the background happily stuck in a recursive exception due to a NULL stack pointer.

Increase the timeout to 5s, which will of course be enough for anybody.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2019-08-27  ARM: dts: kirkwood: ts219: disable the SoC's RTC  (Uwe Kleine-König)

The internal RTC doesn't work; loading the driver only yields:

	rtc-mv f1010300.rtc: internal RTC not ticking

So disable it.

Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Uwe Kleine-König <uwe@kleine-koenig.org>
Acked-by: Martin Michlmayr <tbm@cyrius.com>
Acked-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  arm64: dts: marvell: Add cpu clock node on Armada 7K/8K  (Gregory CLEMENT)

Add the cpu clock node on the AP.

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  arm64: dts: marvell: Convert 7k/8k usb-phy properties to phy-supply  (Miquel Raynal)

Update Armada 7k/8k DTs to use the phy-supply property of the (recent) generic PHY framework instead of the (legacy) usb-phy property. Both enable the supply when the PHY is enabled.

The COMPHY nodes only provide SERDES lane configuration. The power supply that is represented by the phy-supply property is just a regulator wired to the USB connector, hence the creation of connector nodes as children of the COMPHY nodes with the supply attached to them.

Cc: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  arm64: dts: marvell: Add 7k/8k PHYs in PCIe nodes  (Miquel Raynal)

Fill in the missing PCIe phys/phy-names DT properties of Armada 7k/8k based boards. The MacchiatoBin is a bit particular as the Armada8k-PCI IP supports x4 link widths, and in this case the PHY for each lane must be referenced.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  arm64: dts: marvell: Add 7k/8k PHYs in USB3 nodes  (Miquel Raynal)

Fill in the missing USB3 phys/phy-names DT properties of Armada 7k/8k based boards. Only update nodes actually enabling USB3 in the default (mainline) configuration. A few USB nodes are enabled but only USB2 works on them.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  arm64: dts: marvell: Add 7k/8k per-port PHYs in SATA nodes  (Miquel Raynal)

Fill in the missing SATA phys/phy-names DT properties of Armada 7k/8k based boards.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  arm64: dts: marvell: Add CP110 COMPHY clocks  (Miquel Raynal)

Declare the three clocks feeding the COMPHY block.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  Merge tag 'kvm-ppc-fixes-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc  (Radim Krčmář)

KVM/PPC fix for 5.3

- Fix bug which could leave locks locked in the host on return to a guest.
2019-08-27  arm64: dts: marvell: armada-37xx: add mailbox node  (Marek Behún)

This adds the rWTM BIU mailbox node for communication with the secure processor. The driver already exists in drivers/mailbox/armada-37xx-rwtm-mailbox.c.

Signed-off-by: Marek Behún <marek.behun@nic.cz>
Cc: Gregory Clement <gregory.clement@bootlin.com>
Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2019-08-27  powerpc/spinlocks: Fix oops in __spin_yield() on bare metal  (Christopher M. Riedl)

Booting w/ ppc64le_defconfig + CONFIG_PREEMPT on bare metal results in the oops below due to calling into __spin_yield() when not running in an SPLPAR, which means lppaca pointers are NULL.

We fixed a similar case previously in commit a6201da34ff9 ("powerpc: Fix oops due to bad access of lppaca on bare metal"), by adding SPLPAR checks in lppaca_shared_proc(). However when PREEMPT is enabled we can call __spin_yield() directly from arch_spin_yield().

To fix it add spin_yield() and rw_yield() which check that shared-processor LPAR is enabled before calling the SPLPAR-only implementation of each.

  BUG: Kernel NULL pointer dereference at 0x00000100
  Faulting instruction address: 0xc000000000097f88
  Oops: Kernel access of bad area, sig: 7 [#1]
  LE PAGE_SIZE=64K MMU=Radix MMU=Hash PREEMPT SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in:
  CPU: 0 PID: 2 Comm: kthreadd Not tainted 5.2.0-rc6-00491-g249155c20f9b #28
  NIP:  c000000000097f88 LR: c000000000c07a88 CTR: c00000000015ca10
  REGS: c0000000727079f0 TRAP: 0300   Not tainted  (5.2.0-rc6-00491-g249155c20f9b)
  MSR:  9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE>  CR: 84000424  XER: 20040000
  CFAR: c000000000c07a84 DAR: 0000000000000100 DSISR: 00080000 IRQMASK: 1
  GPR00: c000000000c07a88 c000000072707c80 c000000001546300 c00000007be38a80
  GPR04: c0000000726f0c00 0000000000000002 c00000007279c980 0000000000000100
  GPR08: c000000001581b78 0000000080000001 0000000000000008 c00000007279c9b0
  GPR12: 0000000000000000 c000000001730000 c000000000142558 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR24: c00000007be38a80 c000000000c002f4 0000000000000000 0000000000000000
  GPR28: c000000072221a00 c0000000726c2600 c00000007be38a80 c00000007be38a80
  NIP [c000000000097f88] __spin_yield+0x48/0xa0
  LR [c000000000c07a88] __raw_spin_lock+0xb8/0xc0
  Call Trace:
  [c000000072707c80] [c000000072221a00] 0xc000000072221a00 (unreliable)
  [c000000072707cb0] [c000000000bffb0c] __schedule+0xbc/0x850
  [c000000072707d70] [c000000000c002f4] schedule+0x54/0x130
  [c000000072707da0] [c0000000001427dc] kthreadd+0x28c/0x2b0
  [c000000072707e20] [c00000000000c1cc] ret_from_kernel_thread+0x5c/0x70
  Instruction dump:
  4d9e0020 552a043e 210a07ff 79080fe0 0b080000 3d020004 3908b878 794a1f24
  e8e80000 7ce7502a e8e70000 38e70100 <7ca03c2c> 70a70001 78a50020 4d820020
  ---[ end trace 474d6b2b8fc5cb7e ]---

Fixes: 499dcd41378e ("powerpc/64s: Allocate LPPACAs individually")
Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
[mpe: Reword change log a bit]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190813031314.1828-4-cmr@informatik.wtf
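A sketch of the wrappers this adds, assuming the is_shared_processor() helper from the SHARED_PROCESSOR refactoring below and the renamed splpar_*_yield() functions:

	static inline void spin_yield(arch_spinlock_t *lock)
	{
		if (is_shared_processor())
			splpar_spin_yield(lock);	/* safe: lppaca exists */
		else
			barrier();			/* bare metal: just spin */
	}

	static inline void rw_yield(arch_rwlock_t *lock)
	{
		if (is_shared_processor())
			splpar_rw_yield(lock);
		else
			barrier();
	}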
2019-08-27  MIPS: Octeon: remove duplicated include from dma-octeon.c  (YueHaibing)

Remove duplicated include.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: <linux-mips@vger.kernel.org>
Cc: <kernel-janitors@vger.kernel.org>
2019-08-27  x86/boot/compressed/64: Fix missing initialization in find_trampoline_placement()  (Kirill A. Shutemov)

Gustavo noticed that 'new' can be left uninitialized if 'bios_start' happens to be less or equal to 'entry->addr + entry->size'.

Initialize the variable at the beginning of the iteration to the current value of 'bios_start'.

Fixes: 0a46fff2f910 ("x86/boot/compressed/64: Fix boot on machines with broken E820 table")
Reported-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190826133326.7cxb4vbmiawffv2r@box
2019-08-27  KVM: PPC: Book3S HV: Don't lose pending doorbell request on migration on P9  (Paul Mackerras)

On POWER9, when userspace reads the value of the DPDES register on a vCPU, it is possible for 0 to be returned although there is a doorbell interrupt pending for the vCPU. This can lead to a doorbell interrupt being lost across migration. If the guest kernel uses doorbell interrupts for IPIs, then it could malfunction because of the lost interrupt.

This happens because a newly-generated doorbell interrupt is signalled by setting vcpu->arch.doorbell_request to 1; the DPDES value in vcpu->arch.vcore->dpdes is not updated, because it can only be updated when holding the vcpu mutex, in order to avoid races.

To fix this, we OR in vcpu->arch.doorbell_request when reading the DPDES value.

Cc: stable@vger.kernel.org # v4.13+
Fixes: 579006944e0d ("KVM: PPC: Book3S HV: Virtualize doorbell facility on POWER9")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
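The shape of the read-side fix, sketched as the DPDES case in the one_reg get path (exact surrounding context assumed):

	case KVM_REG_PPC_DPDES:
		/* Fold the software-tracked doorbell into the value held by
		 * the vcore, so userspace never sees a lost doorbell. */
		*val = get_reg_val(id, vcpu->arch.vcore->dpdes |
				       vcpu->arch.doorbell_request);
		break;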
2019-08-27  KVM: PPC: Book3S HV: Check for MMU ready on piggybacked virtual cores  (Paul Mackerras)

When we are running multiple vcores on the same physical core, they could be from different VMs and so it is possible that one of the VMs could have its arch.mmu_ready flag cleared (for example by a concurrent HPT resize) when we go to run it on a physical core. We currently check the arch.mmu_ready flag for the primary vcore but not the flags for the other vcores that will be run alongside it. This adds that check, and also a check when we select the secondary vcores from the preempted vcores list.

Cc: stable@vger.kernel.org # v4.14+
Fixes: 38c53af85306 ("KVM: PPC: Book3S HV: Fix exclusion between HPT resizing and other HPT updates")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2019-08-27  powerpc/spinlocks: Rename SPLPAR-only spinlocks  (Christopher M. Riedl)

The __rw_yield and __spin_yield locks only pertain to SPLPAR mode. Rename them to make this relationship obvious.

Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190813031314.1828-3-cmr@informatik.wtf
2019-08-27  powerpc/spinlocks: Refactor SHARED_PROCESSOR  (Christopher M. Riedl)

Determining if a processor is in shared processor mode is not a constant, so don't hide it behind a #define.

Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190813031314.1828-2-cmr@informatik.wtf
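A plausible sketch of the replacement predicate, assuming the existing lppaca_shared_proc() accessor:

	static inline bool is_shared_processor(void)
	{
	#ifdef CONFIG_PPC_SPLPAR
		return lppaca_shared_proc(local_paca->lppaca_ptr);
	#else
		return false;	/* bare metal and non-SPLPAR builds */
	#endif
	}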
2019-08-27  powerpc/64: optimise LOAD_REG_IMMEDIATE_SYM()  (Christophe Leroy)

Optimise LOAD_REG_IMMEDIATE_SYM() by using a temporary register to parallelise operations. It reduces the path from 5 to 3 instructions.

Suggested-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bad41ed02531bb0382420cbab50a0d7153b71767.1566311636.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/32: replace LOAD_MSR_KERNEL() by LOAD_REG_IMMEDIATE()  (Christophe Leroy)

LOAD_MSR_KERNEL() and LOAD_REG_IMMEDIATE() are doing the same thing in the same way. Drop LOAD_MSR_KERNEL().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8f04a6df0bc8949517fd8236d50c15008ccf9231.1566311636.git.christophe.leroy@c-s.fr
2019-08-27  powerpc: rewrite LOAD_REG_IMMEDIATE() as an intelligent macro  (Christophe Leroy)

Today LOAD_REG_IMMEDIATE() is a basic #define which loads all parts of a value into a register, including the parts that are zero. This means always 2 instructions on PPC32 and always 5 instructions on PPC64, and those instructions cannot run in parallel as they all update the same register.

For example, LOAD_REG_IMMEDIATE(r1,THREAD_SIZE) in head_64.S results in:

	3c 20 00 00	lis     r1,0
	60 21 00 00	ori     r1,r1,0
	78 21 07 c6	rldicr  r1,r1,32,31
	64 21 00 00	oris    r1,r1,0
	60 21 40 00	ori     r1,r1,16384

Rewrite LOAD_REG_IMMEDIATE() as a GAS macro in order to skip the parts that are zero. Rename the existing LOAD_REG_IMMEDIATE() to LOAD_REG_IMMEDIATE_SYM() and use that one for loading values of symbols which are not known at compile time.

Now LOAD_REG_IMMEDIATE(r1,THREAD_SIZE) in head_64.S results in:

	38 20 40 00	li      r1,16384

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d60ce8dd3a383c7adbfc322bf1d53d81724a6000.1566311636.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: split out early ioremap path.  (Christophe Leroy)

ioremap does things differently at several levels depending on whether SLAB is available or not. Separate out the early path from the start.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3acd2dbe04b04f111475e7a59f2b6f2ab9b95ab6.1566309263.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: refactor ioremap vm area setup.  (Christophe Leroy)

PPC32 and PPC64 do the same thing once SLAB is available. Create a do_ioremap() function that calls get_vm_area and does the mapping.

For PPC64, we add the 4K PFN hack sanity check to __ioremap_caller() in order to avoid using __ioremap_at(). Other checks in __ioremap_at() are irrelevant for __ioremap_caller().

On PPC64, the VM area is allocated in the range [ioremap_bot ; IOREMAP_END]. On PPC32, the VM area is allocated in the range [VMALLOC_START ; VMALLOC_END].

Let's define IOREMAP_START as ioremap_bot for PPC64, and alias IOREMAP_START/END to VMALLOC_START/END on PPC32.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/42e7e36ad32e0fdf76692426cc642799c9f689b8.1566309263.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: refactor ioremap_range() and use ioremap_page_range()  (Christophe Leroy)

book3s64's ioremap_range() is almost the same as the fallback ioremap_range(), except that it calls radix__ioremap_range() when radix is enabled. radix__ioremap_range() is also very similar to the other ones, except that it calls ioremap_page_range() when slab is available.

PPC32 __ioremap_caller() has a loop doing the same thing as ioremap_range(), so use it on PPC32 as well.

Let's keep only one version of ioremap_range(), which calls ioremap_page_range() on all platforms when slab is available. At the same time, drop the nid parameter, which is not used. A sketch of the resulting helper follows below.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4b1dca7096b01823b101be7338983578641547f1.1566309263.git.christophe.leroy@c-s.fr
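A sketch of what the single remaining helper could look like, assuming the powerpc map_kernel_page() fallback for the early (pre-slab) case; this is the described shape, not the verbatim patch:

	static int ioremap_range(unsigned long ea, phys_addr_t pa,
				 unsigned long size, pgprot_t prot)
	{
		unsigned long i;

		/* Once slab is up, let the generic vmalloc-side code map it. */
		if (slab_is_available())
			return ioremap_page_range(ea, ea + size, pa, prot);

		/* Early path: install the PTEs one page at a time. */
		for (i = 0; i < size; i += PAGE_SIZE) {
			int err = map_kernel_page(ea + i, pa + i, prot);

			if (err)
				return err;
		}
		return 0;
	}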
2019-08-27  powerpc/mm: Move ioremap functions out of pgtable_32/64.c  (Christophe Leroy)

Create ioremap_32.c and ioremap_64.c and move the respective ioremap functions out of pgtable_32.c and pgtable_64.c.

In the meantime, fix a few comments, change a printk() to pr_warn(), and fix a few over-split lines.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b5c8b02ccefd4ede64c61b53cf64fb5dacb35740.1566309263.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: make ioremap_bot common to all  (Christophe Leroy)

Drop the multiple definitions of ioremap_bot and make one common to all subarches.

Only CONFIG_PPC_BOOK3E_64 had a global static init value for ioremap_bot. Now ioremap_bot is set in early_init_mmu_global().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/920eebfd9f36f14c79d1755847f5bf7c83703bdd.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: move ioremap_prot() into ioremap.c  (Christophe Leroy)

Both versions of ioremap_prot() are identical; move them into ioremap.c.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0b3eb0e0f1490a99fd6c983e166fb8946233f151.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: move common 32/64 bits ioremap functions into ioremap.c  (Christophe Leroy)

ioremap(), ioremap_wc() and ioremap_coherent() are now identical on PPC32 and PPC64, as iowa_is_active() will always return false on PPC32. Move them into a new common location called ioremap.c.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6223803ce024d6ab4dfaa919f44098aed5b4bc33.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: rework io-workaround invocation.  (Christophe Leroy)

ppc_md.ioremap() is only used for the I/O workaround on the CELL platform, so the indirect function call can be avoided.

This patch reworks the io-workaround and ioremap() functions to use the global 'io_workaround_inited' flag to activate the io-workaround. When CONFIG_PPC_IO_WORKAROUNDS or CONFIG_PPC_INDIRECT_MMIO are not selected, the I/O workaround ioremap() becomes a no-op and the global flag is not used.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5fa3ef069fbd0f152512afaae19e7a60161454cf.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: drop function __ioremap()  (Christophe Leroy)

__ioremap() is not used anymore, drop it.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ccc439f481a0884e00a6be1bab44bab2a4477fea.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/mm: drop ppc_md.iounmap() and __iounmap()  (Christophe Leroy)

ppc_md.iounmap() is never set, drop it.

Once ppc_md.iounmap() is gone, iounmap() remains the only user of __iounmap(), and iounmap() does nothing but call __iounmap(). So drop iounmap() and make __iounmap() the new iounmap().

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d73ba92bb7a387cc58cc34666d7f5158a45851b0.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/ps3: replace __ioremap() by ioremap_prot()  (Christophe Leroy)

__ioremap() is similar to ioremap_prot(), except that ioremap_prot() performs a few additional sanity checks. The flags used by PS3 are not impacted by those checks, so for PS3 both functions are equivalent.

At the same time, drop parts of the comment that have been invalid since commit e58e87adc8bf ("powerpc/mm: Update _PAGE_KERNEL_RO").

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/36bff5d875ff562889c5e12dab63e5d7c5d1fbd8.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc: remove the ppc44x ocm.c file  (Christoph Hellwig)

The on chip memory allocator is entirely unused in the kernel tree.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7b1668941ad1041d08b19167030868de5840b153.1566309262.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/64: don't select ARCH_HAS_SCALED_CPUTIME on book3E  (Christophe Leroy)

Book3E doesn't have SPRN_SPURR/SPRN_PURR. Activating ARCH_HAS_SCALED_CPUTIME there is just wasting CPU time.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://github.com/linuxppc/issues/issues/171
Link: https://lore.kernel.org/r/a8b567c569aa521a7cf1beb061d43d79070e850c.1566492229.git.christophe.leroy@c-s.fr
2019-08-27  powerpc/64s: support nospectre_v2 cmdline option  (Christopher M. Riedl)

Add support for disabling the kernel implemented spectre v2 mitigation (count cache flush on context switch) via the nospectre_v2 and mitigations=off cmdline options.

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190524024647.381-1-cmr@informatik.wtf
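A minimal sketch of the cmdline plumbing, using the standard early_param() pattern; the flag name and where the mitigation setup consults it are assumptions:

	static bool no_spectrev2 __initdata;

	static int __init handle_nospectre_v2(char *p)
	{
		no_spectrev2 = true;
		return 0;
	}
	early_param("nospectre_v2", handle_nospectre_v2);

	/* ...later, in the count-cache-flush setup path: */
	if (no_spectrev2 || cpu_mitigations_off())
		return;		/* leave the flush disabled */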
2019-08-27  KVM: PPC: Book3S: Enable XIVE native capability only if OPAL has required functions  (Paul Mackerras)

There are some POWER9 machines where the OPAL firmware does not support the OPAL_XIVE_GET_QUEUE_STATE and OPAL_XIVE_SET_QUEUE_STATE calls. The impact of this is that a guest using XIVE natively will not be able to be migrated successfully. On the source side, the get_attr operation on the KVM native device for the KVM_DEV_XIVE_GRP_EQ_CONFIG attribute will fail; on the destination side, the set_attr operation for the same attribute will fail.

This adds tests for the existence of the OPAL get/set queue state functions, and if they are not supported, the XIVE-native KVM device is not created and the KVM_CAP_PPC_IRQ_XIVE capability returns false. Userspace can then either provide a software emulation of XIVE, or else tell the guest that it does not have a XIVE controller available to it.

Cc: stable@vger.kernel.org # v5.2+
Fixes: 3fab2d10588e ("KVM: PPC: Book3S HV: XIVE: Activate XIVE exploitation mode")
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
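The firmware probe this describes could look like the sketch below, assuming the usual opal_check_token() helper; the function name is illustrative:

	/* True only when the firmware calls needed for XIVE-native
	 * migration are actually implemented by OPAL. */
	bool xive_native_has_queue_state_support(void)
	{
		return opal_check_token(OPAL_XIVE_GET_QUEUE_STATE) &&
		       opal_check_token(OPAL_XIVE_SET_QUEUE_STATE);
	}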
2019-08-26  xtensa: remove free_initrd_mem  (Mike Rapoport)

The xtensa free_initrd_mem() verifies that the initrd is mapped and then frees its memory using free_reserved_area(). The initrd is considered mapped when its memory was successfully reserved with mem_reserve().

Resetting initrd_start to 0 in case of mem_reserve() failure allows switching to the generic free_initrd_mem() implementation.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Message-Id: <1563977432-8376-1-git-send-email-rppt@linux.ibm.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
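A sketch of the setup-time change this implies, assuming xtensa's mem_reserve() returns non-zero on failure:

	#ifdef CONFIG_BLK_DEV_INITRD
		if (initrd_start < initrd_end &&
		    !mem_reserve(__pa(initrd_start), __pa(initrd_end)))
			initrd_below_start_ok = 1;	/* reserved: keep it */
		else
			initrd_start = 0;	/* the generic free_initrd_mem()
						 * will now skip the initrd */
	#endif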
2019-08-27  KVM: PPC: Book3S: Fix incorrect guest-to-user-translation error handling  (Alexey Kardashevskiy)

H_PUT_TCE_INDIRECT handlers receive a page with up to 512 TCEs from a guest. Although we verify correctness of TCEs before we do anything with the existing tables, there is a small window when a check in kvmppc_tce_validate might pass and right after that the guest alters the page of TCEs, causing an early exit from the handler and leaving srcu_read_lock(&vcpu->kvm->srcu) (virtual mode) or lock_rmap(rmap) (real mode) locked.

This fixes the bug by jumping to the common exit code with an appropriate unlock.

Cc: stable@vger.kernel.org # v4.11+
Fixes: 121f80ba68f1 ("KVM: PPC: VFIO: Add in-kernel acceleration for VFIO")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
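The control-flow shape of the fix in the virtual-mode handler, sketched with assumed local names; the point is that the early exit now funnels through the unlock label instead of returning directly:

	for (i = 0; i < npages; ++i) {
		if (get_user(tce, tces + i)) {
			ret = H_TOO_HARD;
			goto unlock_exit;	/* was: plain return, leaking srcu */
		}
		/* ...validate and install the TCE... */
	}

	unlock_exit:
		srcu_read_unlock(&vcpu->kvm->srcu, idx);
		return ret;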
2019-08-26  arm64: dts: sdm845: Add parent clock for rpmhcc  (Vinod Koul)

The RPMh clock controller has xo as its parent, so specify that in the rpmhcc DT node.

Reviewed-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
2019-08-27  arm64: dts: imx8mq: Add system counter node  (Anson Huang)

Add the i.MX8MQ system counter node to enable the timer-imx-sysctr broadcast timer driver.

Signed-off-by: Anson Huang <Anson.Huang@nxp.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2019-08-27  arm64: dts: imx8mm: Add system counter node  (Anson Huang)

Add the i.MX8MM system counter node to enable the timer-imx-sysctr broadcast timer driver.

Signed-off-by: Anson Huang <Anson.Huang@nxp.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2019-08-26  x86/apic: Include the LDR when clearing out APIC registers  (Bandan Das)

Although APIC initialization will typically clear out the LDR before setting it, the APIC cleanup code should reset the LDR.

This was discovered with a 32-bit KVM guest jumping into a kdump kernel. The stale bits in the LDR triggered a bug in the KVM APIC implementation which caused the destination mapping for VCPUs to be corrupted.

Note that this isn't intended to paper over the KVM APIC bug. The kernel has to clear the LDR when resetting the APIC registers except when X2APIC is enabled.

This lacks a Fixes tag because missing to clear LDR goes way back into pre git history.

[ tglx: Made x2apic_enabled a function call as required ]

Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190826101513.5080-3-bsd@redhat.com
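A sketch of the cleanup step, assuming it sits in the APIC register reset path; the LDR is read-only in x2APIC mode, hence the guard:

	if (!x2apic_enabled()) {
		v = apic_read(APIC_LDR) & ~APIC_LDR_MASK;
		apic_write(APIC_LDR, v);
	}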