2019-06-03  ASoC: Intel: Baytrail: add quirk for Aegex 10 (RU2) tablet  [Kovács Tamás]
This tablet has an incorrect ACPI identifier, just like the Thinkpad 10 tablet, which is why it tries to load the RT5640 driver instead of the RT5672 driver. The RT5640 driver, on the other hand, checks the hardware ID, so no driver is loaded during boot. This fix ensures that the RT5672 driver is loaded on this tablet during boot. It also provides the correct I/O configuration, such as jack detect mode 3 for a 1.8V pull-up. I would like to thank Pierre-Louis Bossart for helping with this patch. Signed-off-by: Kovács Tamás <kepszlok@zohomail.eu> Signed-off-by: Mark Brown <broonie@kernel.org>
2019-06-03  xfs: inode btree scrubber should calculate im_boffset correctly  [Darrick J. Wong]
The im_boffset field is in units of bytes, whereas XFS_INO_OFFSET returns a value in units of inodes. Convert the units so that scrub on a 64k-block filesystem works correctly. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
2019-06-03  arm64/mm: Move PTE_VALID from SW defined to HW page table entry definitions  [Anshuman Khandual]
PTE_VALID signifies that the last level page table entry is valid and is recognized by the MMU while walking the page table. This is not a software-defined PTE bit and should not be listed like one. Just move it to the appropriate header file. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-03  arm64/hugetlb: Use macros for contiguous huge page sizes  [Anshuman Khandual]
Replace all open encoded contiguous huge page size computations with available macro encodings CONT_PTE_SIZE and CONT_PMD_SIZE. There are other instances where these macros are used in the file and this change makes it consistently use the same mnemonic. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-06-03  mmc: sdhci_am654: Fix SLOTTYPE write  [Faiz Abbas]
In the call to regmap_update_bits() for SLOTTYPE, the mask and value fields are exchanged. Fix this. Signed-off-by: Faiz Abbas <faiz_abbas@ti.com> Fixes: 41fd4caeb00b ("mmc: sdhci_am654: Add Initial Support for AM654 SDHCI driver") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
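For reference, regmap_update_bits() takes the mask before the value. A minimal sketch of a correctly ordered update (the register and field names here are placeholders, not the sdhci_am654 driver's actual symbols):

    #include <linux/regmap.h>

    /*
     * Sketch only: regmap_update_bits(map, reg, mask, val) uses @mask to
     * select which bits to touch and @val for their new contents, so
     * swapping the two arguments programs the wrong bits.
     */
    static int set_slot_type(struct regmap *map, unsigned int ctl_reg,
                             unsigned int slottype_mask, unsigned int slottype)
    {
        return regmap_update_bits(map, ctl_reg, slottype_mask, slottype);
    }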
2019-06-03  usb: typec: ucsi: ccg: fix memory leak in do_flash  [Gustavo A. R. Silva]
In case memory resources for *fw* were successfully allocated, release them before return. Addresses-Coverity-ID: 1445499 ("Resource leak") Fixes: 5c9ae5a87573 ("usb: typec: ucsi: ccg: add firmware flashing support") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
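A hedged sketch of the error-path pattern being fixed; request_firmware()/release_firmware() are the standard kernel firmware APIs, while the function name and the validation check are invented for illustration:

    #include <linux/firmware.h>
    #include <linux/device.h>
    #include <linux/errno.h>

    static int flash_example(struct device *dev, const char *name)
    {
        const struct firmware *fw;
        int err;

        err = request_firmware(&fw, name, dev);
        if (err)
            return err;

        if (fw->size == 0) {            /* stand-in for a validation failure */
            err = -EINVAL;
            goto release;               /* don't leak the firmware buffer */
        }

        /* ... flash fw->data here ... */
        err = 0;
    release:
        release_firmware(fw);
        return err;
    }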
2019-06-03  habanalabs: Fix virtual address access via debugfs for 2MB pages  [Tomer Tayar]
The debugfs interface for accessing DRAM virtual addresses currently uses the 12 LSBs of a virtual address as an offset. However, it should use the 21 LSBs in case the device MMU page size is 2MB instead of 4KB. This patch fixes the offset calculation to be based on the page size. Signed-off-by: Tomer Tayar <ttayar@habana.ai> Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
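A minimal sketch of a page-size-aware offset calculation (names are illustrative, not the habanalabs driver's):

    #include <linux/types.h>

    /* 12 offset bits for 4KB pages, 21 offset bits for 2MB pages */
    static inline u64 va_page_offset(u64 virt_addr, u64 mmu_page_size)
    {
        return virt_addr & (mmu_page_size - 1);
    }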
2019-06-03  drm/komeda: Constify the usage of komeda_component/pipeline/dev_funcs  [james qian wang (Arm Technology China)]
Depends on: - https://patchwork.freedesktop.org/series/58976/ - https://patchwork.freedesktop.org/series/59855/ Reported-by: Emil Velikov <emil.l.velikov@gmail.com> Signed-off-by: James Qian Wang (Arm Technology China) <james.qian.wang@arm.com> Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
2019-06-03  Documentation/atomic_t.txt: Clarify pure non-rmw usage  [Peter Zijlstra]
Clarify that pure non-RMW usage of atomic_t is pointless, there is nothing 'magical' about atomic_set() / atomic_read(). This is something that seems to confuse people, because I happen upon it semi-regularly. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190524115231.GN2623@hirez.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
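A small illustration of the point, mirroring the documentation's argument rather than any particular driver:

    #include <linux/atomic.h>

    static atomic_t seen = ATOMIC_INIT(0);

    static void pure_non_rmw(void)
    {
        int v;

        /* With no concurrent RMW users, these are just plain accesses: */
        atomic_set(&seen, 1);       /* roughly WRITE_ONCE(seen.counter, 1) */
        v = atomic_read(&seen);     /* roughly READ_ONCE(seen.counter)     */
        (void)v;                    /* no extra ordering or atomicity gained */
    }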
2019-06-03  locking/atomic, s390/pci: Remove redundant casts  [Mark Rutland]
Now that atomic64_read() returns s64 consistently, we don't need to explicitly cast its return value. Drop the redundant casts. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-19-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, crypto/nx: Remove redundant casts  [Mark Rutland]
Now that atomic64_read() returns s64 consistently, we don't need to explicitly cast its return value. Drop the redundant casts. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-18-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic: Use s64 for atomic64_t on 64-bit  [Mark Rutland]
Now that all architectures use s64 consistently as the base type for the atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. On architectures where atomic64_read(v) is READ_ONCE(v->counter), this patch will cause the return type of atomic64_read() to be s64. As of this patch, the atomic64 API can be relied upon to consistently return s64 where a value rather than boolean condition is returned. This should make code more robust, and simpler, allowing for the removal of casts previously required to ensure consistent types. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-17-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
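On CONFIG_64BIT the definition is expected to end up close to the following sketch (paraphrased from the description above, not a verbatim copy of the header):

    typedef struct {
        s64 counter;
    } atomic64_t;

    #define ATOMIC64_INIT(i)    { (i) }

    /* With s64 as the base type, atomic64_read() now returns s64 (sketch): */
    #define atomic64_read(v)    READ_ONCE((v)->counter)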
2019-06-03  locking/atomic, x86: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the x86 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or long long, matching the generated headers. Note that the x86 arch_atomic64 implementation is already wrapped by the generic instrumented atomic64 implementation, which uses s64 consistently. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-16-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, sparc: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the sparc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: David S. Miller <davem@davemloft.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-15-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, s390: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the s390 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch. The s390-internal __atomic64_*() ops are also used by the s390 bitops, and expect pointers to long. Since atomic64_t::counter will be converted to s64 in a subsequent patch, pointers to this are explicitly cast to pointers to long when passed to __atomic64_*() ops. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-14-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, riscv: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the RISC-V atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-13-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, riscv: Fix atomic64_sub_if_positive() offset argument  [Mark Rutland]
Presently the riscv implementation of atomic64_sub_if_positive() takes a 32-bit offset value rather than a 64-bit offset value as it should do. Thus, if called with a 64-bit offset, the value will be unexpectedly truncated to 32 bits. Fix this by taking the offset as a long rather than an int. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Palmer Dabbelt <palmer@sifive.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-12-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, powerpc: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the powerpc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-11-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, mips: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the mips atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or __s64, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long on 64-bit. This will be converted in a subsequent patch. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: James Hogan <jhogan@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paulus@samba.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-10-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, ia64: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the ia64 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long or __s64, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-9-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, arm64: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the arm64 atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch. Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as long, as this variable is also used to hold the pointer to the atomic64_t. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-8-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, arm: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the arm atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long long, matching the generated headers. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-7-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, arc: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the arc atomic64 implementation use s64 as the underlying type for atomic64_t, rather than u64, matching the generated headers. Otherwise, there should be no functional change as a result of this patch. Acked-By: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Link: https://lkml.kernel.org/r/20190522132250.26499-6-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, alpha: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the alpha atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long, matching the generated headers. As atomic64_read() depends on the generic definition of atomic64_t, this still returns long. This will be converted in a subsequent patch. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-5-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic: Use s64 for atomic64  [Mark Rutland]
As a step towards making the atomic64 API use consistent types treewide, let's have the generic atomic64 implementation use s64 as the underlying type for atomic64_t, rather than long long, matching the generated headers. Otherwise, there should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-4-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/atomic, s390/pci: Prepare for atomic64_read() conversion  [Mark Rutland]
The return type of atomic64_read() varies by architecture. It may return long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is somewhat painful, and mandates the use of explicit casts in some cases (e.g. when printing the return value). To ameliorate matters, subsequent patches will make the atomic64 API consistently use s64. As a preparatory step, this patch updates the s390 pci debug code to treat the return value of atomic64_read() as s64, using an explicit cast. This cast will be removed once the s64 conversion is complete. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: herbert@gondor.apana.org.au Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-3-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
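The preparatory pattern looks roughly like this (illustrative function and counter names; the real change is in the s390 PCI debug seq_file output):

    #include <linux/seq_file.h>
    #include <linux/atomic.h>

    static void show_counter(struct seq_file *m, const char *name, atomic64_t *cnt)
    {
        /* The explicit s64 cast keeps the %lld format correct on every
         * architecture until atomic64_read() itself returns s64. */
        seq_printf(m, "%s: %lld\n", name, (s64)atomic64_read(cnt));
    }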
2019-06-03  locking/atomic, crypto/nx: Prepare for atomic64_read() conversion  [Mark Rutland]
The return type of atomic64_read() varies by architecture. It may return long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is somewhat painful, and mandates the use of explicit casts in some cases (e.g. when printing the return value). To ameliorate matters, subsequent patches will make the atomic64 API consistently use s64. As a preparatory step, this patch updates the nx-842 code to treat the return value of atomic64_read() as s64, using explicit casts. These casts will be removed once the s64 conversion is complete. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: aou@eecs.berkeley.edu Cc: arnd@arndb.de Cc: bp@alien8.de Cc: catalin.marinas@arm.com Cc: davem@davemloft.net Cc: fenghua.yu@intel.com Cc: heiko.carstens@de.ibm.com Cc: ink@jurassic.park.msu.ru Cc: jhogan@kernel.org Cc: linux@armlinux.org.uk Cc: mattst88@gmail.com Cc: mpe@ellerman.id.au Cc: palmer@sifive.com Cc: paul.burton@mips.com Cc: paulus@samba.org Cc: ralf@linux-mips.org Cc: rth@twiddle.net Cc: tony.luck@intel.com Cc: vgupta@synopsys.com Link: https://lkml.kernel.org/r/20190522132250.26499-2-mark.rutland@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lock_events: Use raw_cpu_{add,inc}() for stats  [Peter Zijlstra]
Instead of playing silly games with CONFIG_DEBUG_PREEMPT toggling between this_cpu_*() and __this_cpu_*() use raw_cpu_*(), which is exactly what we want here. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Waiman Long <longman@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Cc: huang ying <huang.ying.caritas@gmail.com> Link: https://lkml.kernel.org/r/20190527082326.GP2623@hirez.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
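A hedged sketch of the idea; the per-CPU array below is illustrative, while raw_cpu_inc() is the standard per-CPU accessor that skips the preemption debugging checks:

    #include <linux/percpu.h>

    DEFINE_PER_CPU(unsigned long, lockevent_count[64]);   /* illustrative */

    static inline void lockevent_bump(int event)
    {
        /* Statistics only: a rare off-CPU increment is acceptable, so the
         * raw_cpu_*() form is used instead of this_cpu_*()/__this_cpu_*(). */
        raw_cpu_inc(lockevent_count[event]);
    }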
2019-06-03  locking/lockdep: Fix merging of hlocks with non-zero references  [Imre Deak]
The sequence

    static DEFINE_WW_CLASS(test_ww_class);

    struct ww_acquire_ctx ww_ctx;
    struct ww_mutex ww_lock_a;
    struct ww_mutex ww_lock_b;
    struct ww_mutex ww_lock_c;
    struct mutex lock_c;

    ww_acquire_init(&ww_ctx, &test_ww_class);

    ww_mutex_init(&ww_lock_a, &test_ww_class);
    ww_mutex_init(&ww_lock_b, &test_ww_class);
    ww_mutex_init(&ww_lock_c, &test_ww_class);
    mutex_init(&lock_c);

    ww_mutex_lock(&ww_lock_a, &ww_ctx);
    mutex_lock(&lock_c);
    ww_mutex_lock(&ww_lock_b, &ww_ctx);
    ww_mutex_lock(&ww_lock_c, &ww_ctx);

    mutex_unlock(&lock_c);          (*)

    ww_mutex_unlock(&ww_lock_c);
    ww_mutex_unlock(&ww_lock_b);
    ww_mutex_unlock(&ww_lock_a);

    ww_acquire_fini(&ww_ctx);       (**)

will trigger the following error in __lock_release() when calling mutex_release() at **:

    DEBUG_LOCKS_WARN_ON(depth <= 0)

The problem is that the hlock merging happening at * updates the references for test_ww_class incorrectly to 3 whereas it should've updated it to 4 (representing all the instances for ww_ctx and ww_lock_[abc]). Fix this by updating the references during merging correctly, taking into account that we can have non-zero references (both for the hlock that we merge into another hlock and for the hlock we are merging into). Signed-off-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Link: https://lkml.kernel.org/r/20190524201509.9199-2-imre.deak@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Fix OOO unlock when hlocks need merging  [Imre Deak]
The sequence

    static DEFINE_WW_CLASS(test_ww_class);

    struct ww_acquire_ctx ww_ctx;
    struct ww_mutex ww_lock_a;
    struct ww_mutex ww_lock_b;
    struct mutex lock_c;
    struct mutex lock_d;

    ww_acquire_init(&ww_ctx, &test_ww_class);

    ww_mutex_init(&ww_lock_a, &test_ww_class);
    ww_mutex_init(&ww_lock_b, &test_ww_class);
    mutex_init(&lock_c);

    ww_mutex_lock(&ww_lock_a, &ww_ctx);
    mutex_lock(&lock_c);
    ww_mutex_lock(&ww_lock_b, &ww_ctx);

    mutex_unlock(&lock_c);          (*)

    ww_mutex_unlock(&ww_lock_b);
    ww_mutex_unlock(&ww_lock_a);

    ww_acquire_fini(&ww_ctx);

triggers the following WARN in __lock_release() when doing the unlock at *:

    DEBUG_LOCKS_WARN_ON(curr->lockdep_depth != depth - 1);

The problem is that the WARN check doesn't take into account the merging of ww_lock_a and ww_lock_b which results in decreasing curr->lockdep_depth by 2 not only 1. Note that the following sequence doesn't trigger the WARN, since there won't be any hlock merging:

    ww_acquire_init(&ww_ctx, &test_ww_class);

    ww_mutex_init(&ww_lock_a, &test_ww_class);
    ww_mutex_init(&ww_lock_b, &test_ww_class);
    mutex_init(&lock_c);
    mutex_init(&lock_d);

    ww_mutex_lock(&ww_lock_a, &ww_ctx);
    mutex_lock(&lock_c);
    mutex_lock(&lock_d);
    ww_mutex_lock(&ww_lock_b, &ww_ctx);

    mutex_unlock(&lock_d);

    ww_mutex_unlock(&ww_lock_b);
    ww_mutex_unlock(&ww_lock_a);

    mutex_unlock(&lock_c);

    ww_acquire_fini(&ww_ctx);

In general both of the above two sequences are valid and shouldn't trigger any lockdep warning. Fix this by taking the decrement due to the hlock merging into account during lock release and hlock class re-setting. Merging can't happen during lock downgrading since there won't be a new possibility to merge hlocks in that case, so add a WARN if merging still happens then. Signed-off-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: ville.syrjala@linux.intel.com Link: https://lkml.kernel.org/r/20190524201509.9199-1-imre.deak@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  s390/cio: Remove vfio-ccw checks of command codes  [Eric Farman]
If the CCW being processed is a No-Operation, then by definition no data is being transferred. Let's fold those checks into the normal CCW processors, rather than skipping out early. Likewise, if the CCW being processed is a "test" (a category defined here as an opcode that contains zero in the lowest four bits) then no special processing is necessary as far as vfio-ccw is concerned. These command codes have not been valid since the S/370 days, meaning they are invalid in the same way as one that ends in an eight [1] or an otherwise valid command code that is undefined for the device type in question. Considering that, let's just process "test" CCWs like any other CCW, and send everything to the hardware. [1] POPS states that a x08 is a TIC CCW, and that having any high-order bits enabled is invalid for format-1 CCWs. For format-0 CCWs, the high-order bits are ignored. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190516161403.79053-4-farman@linux.ibm.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
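Based only on the definition given above, the "test" category amounts to something like this helper (illustrative, not the actual vfio-ccw code):

    #include <linux/types.h>

    /* A "test" CCW: a command code with zero in the lowest four bits. */
    static inline bool ccw_is_test(u8 cmd_code)
    {
        return (cmd_code & 0x0f) == 0;
    }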
2019-06-03  s390/cio: Allow zero-length CCWs in vfio-ccw  [Eric Farman]
It is possible that a guest might issue a CCW with a length of zero, and will expect a particular response. Consider this chain:

       Address   Format-1 CCW
       --------  -----------------
    0  33110EC0  346022CC 33177468
    1  33110EC8  CF200000 3318300C

CCW[0] moves a little more than two pages, but also has the Suppress Length Indication (SLI) bit set to handle the expectation that considerably less data will be moved. CCW[1] also has the SLI bit set, and has a length of zero. Once vfio-ccw does its magic, the kernel issues a start subchannel on behalf of the guest with this:

       Address   Format-1 CCW
       --------  -----------------
    0  021EDED0  346422CC 021F0000
    1  021EDED8  CF240000 3318300C

Both CCWs were converted to an IDAL and have the corresponding flags set (which is by design), but only the data address of the first CCW is converted to something the host is aware of. The second CCW still has the address used by the guest, which happens to be (A) (probably) an invalid address for the host, and (B) an invalid IDAW address (doubleword boundary, etc.). While the I/O fails, it doesn't fail correctly. In this example, we would receive a program check for an invalid IDAW address, instead of a unit check for an invalid command. To fix this, revert commit 4cebc5d6a6ff ("vfio: ccw: validate the count field of a ccw before pinning") and allow the individual fetch routines to process them like anything else. We'll make a slight adjustment to our allocation of the pfn_array (for direct CCWs) or IDAL (for IDAL CCWs) memory, so that we have room for at least one address even though no guest memory will be pinned and thus the IDAW will not be populated with a host address. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190516161403.79053-3-farman@linux.ibm.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Don't pin vfio pages for empty transfers  [Eric Farman]
The skip flag of a CCW offers the possibility of data not being transferred, but is only meaningful for certain commands. Specifically, it is only applicable for a read, read backward, sense, or sense ID CCW and will be ignored for any other command code (SA22-7832-11 page 15-64, and figure 15-30 on page 15-75). (A sense ID is xE4, while a sense is x04 with possible modifiers in the upper four bits. So we will cover the whole "family" of sense CCWs.) For those scenarios, since there is no requirement for the target address to be valid, we should skip the call to vfio_pin_pages() and rely on the IDAL address we have allocated/built for the channel program. The fact that the individual IDAWs within the IDAL are invalid is fine, since they aren't actually checked in these cases. Set pa_nr to zero when skipping the pfn_array_pin() call, since it is defined as the number of pages pinned and is used to determine whether to call vfio_unpin_pages() upon cleanup. The pfn_array_pin() routine returns the number of pages that were pinned, but now might be skipped for some CCWs. Thus we need to calculate the expected number of pages ourselves such that we are guaranteed to allocate a reasonable number of IDAWs, which will provide a valid address in CCW.CDA regardless of whether the IDAWs are filled in with pinned/translated addresses or not. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190516161403.79053-2-farman@linux.ibm.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
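A hedged sketch of the page-count calculation described above, with a floor of one so a valid IDAL address always exists even when nothing is pinned (plain arithmetic assuming 4KB pages; not the driver's code):

    /* Number of 4KB pages (IDAWs) spanned by a transfer of @bytes at @iova. */
    static unsigned long idaw_nr(unsigned long iova, unsigned long bytes)
    {
        unsigned long first = iova >> 12;
        unsigned long last  = (iova + (bytes ? bytes : 1) - 1) >> 12;

        return last - first + 1;    /* >= 1 even for an empty transfer */
    }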
2019-06-03  s390/cio: Initialize the host addresses in pfn_array  [Eric Farman]
Let's initialize the host address to something that is invalid, rather than letting it default to zero. This just makes it easier to notice when a pin operation has failed or been skipped. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190514234248.36203-5-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Split pfn_array_alloc_pin into pieces  [Eric Farman]
The pfn_array_alloc_pin routine is doing too much. Today, it does the alloc of the pfn_array struct and its member arrays, builds the iova address lists out of a contiguous piece of guest memory, and asks vfio to pin the resulting pages. Let's effectively revert a significant portion of commit 5c1cfb1c3948 ("vfio: ccw: refactor and improve pfn_array_alloc_pin()") such that we break pfn_array_alloc_pin() into its component pieces, and have one routine that allocates/populates the pfn_array structs, and another that actually pins the memory. In the future, we will be able to handle scenarios where pinning memory isn't actually appropriate. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190514234248.36203-4-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Set vfio-ccw FSM state before ioeventfd  [Eric Farman]
Otherwise, the guest can believe it's okay to start another I/O and bump into the non-idle state. This results in a cc=2 (with the asynchronous CSCH/HSCH code) returned to the guest, which is unfortunate since everything is otherwise working normally. Signed-off-by: Eric Farman <farman@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Message-Id: <20190514234248.36203-3-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  s390/cio: Update SCSW if it points to the end of the chain  [Eric Farman]
Per the POPs [1], when processing an interrupt the SCSW.CPA field of an IRB generally points to 8 bytes after the last CCW that was executed (there are exceptions, but this is the most common behavior). In the case of an error, this points us to the first un-executed CCW in the chain. But in the case of normal I/O, the address points beyond the end of the chain. While the guest generally only cares about this when possibly restarting a channel program after error recovery, we should convert the address even in the good scenario so that we provide a consistent, valid, response upon I/O completion. [1] Figure 16-6 in SA22-7832-11. The footnotes in that table also state that this is true even if the resulting address is invalid or protected, but moving to the end of the guest chain should not be a surprise. Signed-off-by: Eric Farman <farman@linux.ibm.com> Message-Id: <20190514234248.36203-2-farman@linux.ibm.com> Reviewed-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2019-06-03  x86/power: Fix 'nosmt' vs hibernation triple fault during resume  [Jiri Kosina]
As explained in 0cc3cd21657b ("cpu/hotplug: Boot HT siblings at least once") we always, no matter what, have to bring up x86 HT siblings during boot at least once in order to avoid the first MCE bringing the system to its knees. That means that whenever 'nosmt' is supplied on the kernel command line, all the HT siblings are as a result sitting in mwait or cpuidle after going through the online-offline cycle at least once. This causes a serious issue though when a kernel, which saw 'nosmt' on its command line, is going to perform resume from hibernation: if the resume from the hibernated image is successful, cr3 is flipped in order to point to the address space of the kernel that is being resumed, which in turn means that all the HT siblings are all of a sudden mwaiting on an address which is no longer valid. That results in a triple fault shortly after cr3 is switched, and the machine reboots. Fix this by always waking up all the SMT siblings before initiating the 'restore from hibernation' process; this guarantees that all the HT siblings will be properly carried over to the resumed kernel waiting in resume_play_dead(), and acted upon accordingly afterwards, based on the target kernel configuration. Symmetrically, the resumed kernel has to push the SMT siblings to mwait again in case it has SMT disabled; this means it has to online all the siblings when resuming (so that they come out of hlt) and offline them again to let them reach mwait. Cc: 4.19+ <stable@vger.kernel.org> # v4.19+ Debugged-by: Thomas Gleixner <tglx@linutronix.de> Fixes: 0cc3cd21657b ("cpu/hotplug: Boot HT siblings at least once") Signed-off-by: Jiri Kosina <jkosina@suse.cz> Acked-by: Pavel Machek <pavel@ucw.cz> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-06-03  locking/lockdep: Remove !dir in lock irq usage check  [Yuyang Du]
In mark_lock_irq(), the following checks are performed:

     ----------------------------------
    |     ->    | unsafe | read unsafe |
    |-----------|--------|-------------|
    | safe      |  F  B  |   F*  B*    |
    |-----------|--------|-------------|
    | read safe |  F? B* |     -       |
     ----------------------------------

Where:

    F: check_usage_forwards
    B: check_usage_backwards
    *: check enabled by STRICT_READ_CHECKS
    ?: check enabled by the !dir condition

From a checking point of view, the special F? case does not make sense, though it was perhaps added out of performance concerns. As a later patch will address this issue, remove this exception, which makes the checks consistent later. With STRICT_READ_CHECKS = 1, which is the default, there is no functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-24-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Adjust new bit cases in mark_lock  [Yuyang Du]
The new bit can be any possible lock usage except it is garbage, so the cases in switch can be made simpler. Warn early on if wrong usage bit is passed without taking locks. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-23-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Consolidate lock usage bit initialization  [Yuyang Du]
Lock usage bit initialization is consolidated into one function mark_usage(). Trivial readability improvement. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-22-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Check redundant dependency only when CONFIG_LOCKDEP_SMALL  [Yuyang Du]
As Peter has put it all soundly and completely for the cause, I simply quote: "It (check_redundant) was added for cross-release (which has since been reverted) which would generate a lot of redundant links (IIRC) but having it makes the reports more convoluted -- basically, if we had an A-B-C relation, then A-C will not be added to the graph because it is already covered. This then means any report will include B, even though a shorter cycle might have been possible." This would increase the number of direct dependencies. For a simple workload (make clean; reboot; make vmlinux -j8), the data looks like this:

    CONFIG_LOCKDEP_SMALL:  direct dependencies: 6926
    !CONFIG_LOCKDEP_SMALL: direct dependencies: 9052 (+30.7%)

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-21-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Refactorize check_noncircular and check_redundant  [Yuyang Du]
These two functions now handle different check results themselves. A new check_path function is added to check whether there is a path in the dependency graph. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-20-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Remove unused argument in __lock_release  [Yuyang Du]
The @nested argument is not used in __lock_release(), so remove it, despite the fact that it is not used in lock_release() in the first place either. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-19-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Remove redundant argument in check_deadlock  [Yuyang Du]
In check_deadlock(), the third argument read comes from the second argument hlock so that it can be removed. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-18-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Add explanation to lock usage rules in lockdep design doc  [Yuyang Du]
The IRQ usage and lock dependency rules that, if violated, may lead to a deadlock are explained in more detail. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-17-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Update comments on dependency search  [Yuyang Du]
The breadth-first search is implemented as flat-out non-recursive now, but the comments are still describing it as recursive, update the comments in that regard. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-16-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Avoid constant checks in __bfs by using offset reference  [Yuyang Du]
When searching for a dependency in the lock graph, there are constant checks for whether the search is forward or backward. Directly reference the field offset of the struct that differentiates the type of search, to avoid those checks. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-15-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
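A self-contained illustration of the technique, with simplified stand-ins for lockdep's lock_class and its dependency lists (not the kernel code itself):

    #include <linux/list.h>
    #include <linux/stddef.h>

    struct klass {                          /* stand-in for struct lock_class */
        struct list_head locks_after;       /* forward dependencies  */
        struct list_head locks_before;      /* backward dependencies */
    };

    /* Resolve the direction once, as a byte offset, instead of branching
     * on "forward or backward?" at every step of the search. */
    static inline struct list_head *get_dep_list(struct klass *k, int offset)
    {
        return (void *)k + offset;
    }

    /* forward search:  get_dep_list(k, offsetof(struct klass, locks_after));
     * backward search: get_dep_list(k, offsetof(struct klass, locks_before)); */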
2019-06-03  locking/lockdep: Change the return type of __cq_dequeue()  [Yuyang Du]
With this change, we can slightly adjust how the queue is iterated in the BFS search, which simplifies the code. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bvanassche@acm.org Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-14-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-03  locking/lockdep: Change type of the element field in circular_queue  [Yuyang Du]
The element field is an array in struct circular_queue used to keep track of locks in the search. Making it the same type as the locks avoids type casts. Also fix a typo and elaborate the comment above struct circular_queue. No functional change. Signed-off-by: Yuyang Du <duyuyang@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: frederic@kernel.org Cc: ming.lei@redhat.com Cc: will.deacon@arm.com Link: https://lkml.kernel.org/r/20190506081939.74287-13-duyuyang@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
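A hedged sketch of the resulting shape (the size and field layout are illustrative; the real structure lives in kernel/locking/lockdep.c):

    #define CQ_SIZE 4096

    struct lock_list;                       /* the BFS works on these */

    /*
     * Queue used for the breadth-first dependency search. Storing
     * struct lock_list pointers directly (rather than unsigned long)
     * removes the need for casts at every enqueue/dequeue.
     */
    struct circular_queue {
        struct lock_list *element[CQ_SIZE];
        unsigned int front, rear;
    };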