path: root/arch/mips/include/asm/cmpxchg.h
Age  Commit message  Author
2023-04-29  locking/arch: Rename all internal __xchg() names to __arch_xchg()  (Andrzej Hajda)
Decrease the probability of this internal facility being used by driver code. Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k] Acked-by: Palmer Dabbelt <palmer@rivosinc.com> [riscv] Link: https://lore.kernel.org/r/20230118154450.73842-1-andrzej.hajda@intel.com Cc: Linus Torvalds <torvalds@linux-foundation.org>
2022-01-05  MIPS: retire "asm/llsc.h"  (Huang Pei)
All that "asm/llsc.h" does is help inline asm, which can be stringified from "asm/asm.h". Since "asm/asm.h" has all we need, retire "asm/llsc.h" and remove the now-unused header file. Inspired-by: Maciej W. Rozycki <macro@orcam.me.uk> Signed-off-by: Huang Pei <huangpei@loongson.cn> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
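As a rough illustration of the idea (the macro names below are hypothetical, not the actual header contents), the width-dependent mnemonics can live in "asm/asm.h" as strings and be pasted straight into inline-asm templates, so no separate "asm/llsc.h" is needed:

	/* hypothetical sketch of the stringified-mnemonic pattern */
	#ifdef CONFIG_64BIT
	#define __LL_INSN	"lld\t"
	#define __SC_INSN	"scd\t"
	#else
	#define __LL_INSN	"ll\t"
	#define __SC_INSN	"sc\t"
	#endif

	/* used as:  asm volatile(__LL_INSN "%0, %1\n\t" ...);  */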
2021-10-24  MIPS: Fix assembly error from MIPSr2 code used within MIPS_ISA_ARCH_LEVEL  (Maciej W. Rozycki)
Fix assembly errors like: {standard input}: Assembler messages: {standard input}:287: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32' {standard input}:680: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32' {standard input}:1274: Error: opcode not supported on this processor: mips3 (mips3) `dins $12,$9,32,32' {standard input}:2175: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32' make[1]: *** [scripts/Makefile.build:277: mm/highmem.o] Error 1 with code produced from `__cmpxchg64' for MIPS64r2 CPU configurations using CONFIG_32BIT and CONFIG_PHYS_ADDR_T_64BIT. This is due to MIPS_ISA_ARCH_LEVEL downgrading the assembly architecture to `r4000' i.e. MIPS III for MIPS64r2 configurations, while there is a block of code containing a DINS MIPS64r2 instruction conditionalized on MIPS_ISA_REV >= 2 within the scope of the downgrade. The assembly architecture override code pattern has been put there for LL/SC instructions, so that code compiles for configurations that select a processor to build for that does not support these instructions while still providing run-time support for processors that do, dynamically switched by non-constant `cpu_has_llsc'. It went in with linux-mips.org commit aac8aa7717a2 ("Enable a suitable ISA for the assembler around ll/sc so that code builds even for processors that don't support the instructions. Plus minor formatting fixes.") back in 2005. Fix the problem by wrapping these instructions along with the adjacent SYNC instructions only, following the practice established with commit cfd54de3b0e4 ("MIPS: Avoid move psuedo-instruction whilst using MIPS_ISA_LEVEL") and commit 378ed6f0e3c5 ("MIPS: Avoid using .set mips0 to restore ISA"). Strictly speaking the SYNC instructions do not have to be wrapped as they are only used as a Loongson3 erratum workaround, so they will be enabled in the assembler by default, but do this so as to keep code consistent with other places. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk> Fixes: c7e2d71dda7a ("MIPS: Fix set_pte() for Netlogic XLR using cmpxchg64()") Cc: stable@vger.kernel.org # v5.1+ Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
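In sketch form (an illustrative fragment, not the literal header text), the fix confines the ISA override to the LL/SC and SYNC instructions themselves, so MIPSr2 instructions such as DINS outside the wrapped region still assemble against the kernel's real ISA level:

	"	.set	push					\n"
	"	.set	" MIPS_ISA_ARCH_LEVEL "			\n"
	"	ll	%0, %1		# load-linked		\n"
	"	.set	pop					\n"
	/* MIPSr2-only code, e.g. "dins ...", stays outside the override */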
2021-05-26  locking/atomic: mips: move to ARCH_ATOMIC  (Mark Rutland)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers). As a step towards that, this patch migrates mips to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Will Deacon <will@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210525140232.53872-23-mark.rutland@arm.com
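A hedged sketch of the split described (wrapper details are illustrative): the architecture defines the arch_*() operations and the common code layers optional instrumentation on top before exposing the regular names:

	/* arch/mips (sketch): */
	#define arch_cmpxchg(ptr, o, n)					\
		__cmpxchg((ptr), (unsigned long)(o), (unsigned long)(n),\
			  sizeof(*(ptr)))

	/* common code (sketch): instrument, then call the arch op */
	#define cmpxchg(ptr, o, n)					\
	({								\
		instrument_atomic_read_write(ptr, sizeof(*(ptr)));	\
		arch_cmpxchg(ptr, o, n);				\
	})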
2021-05-26  locking/atomic: cmpxchg: make `generic` a prefix  (Mark Rutland)
The asm-generic implementations of cmpxchg_local() and cmpxchg64_local() use a `_generic` suffix to distinguish themselves from arch code or wrappers used elsewhere. Subsequent patches will add ARCH_ATOMIC support to these implementations, and will distinguish more functions with a `generic` portion. To align with how ARCH_ATOMIC uses an `arch_` prefix, it would be helpful to use a `generic_` prefix rather than a `_generic` suffix. In preparation for this, this patch renames the existing functions to make `generic` a prefix rather than a suffix. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210525140232.53872-12-mark.rutland@arm.com
2021-01-15  MIPS: Compare __SYNC_loongson3_war against 0  (Nathan Chancellor)
When building with clang when CONFIG_CPU_LOONGSON3_WORKAROUNDS is enabled: In file included from lib/errseq.c:4: In file included from ./include/linux/atomic.h:7: ./arch/mips/include/asm/atomic.h:52:1: warning: converting the result of '<<' to a boolean always evaluates to true [-Wtautological-constant-compare] ATOMIC_OPS(atomic64, s64) ^ ./arch/mips/include/asm/atomic.h:40:9: note: expanded from macro 'ATOMIC_OPS' return cmpxchg(&v->counter, o, n); ^ ./arch/mips/include/asm/cmpxchg.h:194:7: note: expanded from macro 'cmpxchg' if (!__SYNC_loongson3_war) ^ ./arch/mips/include/asm/sync.h:147:34: note: expanded from macro '__SYNC_loongson3_war' # define __SYNC_loongson3_war (1 << 31) ^ While it is not wrong that the result of this shift is always true in a boolean context, it is not a problem here. Regardless, the warning is really noisy so rather than making the shift a boolean implicitly, use it in an equality comparison so the shift is used as an integer value. Fixes: 4d1dbfe6cbec ("MIPS: atomic: Emit Loongson3 sync workarounds within asm") Fixes: a91f2a1dba44 ("MIPS: cmpxchg: Omit redundant barriers for Loongson3") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
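The resulting change, in sketch form (context abridged from the warning quoted above):

	/* before: (1 << 31) used in a boolean context */
	if (!__SYNC_loongson3_war)
		smp_mb__before_llsc();

	/* after: an integer comparison with the same meaning, no warning */
	if (__SYNC_loongson3_war == 0)
		smp_mb__before_llsc();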
2019-11-01  Merge tag 'mips_fixes_5.4_3' into mips-next  (Paul Burton)
Pull in mips-fixes primarily to gain build fixes in order to allow better testing of mips-next. A few MIPS fixes: - Fix VDSO time-related function behavior for systems where we need to fall back to syscalls, but were instead returning bogus results. - A fix to TLB exception handlers for Cavium Octeon systems where they would inadvertently clobber the $1/$at register. - A build fix for bcm63xx configurations. - Switch to using my @kernel.org email address. Signed-off-by: Paul Burton <paulburton@kernel.org>
2019-10-09  MIPS: include: Mark __xchg as __always_inline  (Thomas Bogendoerfer)
Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") allows the compiler to uninline functions marked as 'inline'. In the case of __xchg this would cause a reference to the function __xchg_called_with_bad_pointer, which is an error case for catching bugs and will not happen for correct code if __xchg is inlined. Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org
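A condensed sketch (not the verbatim header) of why inlining matters here: only when the switch below is folded at compile time does the default branch, and with it the call to the deliberately missing helper, disappear from the object file.

	extern unsigned long __xchg_called_with_bad_pointer(void);

	static __always_inline
	unsigned long __xchg(volatile void *ptr, unsigned long x, int size)
	{
		switch (size) {
		case 4:
			return __xchg_asm("ll", "sc", (volatile u32 *)ptr, x);
		case 8:
			return __xchg_asm("lld", "scd", (volatile u64 *)ptr, x);
		default:
			/* must be optimized away for supported sizes */
			return __xchg_called_with_bad_pointer();
		}
	}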
2019-10-07  MIPS: cmpxchg: Omit redundant barriers for Loongson3  (Paul Burton)
When building a kernel configured to support Loongson3 LL/SC workarounds (ie. CONFIG_CPU_LOONGSON3_WORKAROUNDS=y) the inline assembly in __xchg_asm() & __cmpxchg_asm() already emits completion barriers, and as such we don't need to emit extra barriers from the xchg() or cmpxchg() macros. Add compile-time constant checks causing us to omit the redundant memory barriers. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
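In sketch form, assuming the __SYNC_loongson3_war constant from asm/sync.h, the extra barriers are only emitted when the workaround (and its built-in completion barriers) is not compiled in:

	#define xchg(ptr, x)						\
	({								\
		__typeof__(*(ptr)) __res;				\
									\
		/* the asm already ends in a completion barrier when	\
		 * the Loongson3 workaround is built in, so skip the	\
		 * extra one in that case */				\
		if (!__SYNC_loongson3_war)				\
			smp_mb__before_llsc();				\
									\
		__res = (__typeof__(*(ptr)))				\
			__xchg((ptr), (unsigned long)(x), sizeof(*(ptr))); \
									\
		if (!__SYNC_loongson3_war)				\
			smp_llsc_mb();					\
									\
		__res;							\
	})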
2019-10-07  MIPS: cmpxchg: Emit Loongson3 sync workarounds within asm  (Paul Burton)
Generate the sync instructions required to work around Loongson3 LL/SC errata within inline asm blocks, which feels a little safer than doing it from C where, strictly speaking, the compiler would be well within its rights to insert a memory access between the separate asm statements we previously had, containing sync & ll instructions respectively. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
2019-10-07  MIPS: Unify sc beqz definition  (Paul Burton)
We currently duplicate the definition of __scbeqz in asm/atomic.h & asm/cmpxchg.h. Move it to asm/llsc.h & rename it to __SC_BEQZ to fit better with the existing __SC macro provided there. We include a tab in the string in order to avoid the need for users to indent code any further to include whitespace of their own after the instruction mnemonic. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Huacai Chen <chenhc@lemote.com> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: linux-kernel@vger.kernel.org
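A hedged sketch of the unified definition (the exact spelling in asm/llsc.h may differ):

	/*
	 * Branch used to test an sc/scd result; branch-likely for the
	 * R10000 workaround, plain beqz otherwise.  The trailing tab means
	 * users need no extra whitespace after the mnemonic.
	 */
	#if R10000_LLSC_WAR
	# define __SC_BEQZ	"beqzl	"
	#else
	# define __SC_BEQZ	"beqz	"
	#endif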
2019-10-07  MIPS: include: Mark __cmpxchg as __always_inline  (Thomas Bogendoerfer)
Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") allows the compiler to uninline functions marked as 'inline'. In the case of cmpxchg this would cause a reference to the function __cmpxchg_called_with_bad_pointer, which is an error case for catching bugs and will not happen for correct code if __cmpxchg is inlined. Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> [paul.burton@mips.com: s/__cmpxchd/__cmpxchg in subject] Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org
2019-08-31  mips/atomic: Fix smp_mb__{before,after}_atomic()  (Peter Zijlstra)
Recent probing at the Linux Kernel Memory Model uncovered a 'surprise'. Strongly ordered architectures where the atomic RmW primitive implies full memory ordering and smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS without WEAK_REORDERING_BEYOND_LLSC) fail for: *x = 1; atomic_inc(u); smp_mb__after_atomic(); r0 = *y; Because, while the atomic_inc() implies memory order, it (surprisingly) does not provide a compiler barrier. This then allows the compiler to re-order like so: atomic_inc(u); *x = 1; smp_mb__after_atomic(); r0 = *y; Which the CPU is then allowed to re-order (under TSO rules) like: atomic_inc(u); r0 = *y; *x = 1; And this very much was not intended. Therefore strengthen the atomic RmW ops to include a compiler barrier. Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul Burton <paul.burton@mips.com>
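The strengthening amounts to making the RmW asm act as a compiler barrier as well; a minimal illustrative fragment (not the actual atomic.h code) of what that looks like on an LL/SC-based atomic add:

	asm volatile(
	"1:	ll	%0, %1		# load-linked		\n"
	"	addu	%0, %2					\n"
	"	sc	%0, %1		# store-conditional	\n"
	"	beqz	%0, 1b					\n"
	: "=&r" (temp), "+m" (v->counter)
	: "r" (i)
	: "memory");	/* the "memory" clobber is the compiler barrier */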
2019-08-31  mips/atomic: Fix loongson_llsc_mb() wreckage  (Peter Zijlstra)
The comment describing the loongson_llsc_mb() reorder case doesn't make any sense whatsoever. Instruction re-ordering is not an SMP artifact, but rather a CPU-local phenomenon. Clarify the comment by explaining that these issues cause a coherence failure. For the branch speculation case: if futex_atomic_cmpxchg_inatomic() needs one at the bne branch target, then surely the normal __cmpxchg_asm() implementation does too. We cannot rely on the barriers from cmpxchg() because cmpxchg_local() is implemented with the same macro, and branch prediction and speculation are CPU local too. Fixes: e02e07e3127d ("MIPS: Loongson: Introduce and use loongson_llsc_mb()") Cc: Huacai Chen <chenhc@lemote.com> Cc: Huang Pei <huangpei@loongson.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul Burton <paul.burton@mips.com>
2019-08-31  mips/atomic: Fix cmpxchg64 barriers  (Peter Zijlstra)
There were no memory barriers on the 32bit implementation of cmpxchg64(). Fix this. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul Burton <paul.burton@mips.com>
2019-02-06  MIPS: Fix set_pte() for Netlogic XLR using cmpxchg64()  (Paul Burton)
Commit 46011e6ea392 ("MIPS: Make set_pte() SMP safe.") introduced an open-coded version of cmpxchg() within set_pte(), that always operated on a value the size of an unsigned long. That is, it used ll/sc instructions when CONFIG_32BIT=y or lld/scd instructions when CONFIG_64BIT=y. This was broken for configurations in which pte_t is larger than an unsigned long (with the exception of XPA configurations which have a different implementation of set_pte()), because we no longer update the whole PTE. Indeed commit 46011e6ea392 ("MIPS: Make set_pte() SMP safe.") notes: > The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not* > handled. In practice this affects Netlogic XLR/XLS systems including nlm_xlr_defconfig. Commit 82f4f66ddf11 ("MIPS: Remove open-coded cmpxchg() in set_pte()") then replaced this open-coded version of cmpxchg() with an actual call to cmpxchg(). Unfortunately the configurations mentioned above then fail to build because cmpxchg() can only operate on values 32 bits or smaller in size, resulting in: arch/mips/include/asm/cmpxchg.h:166:11: error: call to '__cmpxchg_called_with_bad_pointer' declared with attribute error: Bad argument size for cmpxchg One option that would fix the build failure & restore the previous behaviour would be to cast the pte pointer to a pointer to unsigned long, so that cmpxchg() would operate on just 32 bits of the PTE as it has been since commit 46011e6ea392 ("MIPS: Make set_pte() SMP safe."). That feels like an ugly hack though, and the behaviour of set_pte() is likely a little broken. Instead we take advantage of the fact that the affected configurations already know at compile time that the CPU will support 64 bits (ie. have hardcoded cpu_has_64bits in cpu-feature-overrides.h) in order to allow cmpxchg64() to be used in these configurations. set_pte() then makes use of cmpxchg64() when necessary. Signed-off-by: Paul Burton <paul.burton@mips.com> Fixes: 46011e6ea392 ("MIPS: Make set_pte() SMP safe.") Fixes: 82f4f66ddf11 ("MIPS: Remove open-coded cmpxchg() in set_pte()")
2018-11-09  MIPS: Avoid using .set mips0 to restore ISA  (Paul Burton)
We currently have 2 commonly used methods for switching ISA within assembly code, then restoring the original ISA. 1) Using a pair of .set push & .set pop directives. For example: .set push .set mips32r2 <some_insn> .set pop 2) Using .set mips0 to restore the ISA originally specified on the command line. For example: .set mips32r2 <some_insn> .set mips0 Unfortunately method 2 does not work with nanoMIPS toolchains, where the assembler rejects the .set mips0 directive like so: Error: cannot change ISA from nanoMIPS to mips0 In preparation for supporting nanoMIPS builds, switch all instances of method 2 in generic non-platform-specific code to use push & pop as in method 1 instead. The .set push & .set pop is arguably cleaner anyway, and if nothing else it's good to consistently use one method. Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/21037/ Cc: linux-mips@linux-mips.org
2017-11-15  Merge tag 'mips_4.15'  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/mips Pull MIPS updates from James Hogan: "These are the main MIPS changes for 4.15. Fixes: - ralink: Fix MT7620 PCI build issues (4.5) - Disable cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN for 32-bit SMP (4.1) - Fix MIPS64 FP save/restore on 32-bit kernels (4.0) - ptrace: Pick up ptrace/seccomp changed syscall numbers (3.19) - ralink: Fix MT7628 pinmux (3.19) - BCM47XX: Fix LED inversion on WRT54GSv1 (3.17) - Fix n32 core dumping as o32 since regset support (3.13) - ralink: Drop obsolete USB_ARCH_HAS_HCD select Build system: - Default to "generic" (multiplatform) system type instead of IP22 - Use generic little endian MIPS32 r2 configuration as default defconfig instead of ip22_defconfig FPU emulation: - Fix exception generation for certain R6 FPU instructions SMP: - Allow __cpu_number_map to be larger than NR_CPUS for sparse CPU id spaces Miscellaneous: - Add iomem resource for kernel bss section for kexec/kdump - Atomics: Nudge writes on bit unlock - DT files: Standardise "ok" -> "okay" Minor cleanups: - Define virt_to_pfn() - Make thread_saved_pc static - Simplify 32-bit sign extension in __read_64bit_c0_split() - DMA: Use vma_pages() helper - FPU emulation: Replace unsigned with unsigned int - MM: Removed unused lastpfn - Alchemy: Make clk_ops const - Lasat: Use setup_timer() helper - ralink: Use BIT() in MT7620 PCI driver Platform support: BMIPS: - Enable HARDIRQS_SW_RESEND Broadcom BCM63XX: - Add clkdev lookup support - Update clk driver, UART driver, DTs to handle named refclk from DTs - Split apart various clocks to more closely match hardware - Add ethernet clocks Cavium Octeon: - Remove usage of cvmx_wait() in favour of __delay() ImgTec Pistachio: - DT: Drop deprecated dwmmc num-slots property Ingenic JZ4780: - Add NFS root to Ci20 defconfig - Add watchdog to Ci20 DT & defconfig, and allow building of watchdog driver with this SoC Generic (multiplatform): - Migrate xilfpga (MIPSfpga) platform to the generic platform Lantiq xway: - Fix ASC0/ASC1 clocks" * tag 'mips_4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/mips: (46 commits) MIPS: Add iomem resource for kernel bss section. MIPS: cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN don't work for 32-bit SMP MIPS: BMIPS: Enable HARDIRQS_SW_RESEND MIPS: pci: Make use of the BIT() macro inside the mt7620 driver MIPS: pci: Remove KERN_WARN instance inside the mt7620 driver MIPS: pci: Remove duplicate define in mt7620 driver MIPS: ralink: Fix typo in mt7628 pinmux function MIPS: ralink: Fix MT7628 pinmux MIPS: Fix odd fp register warnings with MIPS64r2 watchdog: jz4780: Allow selection of jz4740-wdt driver MIPS/ptrace: Update syscall nr on register changes MIPS/ptrace: Pick up ptrace/seccomp changed syscalls MIPS: Fix an n32 core file generation regset support regression MIPS: Fix MIPS64 FP save/restore on 32-bit kernels MIPS: page.h: Define virt_to_pfn() MIPS: Xilfpga: Switch to using generic defconfigs MIPS: generic: Add support for MIPSfpga MIPS: Set defconfig target to a generic system for 32r2el MIPS: Kconfig: Set default MIPS system type as generic MIPS: DTS: Remove num-slots from Pistachio SoC ...
2017-11-13  MIPS: cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN don't work for 32-bit SMP  (Ben Hutchings)
__cmpxchg64_local_generic() is atomic only w.r.t tasks and interrupts on the same CPU (that's what the 'local' means). We can't use it to implement cmpxchg64() in SMP configurations. So, for 32-bit SMP configurations: - Don't define cmpxchg64() - Don't enable HAVE_VIRT_CPU_ACCOUNTING_GEN, which requires it Fixes: e2093c7b03c1 ("MIPS: Fall back to generic implementation of ...") Fixes: bb877e96bea1 ("MIPS: Add support for full dynticks CPU time accounting") Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Deng-Cheng Zhu <dengcheng.zhu@mips.com> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 4.1+ Patchwork: https://patchwork.linux-mips.org/patch/17413/ Signed-off-by: James Hogan <jhogan@kernel.org>
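A sketch of the resulting arrangement (the config guards shown are illustrative):

	#ifdef CONFIG_64BIT
	/* a native 8-byte cmpxchg() exists, so cmpxchg64() can use it */
	#define cmpxchg64(ptr, o, n)	cmpxchg((ptr), (o), (n))
	#elif !defined(CONFIG_SMP)
	/* UP only: the interrupt-disabling _local variant is sufficient */
	#define cmpxchg64(ptr, o, n)	cmpxchg64_local((ptr), (o), (n))
	#endif
	/* 32-bit SMP: no cmpxchg64(), so HAVE_VIRT_CPU_ACCOUNTING_GEN stays off */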
2017-10-09  MIPS: Fix cmpxchg on 32b signed ints for 64b kernel with !kernel_uses_llsc  (Paul Burton)
Commit 8263db4d7768 ("MIPS: cmpxchg: Implement __cmpxchg() as a function") refactored our implementation of __cmpxchg() to be a function rather than a macro, with the aim of making it easier to read & modify. Unfortunately the commit breaks use of cmpxchg() for signed 32 bit values when we have a 64 bit kernel with kernel_uses_llsc == false, because: - In cmpxchg_local() we cast the old value to the type the pointer points to, and then to an unsigned long. If the pointer points to a signed type smaller than 64 bits then the old value will be sign extended to 64 bits. That is, bits beyond the size of the pointed to type will be set to 1 if the old value is negative. In the case of a signed 32 bit integer with a negative value, bits 63:32 will all be set. - In __cmpxchg_asm() we load the value from memory, ie. dereference the pointer, and store the value as an unsigned integer (__ret) whose size matches the pointer. For a 32 bit cmpxchg() this means we store the value in a u32, because the pointer provided to __cmpxchg_asm() by __cmpxchg() is of type volatile u32 *. - __cmpxchg_asm() then checks whether the value in memory (__ret) matches the provided old value, by comparing the two values. This results in the u32 being promoted to a 64 bit unsigned long to match the old argument - however because both types are unsigned the value is zero extended, which does not match the sign extension performed on the old value in cmpxchg_local() earlier. This mismatch means that unfortunate cmpxchg() calls can incorrectly fail for 64 bit kernels with kernel_uses_llsc == false. This is the case on at least non-SMP Cavium Octeon kernels, which hardcode kernel_uses_llsc in their cpu-feature-overrides.h header. Using a v4.13-rc7 kernel configured using cavium_octeon_defconfig with SMP manually disabled, this presents itself as oddity when we reach userland - for example: can't run '/bin/mount': Text file busy can't run '/bin/mkdir': Text file busy can't run '/bin/mkdir': Text file busy can't run '/bin/mount': Text file busy can't run '/bin/hostname': Text file busy can't run '/etc/init.d/rcS': Text file busy can't run '/sbin/getty': Text file busy can't run '/sbin/getty': Text file busy It appears that some part of the init process, which is in this case buildroot's busybox init, is running successfully. It never manages to reach the login prompt though, and complains about /sbin/getty being busy repeatedly and indefinitely. Fix this by casting the old value provided to __cmpxchg_asm() to an appropriately sized unsigned integer, such that we consistently zero-extend avoiding the mismatch. The __cmpxchg_small() case for 8 & 16 bit values is unaffected because __cmpxchg_small() already masks provided values appropriately. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: 8263db4d7768 ("MIPS: cmpxchg: Implement __cmpxchg() as a function") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17226/ Cc: linux-mips@linux-mips.org Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
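The mismatch is plain C integer conversion; a small self-contained demonstration (ordinary userspace code, not the kernel sources):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int old = -1;			/* signed 32-bit "old" value */
		uint32_t mem = 0xffffffff;	/* same bit pattern in memory */

		/* cmpxchg_local(): cast through the (signed) pointed-to type,
		 * then to unsigned long -> sign extension on a 64-bit kernel */
		unsigned long o = (unsigned long)(int)old;

		/* __cmpxchg_asm(): the loaded u32 promotes to unsigned long
		 * for the comparison -> zero extension */
		unsigned long m = (unsigned long)mem;

		printf("old=%#lx mem=%#lx equal=%d\n", o, m, o == m);
		return 0;	/* prints equal=0 on a 64-bit machine */
	}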
2017-06-29  MIPS: cmpxchg: Rearrange __xchg() arguments to match xchg()  (Paul Burton)
The __xchg() function declares its first 2 arguments in reverse order compared to the xchg() macro, which is confusing & serves no purpose. Reorder the arguments such that __xchg() & xchg() match. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16356/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29  MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()  (Paul Burton)
Implement support for 1 & 2 byte cmpxchg() using read-modify-write atop a 4 byte cmpxchg(). This allows us to support these atomic operations despite the MIPS ISA only providing 4 & 8 byte atomic operations. This is required in order to support queued rwlocks (qrwlock) in a later patch, since these make use of a 1 byte cmpxchg() in their slow path. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16355/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
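A simplified sketch of the read-modify-write emulation (the function name and details are illustrative; the real helper lives out of line):

	static inline unsigned long cmpxchg_small_sketch(volatile void *ptr,
							 unsigned long old,
							 unsigned long new,
							 unsigned int size)
	{
		volatile u32 *ptr32;
		u32 mask, old32, new32, load32, load;
		unsigned int shift;

		/* mask the inputs to the field width (1 or 2 bytes) */
		mask = GENMASK(size * BITS_PER_BYTE - 1, 0);
		old &= mask;
		new &= mask;

		/* locate the field inside its naturally aligned 32-bit word */
		ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3UL);
		shift = ((unsigned long)ptr & 0x3) * BITS_PER_BYTE;
		if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
			shift = 32 - size * BITS_PER_BYTE - shift;
		mask <<= shift;

		do {
			load32 = *ptr32;
			load = (load32 & mask) >> shift;
			if (load != old)
				return load;	/* comparison failed */

			old32 = load32;
			new32 = (load32 & ~mask) | (new << shift);
		} while (cmpxchg(ptr32, old32, new32) != old32);

		return old;
	}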
2017-06-29  MIPS: cmpxchg: Implement 1 byte & 2 byte xchg()  (Paul Burton)
Implement 1 & 2 byte xchg() using read-modify-write atop a 4 byte cmpxchg(). This allows us to support these atomic operations despite the MIPS ISA only providing for 4 & 8 byte atomic operations. This is required in order to support queued spinlocks (qspinlock) in a later patch, since these make use of a 2 byte xchg() in their slow path. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16354/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
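Correspondingly, a sketch of the xchg() flavour, which simply retries a 32-bit cmpxchg() on the containing word until it succeeds (again with illustrative naming):

	static inline unsigned long xchg_small_sketch(volatile void *ptr,
						      unsigned long val,
						      unsigned int size)
	{
		volatile u32 *ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3UL);
		unsigned int shift = ((unsigned long)ptr & 0x3) * BITS_PER_BYTE;
		u32 mask, old32, new32;

		if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
			shift = 32 - size * BITS_PER_BYTE - shift;
		mask = GENMASK(size * BITS_PER_BYTE - 1, 0) << shift;

		do {
			old32 = *ptr32;
			new32 = (old32 & ~mask) | ((val << shift) & mask);
		} while (cmpxchg(ptr32, old32, new32) != old32);

		/* return the previous value of the sub-word field */
		return (old32 & mask) >> shift;
	}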
2017-06-29  MIPS: cmpxchg: Implement __cmpxchg() as a function  (Paul Burton)
Replace the macro definition of __cmpxchg() with an inline function, which is easier to read & modify. The cmpxchg() & cmpxchg_local() macros are adjusted to call the new __cmpxchg() function. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16353/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29  MIPS: cmpxchg: Drop __xchg_u{32,64} functions  (Paul Burton)
The __xchg_u32() & __xchg_u64() functions now add very little value. This patch therefore removes them, by: - Moving memory barriers out of them & into xchg(), which also removes the duplication & readies us to support xchg_relaxed() if we wish to. - Calling __xchg_asm() directly from __xchg(). - Performing the check for CONFIG_64BIT being enabled in the size=8 case of __xchg(). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16352/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29  MIPS: cmpxchg: Error out on unsupported xchg() calls  (Paul Burton)
xchg() has up until now simply returned the x parameter in cases where it is called with a pointer to a value of an unsupported size. This will often cause the calling code to hit a failure path, presuming that the value of x differs from the content of the memory pointed at by ptr, but we can do better by producing a compile-time or link-time error such that unsupported calls to xchg() are detectable earlier than runtime. This patch does this in the same way as is already done for cmpxchg(), using a call to a missing function annotated with __compiletime_error(). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16351/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
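In sketch form, the declaration that turns a bad size into a compile-time diagnostic (the message text is illustrative):

	extern unsigned long __xchg_called_with_bad_pointer(void)
		__compiletime_error("Bad argument size for xchg");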
2017-06-29  MIPS: cmpxchg: Use __compiletime_error() for bad cmpxchg() pointers  (Paul Burton)
Our cmpxchg() implementation relies upon generating a call to a function which doesn't really exist (__cmpxchg_called_with_bad_pointer) to create a link failure in cases where cmpxchg() is called with a pointer to a value of an unsupported size. The __compiletime_error macro can be used to decorate a function such that a call to it generates a compile-time, rather than a link-time, error. This patch uses __compiletime_error to cause bad cmpxchg() calls to error out at compile time rather than link time, allowing errors to occur more quickly & making it easier to spot where the problem comes from. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16350/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29  MIPS: cmpxchg: Pull xchg() asm into a macro  (Paul Burton)
Use a macro to generate the 32 & 64 bit variants of the backing code for xchg(), much as is already done for cmpxchg(). This removes the duplication that could previously be found in __xchg_u32() & __xchg_u64(). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16349/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29  MIPS: cmpxchg: Unify R10000_LLSC_WAR & non-R10000_LLSC_WAR cases  (Paul Burton)
Prior to this patch the xchg & cmpxchg functions have duplicated code which is for all intents & purposes identical apart from use of a branch-likely instruction in the R10000_LLSC_WAR case & a regular branch instruction in the non-R10000_LLSC_WAR case. This patch removes the duplication, declaring a __scbeqz macro to select the branch instruction suitable for use when checking the result of an sc instruction & making use of it to unify the 2 cases. In __xchg_u{32,64}() this means writing the branch in asm, where it was previously being done in C as a do...while loop for the non-R10000_LLSC_WAR case. As this is a single instruction, and adds consistency with the R10000_LLSC_WAR cases & the cmpxchg() code, this seems worthwhile. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/16348/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-05-13  arch: Remove __ARCH_HAVE_CMPXCHG  (Thomas Gleixner)
We removed the only user of this define in the rtmutex code. Get rid of it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2015-04-01  MIPS: Fall back to generic implementation of cmpxchg64 on 32-bit platforms  (Deng-Cheng Zhu)
This is in preparation for adding HAVE_VIRT_CPU_ACCOUNTING_GEN support in the next patch. Without cmpxchg64 falling back to the generic implementation, kernel linking will complain: kernel/built-in.o: In function `cputime_adjust': cputime.c:(.text+0x33748): undefined reference to `__cmpxchg_called_with_bad_pointer' cputime.c:(.text+0x33810): undefined reference to `__cmpxchg_called_with_bad_pointer' Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com> Cc: linux-mips@linux-mips.org Cc: macro@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/9474/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-02-17  MIPS: asm: cmpxchg: Update ISA constraints for MIPS R6 support  (Markos Chandras)
MIPS R6 changed the opcodes for LL/SC instructions so we need to set the correct ISA. Cc: Matthew Fortune <Matthew.Fortune@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2015-02-17  MIPS: asm: Rename GCC_OFF12_ASM to GCC_OFF_SMALL_ASM  (Markos Chandras)
The GCC_OFF12_ASM macro is used for 12-bit immediate constraints, but we will also use it for 9-bit constraints on MIPS R6, so we rename it to something more appropriate. Cc: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-11-24  MIPS: Fix microMIPS LL/SC immediate offsets  (Maciej W. Rozycki)
In the microMIPS encoding some memory access instructions have their immediate offset reduced to 12 bits only. That does not match the GCC `R' constraint we use in some places to satisfy the requirement, resulting in build failures like this: {standard input}: Assembler messages: {standard input}:720: Error: macro used $at after ".set noat" {standard input}:720: Warning: macro instruction expanded into multiple instructions Fix the problem by defining a macro, `GCC_OFF12_ASM', that expands to the right constraint depending on whether microMIPS or standard MIPS code is produced. Also apply the fix where `m' is used, as in the worst case this change does nothing, e.g. where the pointer was already in a register such as a function argument and no further offset was requested, and in the best case it avoids an extraneous sequence of up to two instructions to load the high 20 bits of the address in the LL/SC loop. This reduces the risk of lock contention, which is higher the more instructions there are in the critical section between LL and SC. Strictly speaking we could just bulk-replace `R' with `ZC' as the latter constraint adjusts automatically depending on the ISA selected. However it was only introduced with GCC 4.9 and we keep supporting older compilers for the standard MIPS configuration, hence the slightly more complicated approach I chose. The choice of a zero-argument function-like rather than an object-like macro was made so that it does not look like a function call taking the C expression used for the constraint as an argument. This is so as not to confuse the reader or formatting checkers like `checkpatch.pl' and follows previous practice. Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com> Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/8482/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-31  MIPS: Fix gigaton of warning building with microMIPS.  (Ralf Baechle)
With binutils 2.24 the attempt to switch from microMIPS mode to MIPS III mode through .set mips3 results in *lots* of warnings like {standard input}: Assembler messages: {standard input}:397: Warning: the 64-bit MIPS architecture does not support the `smartmips' extension during a kernel build. Fixed by using .set arch=r4000 instead. This breaks support for building the kernel with binutils 2.13, which was supported for 32 bit kernels only anyway, and 2.14, which was a bad vintage for MIPS anyway. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2013-02-01  MIPS: Whitespace cleanup.  (Ralf Baechle)
Having received another series of whitespace patches I decided to do this once and for all rather than dealing with this kind of patches trickling in forever. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2012-07-19  MIPS: cmpxchg.h: Add missing include  (Aaro Koskinen)
Fix the following build breakage in v3.4-rc1: CC kernel/irq_work.o In file included from include/linux/irq_work.h:4:0, from kernel/irq_work.c:10: include/linux/llist.h: In function 'llist_del_all': include/linux/llist.h:178:2: error: implicit declaration of function 'BUILD_BUG_ON' [-Werror=implicit-function-declaration] Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/3568/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2012-03-28  Disintegrate asm/system.h for MIPS  (David Howells)
Disintegrate asm/system.h for MIPS. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> cc: linux-mips@linux-mips.org
2010-10-29  MIPS: Get rid of branches to .subsections.  (Ralf Baechle)
It was a nice optimization - on paper at least. In practice it results in branches that may exceed the maximum legal range for a branch. We can fight that problem with -ffunction-sections but -ffunction-sections again is incompatible with -pg used by the function tracer. By rewriting the loop around all simple LL/SC blocks to C we reduce the amount of inline assembler and at the same time allow GCC to often fill the branch delay slots with something sensible or whatever other clever optimization it may have up its sleeve. With this optimization gone we also no longer need -ffunction-sections, so drop it. This optimization was originally introduced in 2.6.21, commit 5999eca25c1fd4b9b9aca7833b04d10fe4bc877d (linux-mips.org) rsp. f65e4fa8e0c6022ad58dc88d1b11b12589ed7f9f (kernel.org). Original fix for the issues which caused me to pull this optimization by Paul Gortmaker <paul.gortmaker@windriver.com>. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
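The shape after the change, as an illustrative fragment (an atomic add, details abridged): the retry loop is ordinary C and only the LL/SC pair stays in assembly, leaving delay-slot filling and branch placement to the compiler and assembler:

	do {
		asm volatile(
		"	ll	%0, %1		# load-linked		\n"
		"	addu	%0, %2					\n"
		"	sc	%0, %1		# store-conditional	\n"
		: "=&r" (temp), "+m" (v->counter)
		: "r" (i));
	} while (unlikely(!temp));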
2010-04-30  MIPS: cmpxchg.h: Fix excessive indentation.  (Ralf Baechle)
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-02-27  MIPS: New macro smp_mb__before_llsc.  (David Daney)
Replace some instances of smp_llsc_mb() with a new macro smp_mb__before_llsc(). It is used before ll/sc sequences that are documented as needing write barrier semantics. The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(), so there are no changes in semantics. Also simplify definition of smp_mb(), smp_rmb(), and smp_wmb() to be just barrier() in the non-SMP case. Signed-off-by: David Daney <ddaney@caviumnetworks.com> To: linux-mips@linux-mips.org Patchwork: http://patchwork.linux-mips.org/patch/851/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
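A sketch of the default mapping described above (the actual barrier.h wording may differ):

	/* default: no stronger than the existing smp_llsc_mb() */
	#define smp_mb__before_llsc()	smp_llsc_mb()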
2009-09-17  MIPS: Allow kernel use of LL/SC to be separate from the presence of LL/SC.  (David Daney)
On some CPUs, it is more efficient to disable and enable interrupts in the kernel rather than use ll/sc for atomic operations. But if we were to set cpu_has_llsc to false, we would break the userspace futex interface (in asm/futex.h). We separate the two concepts, with a new predicate kernel_uses_llsc, that lets us disable the kernel's use of ll/sc while still allowing the futex code to use it. Also there were a couple of cases in bitops.h where we were using ll/sc unconditionally even if cpu_has_llsc were false. Signed-off-by: David Daney <ddaney@caviumnetworks.com> Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
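In sketch form, the new predicate defaults to the hardware feature bit but can be overridden per platform without affecting the futex path, which keeps using cpu_has_llsc (the exact cpu-features.h spelling may differ):

	#ifndef kernel_uses_llsc
	# define kernel_uses_llsc	cpu_has_llsc
	#endif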
2008-10-11  MIPS: Move headfiles to new location below arch/mips/include  (Ralf Baechle)
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>