author    Will Deacon <will.deacon@arm.com> 2015-07-28 14:48:00 +0100
committer Will Deacon <will.deacon@arm.com> 2015-07-28 14:48:00 +0100
commit c1d7cd228b4b46eca1dbd9bb2c6053f477a1a6ff (patch)
tree   9a1fc2624b65b7a1a44a167d5b2e856cf7e1b1b6 /arch/arm64/include/asm/spinlock.h
parent 4150e50bf5f2171fbe7dfdbc7f2cdf44676b79a4 (diff)
arm64: spinlock: fix ll/sc unlock on big-endian systems
When unlocking a spinlock, we perform a read-modify-write on the owner
ticket in order to increment it and store it back with release
semantics.

In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
therefore store back the wrong halfword on a big-endian system,
corrupting the lock after the first unlock and killing the system dead.

This patch fixes the unlock code to use 16-bit accessors consistently.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'arch/arm64/include/asm/spinlock.h')
-rw-r--r--  arch/arm64/include/asm/spinlock.h  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 87ae7efa1211..c85e96d174a5 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -110,7 +110,7 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
asm volatile(ARM64_LSE_ATOMIC_INSN(
/* LL/SC */
- " ldr %w1, %0\n"
+ " ldrh %w1, %0\n"
" add %w1, %w1, #1\n"
" stlrh %w1, %0",
/* LSE atomics */