author | Nicholas Piggin <npiggin@gmail.com> | 2022-11-26 19:59:17 +1000
---|---|---
committer | Michael Ellerman <mpe@ellerman.id.au> | 2022-12-02 17:48:49 +1100
commit | 4c93c2e4b9e8988511c06b9c042f23d4b8f593ad |
tree | 7e8f54d808a17fd46eb79d5f43d228b6af175af0 /arch/powerpc/include/asm/qspinlock.h |
parent | 84990b169557428c318df87b7836cd15f65b62dc |
powerpc/qspinlock: use a half-word store to unlock to avoid larx/stcx.
The first 16 bits of the lock are only modified by the owner, and other
modifications always use atomic operations on the entire 32 bits, so
unlocks can use plain stores on the 16 bits. This is the same kind of
optimisation done by core qspinlock code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221126095932.1234527-3-npiggin@gmail.com
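For context on why a plain half-word store is safe here: contenders always operate on the full 32-bit word with atomic operations, while only the owner writes the 16-bit locked half, so the owner's store cannot race with a partial update. A minimal sketch of the lock-word layout this relies on, assuming field names from the companion qspinlock_types.h change (which is outside this diffstat):

```c
/*
 * Illustrative sketch only: the real layout lives in
 * arch/powerpc/include/asm/qspinlock_types.h (not shown in this
 * diffstat); field names and padding here are assumptions.
 */
typedef struct qspinlock {
	union {
		atomic_t val;			/* contenders: atomic ops on all 32 bits */
#ifdef __LITTLE_ENDIAN
		struct {
			u16	locked;		/* owner-only: plain store on unlock */
			u8	reserved[2];
		};
#else
		struct {
			u8	reserved[2];
			u16	locked;		/* overlays the same bits on BE */
		};
#endif
	};
} arch_spinlock_t;
```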
Diffstat (limited to 'arch/powerpc/include/asm/qspinlock.h')
-rw-r--r-- | arch/powerpc/include/asm/qspinlock.h | 6 |
1 file changed, 1 insertion, 5 deletions
```diff
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index 6946dba5d087..713f6629f6fb 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -37,11 +37,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 
 static inline void queued_spin_unlock(struct qspinlock *lock)
 {
-	for (;;) {
-		int val = atomic_read(&lock->val);
-		if (atomic_cmpxchg_release(&lock->val, val, val & ~_Q_LOCKED_VAL) == val)
-			return;
-	}
+	smp_store_release(&lock->locked, 0);
 }
 
 #define arch_spin_is_locked(l)	queued_spin_is_locked(l)
```
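The commit message notes this is the same kind of optimisation done by the core qspinlock code. For comparison, the generic unlock fast path in include/asm-generic/qspinlock.h looks roughly like the following (paraphrased sketch; there the owner-only field is a single byte rather than a half-word):

```c
static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	/*
	 * Unlock needs release semantics but no atomic read-modify-write
	 * (larx/stcx. on powerpc): only the owner ever writes this field.
	 */
	smp_store_release(&lock->locked, 0);
}
```

Either way, the win on powerpc is that the cmpxchg loop removed above, which compiles to a larx/stcx. sequence, becomes a single store with release ordering.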