author     Will Deacon <will.deacon@arm.com>    2017-10-03 19:25:27 +0100
committer  Ingo Molnar <mingo@kernel.org>       2017-10-10 11:50:18 +0200
commit     a8a217c22116eff6c120d753c9934089fb229af0 (patch)
tree       d56f1283ba7153f9ead9fb92c1a1794143c6e4a3 /arch/blackfin
parent     26c4eb192c6224e5297496cead36404b62fb071b (diff)
locking/core: Remove {read,spin,write}_can_lock()
Outside of the locking code itself, {read,spin,write}_can_lock() have no
users in tree. Apparmor (the last remaining user of write_can_lock()) got
moved over to lockdep by the previous patch.

This patch removes the use of {read,spin,write}_can_lock() from the
BUILD_LOCK_OPS macro, deferring to the trylock operation for testing the
lock status, and subsequently removes the unused macros altogether. They
aren't guaranteed to work in a concurrent environment and can give
incorrect results in the case of qrwlock.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1507055129-12300-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
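For context, the sketch below (ordinary userspace C, not the kernel's
BUILD_LOCK_OPS code; the toy_* names and the atomic_int-based lock are
invented purely for illustration) shows the difference between a separate
can_lock()-style status probe and folding the status test into the trylock
itself, which is what "deferring to the trylock operation for testing the
lock status" refers to:

/*
 * Illustrative sketch only - not the kernel's implementation.
 * toy_spinlock_t, toy_can_lock(), toy_trylock() etc. are made-up names.
 */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
	atomic_int locked;	/* 0 == unlocked, 1 == locked */
} toy_spinlock_t;

static bool toy_can_lock(toy_spinlock_t *lock)
{
	/*
	 * Racy by construction: the answer can already be stale by the
	 * time the caller acts on it. This is the kind of query the
	 * commit removes.
	 */
	return atomic_load_explicit(&lock->locked, memory_order_relaxed) == 0;
}

static bool toy_trylock(toy_spinlock_t *lock)
{
	int expected = 0;

	/* Test and acquire in one atomic step - no check-then-act window. */
	return atomic_compare_exchange_strong_explicit(&lock->locked,
			&expected, 1,
			memory_order_acquire, memory_order_relaxed);
}

static void toy_unlock(toy_spinlock_t *lock)
{
	atomic_store_explicit(&lock->locked, 0, memory_order_release);
}

static void toy_lock(toy_spinlock_t *lock)
{
	/*
	 * Spin on the trylock itself rather than on a separate
	 * toy_can_lock() probe: the only way the lock status is observed
	 * is through an operation that also acquires the lock when free.
	 */
	while (!toy_trylock(lock))
		;	/* a cpu_relax()-style pause would go here */
}

The design point is that toy_can_lock() can only ever answer a question
about the past, whereas the trylock both tests and acquires atomically, so
its answer cannot go stale between the check and the acquisition.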
Diffstat (limited to 'arch/blackfin')
-rw-r--r--  arch/blackfin/include/asm/spinlock.h  10
1 file changed, 0 insertions(+), 10 deletions(-)
diff --git a/arch/blackfin/include/asm/spinlock.h b/arch/blackfin/include/asm/spinlock.h
index f6431439d15d..607ef98c0f6c 100644
--- a/arch/blackfin/include/asm/spinlock.h
+++ b/arch/blackfin/include/asm/spinlock.h
@@ -48,16 +48,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
__raw_spin_unlock_asm(&lock->lock);
}
-static inline int arch_read_can_lock(arch_rwlock_t *rw)
-{
- return __raw_uncached_fetch_asm(&rw->lock) > 0;
-}
-
-static inline int arch_write_can_lock(arch_rwlock_t *rw)
-{
- return __raw_uncached_fetch_asm(&rw->lock) == RW_LOCK_BIAS;
-}
-
static inline void arch_read_lock(arch_rwlock_t *rw)
{
__raw_read_lock_asm(&rw->lock);