author	Waiman Long <longman@redhat.com>	2018-04-26 11:34:27 +0100
committer	Ingo Molnar <mingo@kernel.org>	2018-04-27 09:48:53 +0200
commit	81d3dc9a349b1e61d77106bbb05a6e6dd29b9d5e (patch)
tree	448afcc709a1cdbce9b0af2df4f23418b04b5a76 /kernel/locking/qspinlock_paravirt.h
parent	ae75d9089ff7095d1d1a12c3cd86b21d3eaf3b15 (diff)
locking/qspinlock: Add stat tracking for pending vs. slowpath
Currently, the qspinlock_stat code tracks only statistical counts in the
PV qspinlock code. However, it may also be useful to track the number of
locking operations done via the pending code vs. the MCS lock queue
slowpath for the non-PV case. The qspinlock stat code is modified to do
that. The stat counter pv_lock_slowpath is renamed to lock_slowpath so
that it can be used by both the PV and non-PV cases.

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1524738868-31318-14-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
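For context on the hunk below: the stat helper qstat_inc(counter, cond) increments a counter only when the condition is true, so a single call site can count conditionally. The following stand-alone userspace sketch is illustrative only, not kernel code: the enum, the counter array, and the helper are stand-ins modelled on the kernel's qstat_* counters, and the qstat_lock_pending name is assumed here to stand for the pending-path counter used alongside lock_slowpath.

/*
 * Illustrative userspace model of the pending vs. slowpath counting
 * pattern described above.  Not kernel code: the identifiers below are
 * stand-ins for the kernel's qstat_* counters and qstat_inc() helper.
 */
#include <stdbool.h>
#include <stdio.h>

enum qstat {
	qstat_lock_pending,	/* acquired via the pending-bit path    */
	qstat_lock_slowpath,	/* acquired via the MCS lock queue path */
	qstat_num
};

static unsigned long qstats[qstat_num];

/* Bump a counter only when the condition holds, mirroring qstat_inc(stat, cond). */
static void qstat_inc(enum qstat stat, bool cond)
{
	if (cond)
		qstats[stat]++;
}

int main(void)
{
	/* Simulate ten lock acquisitions: seven take the pending path. */
	for (int i = 0; i < 10; i++) {
		bool pending = i < 7;

		qstat_inc(qstat_lock_pending, pending);
		qstat_inc(qstat_lock_slowpath, !pending);
	}

	printf("lock_pending:  %lu\n", qstats[qstat_lock_pending]);
	printf("lock_slowpath: %lu\n", qstats[qstat_lock_slowpath]);
	return 0;
}

Built with any C99 compiler, this prints 7 pending and 3 slowpath acquisitions for the simulated mix; the kernel, by contrast, exposes its counters through the qspinlock stat files in debugfs rather than printing them.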
Diffstat (limited to 'kernel/locking/qspinlock_paravirt.h')
-rw-r--r--	kernel/locking/qspinlock_paravirt.h	7
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 2dbad2f25480..25730b2ac022 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -56,11 +56,6 @@ struct pv_node {
 };
 
 /*
- * Include queued spinlock statistics code
- */
-#include "qspinlock_stat.h"
-
-/*
  * Hybrid PV queued/unfair lock
  *
  * By replacing the regular queued_spin_trylock() with the function below,
@@ -428,7 +423,7 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 	/*
 	 * Tracking # of slowpath locking operations
 	 */
-	qstat_inc(qstat_pv_lock_slowpath, true);
+	qstat_inc(qstat_lock_slowpath, true);
 
 	for (;; waitcnt++) {
 		/*