author     Boqun Feng <boqun.feng@gmail.com>  2021-10-25 22:54:16 +0800
committer  Paul E. McKenney <paulmck@kernel.org>  2021-11-30 17:47:08 -0800
commit     c438b7d860b4c1acb4ebff6d8d946d593ca5fe1e (patch)
tree       5e86d66c4635958dbc2b9ee277e6234a9f055fc1 /tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
parent     b47c05ecf60bd8743ad8c0ee510d3e1c060529d7 (diff)
tools/memory-model: litmus: Add two tests for unlock(A)+lock(B) ordering
The memory model has been updated to provide a stronger ordering guarantee for unlock(A)+lock(B) on the same CPU/thread. Therefore, add two litmus tests describing this new guarantee. These tests are simple yet clearly show the usage of the new guarantee, and they also serve as self-tests for the modification to the model.

Co-developed-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
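For context, here is a minimal kernel-style sketch (not part of this patch; the lock and variable names are illustrative) of the unlock(A)+lock(B) pattern the strengthened guarantee covers; it mirrors process P0 of the litmus test added below:

	/*
	 * Two critical sections on the same CPU, protected by different
	 * locks.  Under the strengthened model, the READ_ONCE() in the
	 * first section is ordered before the WRITE_ONCE() in the second,
	 * although a write followed by a read could still appear reordered
	 * from another CPU's viewpoint (TSO-style reordering).
	 */
	spin_lock(&lock_a);
	r1 = READ_ONCE(*x);	/* access in the first critical section */
	spin_unlock(&lock_a);

	spin_lock(&lock_b);	/* different lock, same CPU/thread */
	WRITE_ONCE(*y, 1);	/* access in the second critical section */
	spin_unlock(&lock_b);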
Diffstat (limited to 'tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus')
-rw-r--r--  tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus  35
1 file changed, 35 insertions(+), 0 deletions(-)
diff --git a/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus b/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
new file mode 100644
index 000000000000..eb34123a6ffe
--- /dev/null
+++ b/tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
@@ -0,0 +1,35 @@
+C LB+unlocklockonceonce+poacquireonce
+
+(*
+ * Result: Never
+ *
+ * If two locked critical sections execute on the same CPU, all accesses
+ * in the first must execute before any accesses in the second, even if the
+ * critical sections are protected by different locks. Note: Even when a
+ * write executes before a read, their memory effects can be reordered from
+ * the viewpoint of another CPU (the kind of reordering allowed by TSO).
+ *)
+
+{}
+
+P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
+{
+ int r1;
+
+ spin_lock(s);
+ r1 = READ_ONCE(*x);
+ spin_unlock(s);
+ spin_lock(t);
+ WRITE_ONCE(*y, 1);
+ spin_unlock(t);
+}
+
+P1(int *x, int *y)
+{
+ int r2;
+
+ r2 = smp_load_acquire(y);
+ WRITE_ONCE(*x, 1);
+}
+
+exists (0:r1=1 /\ 1:r2=1)
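To check the test, it can be run through herd7 with the Linux-kernel memory model, as described in tools/memory-model/README (the command below assumes it is run from the tools/memory-model directory; exact output may vary between herd7 versions):

	$ herd7 -conf linux-kernel.cfg litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus

With the updated model, the "exists" condition should never be satisfied, i.e. herd7's "Observation" line should report "Never", matching the "Result: Never" comment in the test header.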