Diffstat (limited to 'Documentation/kernel-hacking/locking.rst')
-rw-r--r--  Documentation/kernel-hacking/locking.rst | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst
index f937c0fd11aa..519673df0e82 100644
--- a/Documentation/kernel-hacking/locking.rst
+++ b/Documentation/kernel-hacking/locking.rst
@@ -1,3 +1,5 @@
+.. _kernel_hacking_lock:
+
===========================
Unreliable Guide To Locking
===========================
@@ -177,7 +179,7 @@ perfect world).
Note that you can also use :c:func:`spin_lock_irq()` or
:c:func:`spin_lock_irqsave()` here, which stop hardware interrupts
-as well: see `Hard IRQ Context <#hardirq-context>`__.
+as well: see `Hard IRQ Context <#hard-irq-context>`__.
This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes :c:func:`local_bh_disable()`
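For context (not part of this patch), here is a minimal sketch of the pattern the
quoted paragraph describes: taking the irqsave variant because the data is also
touched from hard-IRQ context. The names my_lock, my_list and my_add are
illustrative only, not taken from the kernel source:

    #include <linux/spinlock.h>
    #include <linux/list.h>

    static DEFINE_SPINLOCK(my_lock);   /* illustrative lock name */
    static LIST_HEAD(my_list);         /* data also touched from IRQ context */

    /*
     * May be called from process, softirq or hard-IRQ context, so use
     * spin_lock_irqsave(), which disables hardware interrupts as well
     * (see the "Hard IRQ Context" section referenced above).
     */
    static void my_add(struct list_head *new)
    {
            unsigned long flags;

            spin_lock_irqsave(&my_lock, flags);
            list_add(new, &my_list);
            spin_unlock_irqrestore(&my_lock, flags);
    }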
@@ -228,7 +230,7 @@ The Same Softirq
~~~~~~~~~~~~~~~~
The same softirq can run on the other CPUs: you can use a per-CPU array
-(see `Per-CPU Data <#per-cpu>`__) for better performance. If you're
+(see `Per-CPU Data <#per-cpu-data>`__) for better performance. If you're
going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.
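Likewise, a hedged sketch of the per-CPU pattern the second hunk points to,
assuming a hypothetical statistic updated from softirq context; my_stat and
my_softirq_account are invented names used only for illustration:

    #include <linux/percpu.h>

    /* One copy of the counter per CPU; illustrative name. */
    static DEFINE_PER_CPU(unsigned long, my_stat);

    /*
     * Called from softirq context: the same softirq can run concurrently
     * on other CPUs, but each CPU only updates its own copy, so no lock
     * is needed for this counter.
     */
    static void my_softirq_account(void)
    {
            this_cpu_inc(my_stat);
    }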