Diffstat (limited to 'Documentation/kernel-hacking/locking.rst')
-rw-r--r-- Documentation/kernel-hacking/locking.rst | 28
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/Documentation/kernel-hacking/locking.rst b/Documentation/kernel-hacking/locking.rst
index e6cd40663ea5..dff0646a717b 100644
--- a/Documentation/kernel-hacking/locking.rst
+++ b/Documentation/kernel-hacking/locking.rst
@@ -295,7 +295,7 @@ Pete Zaitcev gives the following summary:
- If you are in a process context (any syscall) and want to lock other
processes out, use a mutex. You can take a mutex and sleep
- (``copy_from_user*(`` or ``kmalloc(x,GFP_KERNEL)``).
+ (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).
- Otherwise (== data can be touched in an interrupt), use
spin_lock_irqsave() and
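
A minimal sketch of the first, process-context case (the ``cache_mutex`` and
``cached_blob`` names are made up for illustration); sleeping inside
kmalloc(GFP_KERNEL) or copy_from_user() while the mutex is held is fine::

    /* Illustrative only: names and layout are hypothetical. */
    static DEFINE_MUTEX(cache_mutex);
    static void *cached_blob;

    static int store_blob(const void __user *uptr, size_t len)
    {
            void *new;

            mutex_lock(&cache_mutex);
            new = kmalloc(len, GFP_KERNEL);         /* may sleep: fine under a mutex */
            if (!new) {
                    mutex_unlock(&cache_mutex);
                    return -ENOMEM;
            }
            if (copy_from_user(new, uptr, len)) {   /* may sleep as well */
                    kfree(new);
                    mutex_unlock(&cache_mutex);
                    return -EFAULT;
            }
            kfree(cached_blob);
            cached_blob = new;
            mutex_unlock(&cache_mutex);
            return 0;
    }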
@@ -941,8 +941,7 @@ lock.
A classic problem here is when you provide callbacks or hooks: if you
call these with the lock held, you risk simple deadlock, or a deadly
-embrace (who knows what the callback will do?). Remember, the other
-programmers are out to get you, so don't do this.
+embrace (who knows what the callback will do?).
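
A common way out, sketched here with made-up names, is to copy whatever the
hook needs while the lock is held and then call it with no locks held::

    /* Illustrative only: dev_lock and the notify fields are hypothetical. */
    void (*hook)(void *);
    void *hook_data;

    spin_lock(&dev_lock);
    hook = dev->notify_hook;
    hook_data = dev->notify_data;
    spin_unlock(&dev_lock);

    if (hook)
            hook(hook_data);        /* lock dropped: the callback may take its own locks */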
Overzealous Prevention Of Deadlocks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -952,8 +951,6 @@ grabs a read lock, searches a list, fails to find what it wants, drops
the read lock, grabs a write lock and inserts the object has a race
condition.
-If you don't see why, please stay away from my code.
-
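The usual fix, sketched here with a hypothetical rwlock and search()/insert_new()
helpers, is to repeat the search after taking the write lock, because another
CPU may have inserted the object during the window where no lock was held::

    /* Illustrative only: obj_lock is an rwlock_t, search()/insert_new() are hypothetical. */
    struct foo *obj;

    read_lock(&obj_lock);
    obj = search(key);
    read_unlock(&obj_lock);

    if (!obj) {
            write_lock(&obj_lock);
            obj = search(key);      /* re-check: someone may have beaten us to it */
            if (!obj)
                    obj = insert_new(key);
            write_unlock(&obj_lock);
    }
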
Racing Timers: A Kernel Pastime
-------------------------------
@@ -970,7 +967,7 @@ you might do the following::
while (list) {
struct foo *next = list->next;
- del_timer(&list->timer);
+ timer_delete(&list->timer);
kfree(list);
list = next;
}
@@ -984,7 +981,7 @@ the lock after we spin_unlock_bh(), and then try to free
the element (which has already been freed!).
This can be avoided by checking the result of
-del_timer(): if it returns 1, the timer has been deleted.
+timer_delete(): if it returns 1, the timer has been deleted.
If 0, it means (in this case) that it is currently running, so we can
do::
@@ -993,7 +990,7 @@ do::
while (list) {
struct foo *next = list->next;
- if (!del_timer(&list->timer)) {
+ if (!timer_delete(&list->timer)) {
/* Give timer a chance to delete this */
spin_unlock_bh(&list_lock);
goto retry;
@@ -1008,9 +1005,12 @@ do::
Another common problem is deleting timers which restart themselves (by
calling add_timer() at the end of their timer function).
Because this is a fairly common case which is prone to races, you should
-use del_timer_sync() (``include/linux/timer.h``) to
-handle this case. It returns the number of times the timer had to be
-deleted before we finally stopped it from adding itself back in.
+use timer_delete_sync() (``include/linux/timer.h``) to handle this case.
+
+Before freeing a timer, timer_shutdown() or timer_shutdown_sync() should be
+called which will keep it from being rearmed. Any subsequent attempt to
+rearm the timer will be silently ignored by the core code.
+
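+For example, the teardown of an object containing a self-rearming timer might
+look roughly like this (``struct foo`` and its ``timer`` field are assumed from
+the earlier examples)::
+
+    /* Illustrative sketch. */
+    timer_shutdown_sync(&foo->timer);  /* waits for the handler; further rearming is ignored */
+    kfree(foo);
+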
Locking Speed
=============
@@ -1277,11 +1277,11 @@ Manfred Spraul points out that you can still do this, even if the data
is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done like so::
- spin_lock(&lock);
+ mutex_lock(&lock);
disable_irq(irq);
...
enable_irq(irq);
- spin_unlock(&lock);
+ mutex_unlock(&lock);
The disable_irq() prevents the irq handler from running
(and waits for it to finish if it's currently running on other CPUs).
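
Fleshed out with made-up names, the pattern might look like this; the irq
handler is the only thing touching ``event_count`` from interrupt context, so
it takes no lock of its own::

    /* Illustrative only: names are hypothetical. */
    static DEFINE_MUTEX(counter_mutex);
    static unsigned long event_count;

    static irqreturn_t count_irq(int irq, void *dev_id)
    {
            event_count++;          /* safe: user context disables this irq first */
            return IRQ_HANDLED;
    }

    static void reset_counter(int irq)
    {
            mutex_lock(&counter_mutex);     /* serializes user-context callers */
            disable_irq(irq);               /* handler can't run while we touch the data */
            event_count = 0;
            enable_irq(irq);
            mutex_unlock(&counter_mutex);
    }

Because a mutex rather than a spinlock is held here, it is fine that
disable_irq() may block while it waits for a running handler to finish.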
@@ -1338,7 +1338,7 @@ lock.
- kfree()
-- add_timer() and del_timer()
+- add_timer() and timer_delete()
Mutex API reference
===================