Diffstat (limited to 'Documentation/core-api')

 -rw-r--r--  Documentation/core-api/genalloc.rst           | 28
 -rw-r--r--  Documentation/core-api/genericirq.rst         | 52
 -rw-r--r--  Documentation/core-api/memory-allocation.rst  | 50
 -rw-r--r--  Documentation/core-api/mm-api.rst             |  2
 -rw-r--r--  Documentation/core-api/printk-formats.rst     | 60
 -rw-r--r--  Documentation/core-api/refcount-vs-atomic.rst | 36
 -rw-r--r--  Documentation/core-api/symbol-namespaces.rst  |  3
 7 files changed, 137 insertions(+), 94 deletions(-)
diff --git a/Documentation/core-api/genalloc.rst b/Documentation/core-api/genalloc.rst
index 6b38a39fab24..a5af2cbf58a5 100644
--- a/Documentation/core-api/genalloc.rst
+++ b/Documentation/core-api/genalloc.rst
@@ -23,7 +23,7 @@ begins with the creation of a pool using one of:
.. kernel-doc:: lib/genalloc.c
:functions: devm_gen_pool_create
-A call to :c:func:`gen_pool_create` will create a pool. The granularity of
+A call to gen_pool_create() will create a pool. The granularity of
allocations is set with min_alloc_order; it is a log-base-2 number like
those used by the page allocator, but it refers to bytes rather than pages.
So, if min_alloc_order is passed as 3, then all allocations will be a
@@ -32,7 +32,7 @@ required to track the memory in the pool. The nid parameter specifies
which NUMA node should be used for the allocation of the housekeeping
structures; it can be -1 if the caller doesn't care.
-The "managed" interface :c:func:`devm_gen_pool_create` ties the pool to a
+The "managed" interface devm_gen_pool_create() ties the pool to a
specific device. Among other things, it will automatically clean up the
pool when the given device is destroyed.
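A minimal sketch of pool creation as described above (the error handling and the ``my_pool`` name are illustrative assumptions, not part of the genalloc API)::

    #include <linux/genalloc.h>

    /* Allocations will be multiples of 8 bytes (2^3); no NUMA preference */
    struct gen_pool *my_pool = gen_pool_create(3, -1);

    if (!my_pool)
        return -ENOMEM;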
@@ -53,32 +53,32 @@ to the pool. That can be done with one of:
:functions: gen_pool_add
.. kernel-doc:: lib/genalloc.c
- :functions: gen_pool_add_virt
+ :functions: gen_pool_add_owner
-A call to :c:func:`gen_pool_add` will place the size bytes of memory
+A call to gen_pool_add() will place the size bytes of memory
starting at addr (in the kernel's virtual address space) into the given
pool, once again using nid as the node ID for ancillary memory allocations.
-The :c:func:`gen_pool_add_virt` variant associates an explicit physical
+The gen_pool_add_virt() variant associates an explicit physical
address with the memory; this is only necessary if the pool will be used
for DMA allocations.
The functions for allocating memory from the pool (and putting it back)
are:
-.. kernel-doc:: lib/genalloc.c
+.. kernel-doc:: include/linux/genalloc.h
:functions: gen_pool_alloc
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_dma_alloc
.. kernel-doc:: lib/genalloc.c
- :functions: gen_pool_free
+ :functions: gen_pool_free_owner
-As one would expect, :c:func:`gen_pool_alloc` will allocate size< bytes
-from the given pool. The :c:func:`gen_pool_dma_alloc` variant allocates
+As one would expect, gen_pool_alloc() will allocate size bytes
+from the given pool. The gen_pool_dma_alloc() variant allocates
memory for use with DMA operations, returning the associated physical
address in the space pointed to by dma. This will only work if the memory
-was added with :c:func:`gen_pool_add_virt`. Note that this function
+was added with gen_pool_add_virt(). Note that this function
departs from the usual genpool pattern of using unsigned long values to
represent kernel addresses; it returns a void * instead.
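Putting the pieces above together, a hedged sketch (``vaddr`` is assumed to be a kernel virtual address the caller already owns)::

    #include <linux/genalloc.h>

    unsigned long addr;
    int ret;

    /* Place 1024 bytes starting at vaddr under the pool's control */
    ret = gen_pool_add(my_pool, vaddr, 1024, -1);
    if (ret)
        return ret;

    /* Carve out 64 bytes, then return them to the pool */
    addr = gen_pool_alloc(my_pool, 64);
    if (addr)
        gen_pool_free(my_pool, addr, 64);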
@@ -89,14 +89,14 @@ return. If that sort of control is needed, the following functions will be
of interest:
.. kernel-doc:: lib/genalloc.c
- :functions: gen_pool_alloc_algo
+ :functions: gen_pool_alloc_algo_owner
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_set_algo
-Allocations with :c:func:`gen_pool_alloc_algo` specify an algorithm to be
+Allocations with gen_pool_alloc_algo() specify an algorithm to be
used to choose the memory to be allocated; the default algorithm can be set
-with :c:func:`gen_pool_set_algo`. The data value is passed to the
+with gen_pool_set_algo(). The data value is passed to the
algorithm; most ignore it, but it is occasionally needed. One can,
naturally, write a special-purpose algorithm, but there is a fair set
already available:
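For instance, a sketch of an aligned allocation using one of the stock algorithms (the 64-byte alignment is only an example)::

    #include <linux/genalloc.h>

    struct genpool_data_align align_data = { .align = 64 };

    /* First-fit allocation of 256 bytes, aligned to 64 bytes */
    unsigned long addr = gen_pool_alloc_algo(my_pool, 256,
                                             gen_pool_first_fit_align,
                                             &align_data);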
@@ -129,7 +129,7 @@ writing of special-purpose memory allocators in the future.
:functions: gen_pool_for_each_chunk
.. kernel-doc:: lib/genalloc.c
- :functions: addr_in_gen_pool
+ :functions: gen_pool_has_addr
.. kernel-doc:: lib/genalloc.c
:functions: gen_pool_avail
diff --git a/Documentation/core-api/genericirq.rst b/Documentation/core-api/genericirq.rst
index 4da67b65cecf..8f06d885c310 100644
--- a/Documentation/core-api/genericirq.rst
+++ b/Documentation/core-api/genericirq.rst
@@ -26,7 +26,7 @@ Rationale
=========
The original implementation of interrupt handling in Linux uses the
-:c:func:`__do_IRQ` super-handler, which is able to deal with every type of
+__do_IRQ() super-handler, which is able to deal with every type of
interrupt logic.
Originally, Russell King identified different types of handlers to build
@@ -43,7 +43,7 @@ During the implementation we identified another type:
- Fast EOI type
-In the SMP world of the :c:func:`__do_IRQ` super-handler another type was
+In the SMP world of the __do_IRQ() super-handler another type was
identified:
- Per CPU type
@@ -83,7 +83,7 @@ IRQ-flow implementation for 'level type' interrupts and add a
(sub)architecture specific 'edge type' implementation.
To make the transition to the new model easier and prevent the breakage
-of existing implementations, the :c:func:`__do_IRQ` super-handler is still
+of existing implementations, the __do_IRQ() super-handler is still
available. This leads to a kind of duality for the time being. Over time
the new model should be used in more and more architectures, as it
enables smaller and cleaner IRQ subsystems. It's deprecated for three
@@ -116,7 +116,7 @@ status information and pointers to the interrupt flow method and the
interrupt chip structure which are assigned to this interrupt.
Whenever an interrupt triggers, the low-level architecture code calls
-into the generic interrupt code by calling :c:func:`desc->handle_irq`. This
+into the generic interrupt code by calling desc->handle_irq(). This
high-level IRQ handling function only uses desc->irq_data.chip
primitives referenced by the assigned chip descriptor structure.
@@ -125,27 +125,29 @@ High-level Driver API
The high-level Driver API consists of following functions:
-- :c:func:`request_irq`
+- request_irq()
-- :c:func:`free_irq`
+- request_threaded_irq()
-- :c:func:`disable_irq`
+- free_irq()
-- :c:func:`enable_irq`
+- disable_irq()
-- :c:func:`disable_irq_nosync` (SMP only)
+- enable_irq()
-- :c:func:`synchronize_irq` (SMP only)
+- disable_irq_nosync() (SMP only)
-- :c:func:`irq_set_irq_type`
+- synchronize_irq() (SMP only)
-- :c:func:`irq_set_irq_wake`
+- irq_set_irq_type()
-- :c:func:`irq_set_handler_data`
+- irq_set_irq_wake()
-- :c:func:`irq_set_chip`
+- irq_set_handler_data()
-- :c:func:`irq_set_chip_data`
+- irq_set_chip()
+
+- irq_set_chip_data()
See the autogenerated function documentation for details.
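As a rough usage sketch of the first two calls (the handler body, the "my_device" name and the ``dev`` cookie are made-up placeholders)::

    #include <linux/interrupt.h>

    static irqreturn_t my_handler(int irq, void *dev_id)
    {
        /* quiesce the device, then tell the core the IRQ was ours */
        return IRQ_HANDLED;
    }

    ret = request_irq(irq, my_handler, IRQF_SHARED, "my_device", dev);
    if (ret)
        return ret;

    /* ... later, on teardown ... */
    free_irq(irq, dev);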
@@ -154,19 +156,19 @@ High-level IRQ flow handlers
The generic layer provides a set of pre-defined irq-flow methods:
-- :c:func:`handle_level_irq`
+- handle_level_irq()
-- :c:func:`handle_edge_irq`
+- handle_edge_irq()
-- :c:func:`handle_fasteoi_irq`
+- handle_fasteoi_irq()
-- :c:func:`handle_simple_irq`
+- handle_simple_irq()
-- :c:func:`handle_percpu_irq`
+- handle_percpu_irq()
-- :c:func:`handle_edge_eoi_irq`
+- handle_edge_eoi_irq()
-- :c:func:`handle_bad_irq`
+- handle_bad_irq()
The interrupt flow handlers (either pre-defined or architecture
specific) are assigned to specific interrupts by the architecture either
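By way of illustration, a typical assignment might look like this sketch (the chip structure is assumed to be defined elsewhere)::

    #include <linux/irq.h>

    /* Bind a chip and the level-type flow handler to an interrupt line */
    irq_set_chip_and_handler(irq, &my_irq_chip, handle_level_irq);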
@@ -325,14 +327,14 @@ Delayed interrupt disable
This per interrupt selectable feature, which was introduced by Russell
King in the ARM interrupt implementation, does not mask an interrupt at
-the hardware level when :c:func:`disable_irq` is called. The interrupt is kept
+the hardware level when disable_irq() is called. The interrupt is kept
enabled and is masked in the flow handler when an interrupt event
happens. This prevents losing edge interrupts on hardware which does not
store an edge interrupt event while the interrupt is disabled at the
hardware level. When an interrupt arrives while the IRQ_DISABLED flag
is set, then the interrupt is masked at the hardware level and the
IRQ_PENDING bit is set. When the interrupt is re-enabled by
-:c:func:`enable_irq` the pending bit is checked and if it is set, the interrupt
+enable_irq() the pending bit is checked and if it is set, the interrupt
is resent either via hardware or by a software resend mechanism. (It's
necessary to enable CONFIG_HARDIRQS_SW_RESEND when you want to use
the delayed interrupt disable feature and your hardware is not capable
@@ -369,7 +371,7 @@ handler(s) to use these basic units of low-level functionality.
__do_IRQ entry point
====================
-The original implementation :c:func:`__do_IRQ` was an alternative entry point
+The original implementation __do_IRQ() was an alternative entry point
for all types of interrupts. It no longer exists.
This handler turned out to be not suitable for all interrupt hardware
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 939e3dfc86e9..4aa82ddd01b8 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -88,10 +88,11 @@ Selecting memory allocator
==========================
The most straightforward way to allocate memory is to use a function
-from the :c:func:`kmalloc` family. And, to be on the safe size it's
-best to use routines that set memory to zero, like
-:c:func:`kzalloc`. If you need to allocate memory for an array, there
-are :c:func:`kmalloc_array` and :c:func:`kcalloc` helpers.
+from the kmalloc() family. And, to be on the safe side it's best to use
+routines that set memory to zero, like kzalloc(). If you need to
+allocate memory for an array, there are kmalloc_array() and kcalloc()
+helpers. The helpers struct_size(), array_size() and array3_size() can
+be used to safely calculate object sizes without overflowing.
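A brief sketch of those helpers (``struct foo`` and ``n`` are invented for the example)::

    #include <linux/slab.h>
    #include <linux/overflow.h>

    struct foo {
        int count;
        u32 items[];               /* flexible array member */
    };

    /* One zeroed object whose trailing array holds n elements */
    struct foo *f = kzalloc(struct_size(f, items, n), GFP_KERNEL);

    /* An array of n zeroed u32 values */
    u32 *arr = kcalloc(n, sizeof(*arr), GFP_KERNEL);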
The maximal size of a chunk that can be allocated with `kmalloc` is
limited. The actual limit depends on the hardware and the kernel
@@ -102,29 +103,26 @@ The address of a chunk allocated with `kmalloc` is aligned to at least
ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
alignment is also guaranteed to be at least the respective size.
-For large allocations you can use :c:func:`vmalloc` and
-:c:func:`vzalloc`, or directly request pages from the page
-allocator. The memory allocated by `vmalloc` and related functions is
-not physically contiguous.
+For large allocations you can use vmalloc() and vzalloc(), or directly
+request pages from the page allocator. The memory allocated by `vmalloc`
+and related functions is not physically contiguous.
If you are not sure whether the allocation size is too large for
-`kmalloc`, it is possible to use :c:func:`kvmalloc` and its
-derivatives. It will try to allocate memory with `kmalloc` and if the
-allocation fails it will be retried with `vmalloc`. There are
-restrictions on which GFP flags can be used with `kvmalloc`; please
-see :c:func:`kvmalloc_node` reference documentation. Note that
-`kvmalloc` may return memory that is not physically contiguous.
+`kmalloc`, it is possible to use kvmalloc() and its derivatives. It will
+try to allocate memory with `kmalloc` and if the allocation fails it
+will be retried with `vmalloc`. There are restrictions on which GFP
+flags can be used with `kvmalloc`; please see kvmalloc_node() reference
+documentation. Note that `kvmalloc` may return memory that is not
+physically contiguous.
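A hedged sketch of that fallback pattern::

    #include <linux/mm.h>

    void *buf = kvmalloc(size, GFP_KERNEL);  /* kmalloc, then vmalloc */
    if (!buf)
        return -ENOMEM;
    /* ... use buf ... */
    kvfree(buf);    /* works for either backing allocator */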
If you need to allocate many identical objects you can use the slab
-cache allocator. The cache should be set up with
-:c:func:`kmem_cache_create` or :c:func:`kmem_cache_create_usercopy`
-before it can be used. The second function should be used if a part of
-the cache might be copied to the userspace. After the cache is
-created :c:func:`kmem_cache_alloc` and its convenience wrappers can
-allocate memory from that cache.
-
-When the allocated memory is no longer needed it must be freed. You
-can use :c:func:`kvfree` for the memory allocated with `kmalloc`,
-`vmalloc` and `kvmalloc`. The slab caches should be freed with
-:c:func:`kmem_cache_free`. And don't forget to destroy the cache with
-:c:func:`kmem_cache_destroy`.
+cache allocator. The cache should be set up with kmem_cache_create() or
+kmem_cache_create_usercopy() before it can be used. The second function
+should be used if part of the cache might be copied to userspace.
+After the cache is created kmem_cache_alloc() and its convenience
+wrappers can allocate memory from that cache.
+
+When the allocated memory is no longer needed it must be freed. You can
+use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
+`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
+don't forget to destroy the cache with kmem_cache_destroy().
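The full cache lifecycle, as a sketch (the ``foo`` naming is assumed)::

    #include <linux/slab.h>

    static struct kmem_cache *foo_cache;

    foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
                                  SLAB_HWCACHE_ALIGN, NULL);
    if (!foo_cache)
        return -ENOMEM;

    struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
    /* ... */
    kmem_cache_free(foo_cache, f);
    kmem_cache_destroy(foo_cache);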
diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst
index 128e8a721c1e..be726986ff75 100644
--- a/Documentation/core-api/mm-api.rst
+++ b/Documentation/core-api/mm-api.rst
@@ -11,7 +11,7 @@ User Space Memory Access
.. kernel-doc:: arch/x86/lib/usercopy_32.c
:export:
-.. kernel-doc:: mm/util.c
+.. kernel-doc:: mm/gup.c
:functions: get_user_pages_fast
.. _mm-api-gfp-flags:
diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
index ecbebf4ca8e7..8ebe46b1af39 100644
--- a/Documentation/core-api/printk-formats.rst
+++ b/Documentation/core-api/printk-formats.rst
@@ -79,6 +79,18 @@ has the added benefit of providing a unique identifier. On 64-bit machines
the first 32 bits are zeroed. The kernel will print ``(ptrval)`` until it
gathers enough entropy. If you *really* want the address see %px below.
+Error Pointers
+--------------
+
+::
+
+ %pe -ENOSPC
+
+For printing error pointers (i.e. a pointer for which IS_ERR() is true)
+as a symbolic error name. Error values for which no symbolic name is
+known are printed in decimal, while a non-ERR_PTR passed as the
+argument to %pe gets treated as ordinary %p.
+
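A short illustrative use (the output in the comment is the expected form, not captured from a real run)::

    void *fw = ERR_PTR(-ENOSPC);           /* e.g. a failed firmware load */

    pr_err("loading failed: %pe\n", fw);   /* -> "loading failed: -ENOSPC" */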
Symbols/Function Pointers
-------------------------
@@ -86,8 +98,6 @@ Symbols/Function Pointers
%pS versatile_init+0x0/0x110
%ps versatile_init
- %pF versatile_init+0x0/0x110
- %pf versatile_init
%pSR versatile_init+0x9/0x110
(with __builtin_extract_return_addr() translation)
%pB prev_fn_of_versatile_init+0x88/0x88
@@ -97,14 +107,6 @@ The ``S`` and ``s`` specifiers are used for printing a pointer in symbolic
format. They result in the symbol name with (S) or without (s)
offsets. If KALLSYMS are disabled then the symbol address is printed instead.
-Note, that the ``F`` and ``f`` specifiers are identical to ``S`` (``s``)
-and thus deprecated. We have ``F`` and ``f`` because on ia64, ppc64 and
-parisc64 function pointers are indirect and, in fact, are function
-descriptors, which require additional dereferencing before we can lookup
-the symbol. As of now, ``S`` and ``s`` perform dereferencing on those
-platforms (when needed), so ``F`` and ``f`` exist for compatibility
-reasons only.
-
The ``B`` specifier results in the symbol name with offsets and should be
used when printing stack backtraces. The specifier takes into
consideration the effect of compiler optimisations which may occur
@@ -135,6 +137,20 @@ equivalent to %lx (or %lu). %px is preferred because it is more uniquely
grep'able. If in the future we need to modify the way the kernel handles
printing pointers we will be better equipped to find the call sites.
+Pointer Differences
+-------------------
+
+::
+
+ %td 2560
+ %tx a00
+
+For printing pointer differences, use the %t modifier for ptrdiff_t.
+
+Example::
+
+ printk("test: difference between pointers: %td\n", ptr2 - ptr1);
+
Struct Resources
----------------
@@ -428,6 +444,30 @@ Examples::
Passed by reference.
+Fwnode handles
+--------------
+
+::
+
+ %pfw[fP]
+
+For printing information on fwnode handles. The default is to print the full
+node name, including the path. The modifiers are functionally equivalent to
+%pOF above.
+
+ - f - full name of the node, including the path
+ - P - the name of the node including an address (if there is one)
+
+Examples (ACPI)::
+
+ %pfwf \_SB.PCI0.CIO2.port@1.endpoint@0 - Full node name
+ %pfwP endpoint@0 - Node name
+
+Examples (OF)::
+
+ %pfwf /ocp@68000000/i2c@48072000/camera@10/port/endpoint - Full name
+ %pfwP endpoint - Node name
+
Time and date (struct rtc_time)
-------------------------------
diff --git a/Documentation/core-api/refcount-vs-atomic.rst b/Documentation/core-api/refcount-vs-atomic.rst
index 976e85adffe8..79a009ce11df 100644
--- a/Documentation/core-api/refcount-vs-atomic.rst
+++ b/Documentation/core-api/refcount-vs-atomic.rst
@@ -35,7 +35,7 @@ atomics & refcounters only provide atomicity and
program order (po) relation (on the same CPU). It guarantees that
each ``atomic_*()`` and ``refcount_*()`` operation is atomic and instructions
are executed in program order on a single CPU.
-This is implemented using :c:func:`READ_ONCE`/:c:func:`WRITE_ONCE` and
+This is implemented using READ_ONCE()/WRITE_ONCE() and
compare-and-swap primitives.
A strong (full) memory ordering guarantees that all prior loads and
@@ -44,7 +44,7 @@ before any po-later instruction is executed on the same CPU.
It also guarantees that all po-earlier stores on the same CPU
and all propagated stores from other CPUs must propagate to all
other CPUs before any po-later instruction is executed on the original
-CPU (A-cumulative property). This is implemented using :c:func:`smp_mb`.
+CPU (A-cumulative property). This is implemented using smp_mb().
A RELEASE memory ordering guarantees that all prior loads and
stores (all po-earlier instructions) on the same CPU are completed
@@ -52,14 +52,14 @@ before the operation. It also guarantees that all po-earlier
stores on the same CPU and all propagated stores from other CPUs
must propagate to all other CPUs before the release operation
(A-cumulative property). This is implemented using
-:c:func:`smp_store_release`.
+smp_store_release().
An ACQUIRE memory ordering guarantees that all post loads and
stores (all po-later instructions) on the same CPU are
completed after the acquire operation. It also guarantees that all
po-later stores on the same CPU must propagate to all other CPUs
after the acquire operation executes. This is implemented using
-:c:func:`smp_acquire__after_ctrl_dep`.
+smp_acquire__after_ctrl_dep().
A control dependency (on success) for refcounters guarantees that
if a reference for an object was successfully obtained (reference
@@ -78,8 +78,8 @@ case 1) - non-"Read/Modify/Write" (RMW) ops
Function changes:
- * :c:func:`atomic_set` --> :c:func:`refcount_set`
- * :c:func:`atomic_read` --> :c:func:`refcount_read`
+ * atomic_set() --> refcount_set()
+ * atomic_read() --> refcount_read()
Memory ordering guarantee changes:
@@ -91,8 +91,8 @@ case 2) - increment-based ops that return no value
Function changes:
- * :c:func:`atomic_inc` --> :c:func:`refcount_inc`
- * :c:func:`atomic_add` --> :c:func:`refcount_add`
+ * atomic_inc() --> refcount_inc()
+ * atomic_add() --> refcount_add()
Memory ordering guarantee changes:
@@ -103,7 +103,7 @@ case 3) - decrement-based RMW ops that return no value
Function changes:
- * :c:func:`atomic_dec` --> :c:func:`refcount_dec`
+ * atomic_dec() --> refcount_dec()
Memory ordering guarantee changes:
@@ -115,8 +115,8 @@ case 4) - increment-based RMW ops that return a value
Function changes:
- * :c:func:`atomic_inc_not_zero` --> :c:func:`refcount_inc_not_zero`
- * no atomic counterpart --> :c:func:`refcount_add_not_zero`
+ * atomic_inc_not_zero() --> refcount_inc_not_zero()
+ * no atomic counterpart --> refcount_add_not_zero()
Memory ordering guarantees changes:
@@ -131,8 +131,8 @@ case 5) - generic dec/sub decrement-based RMW ops that return a value
Function changes:
- * :c:func:`atomic_dec_and_test` --> :c:func:`refcount_dec_and_test`
- * :c:func:`atomic_sub_and_test` --> :c:func:`refcount_sub_and_test`
+ * atomic_dec_and_test() --> refcount_dec_and_test()
+ * atomic_sub_and_test() --> refcount_sub_and_test()
Memory ordering guarantees changes:
@@ -144,14 +144,14 @@ case 6) other decrement-based RMW ops that return a value
Function changes:
- * no atomic counterpart --> :c:func:`refcount_dec_if_one`
+ * no atomic counterpart --> refcount_dec_if_one()
* ``atomic_add_unless(&var, -1, 1)`` --> ``refcount_dec_not_one(&var)``
Memory ordering guarantees changes:
* fully ordered --> RELEASE ordering + control dependency
-.. note:: :c:func:`atomic_add_unless` only provides full order on success.
+.. note:: atomic_add_unless() only provides full order on success.
case 7) - lock-based RMW
@@ -159,10 +159,10 @@ case 7) - lock-based RMW
Function changes:
- * :c:func:`atomic_dec_and_lock` --> :c:func:`refcount_dec_and_lock`
- * :c:func:`atomic_dec_and_mutex_lock` --> :c:func:`refcount_dec_and_mutex_lock`
+ * atomic_dec_and_lock() --> refcount_dec_and_lock()
+ * atomic_dec_and_mutex_lock() --> refcount_dec_and_mutex_lock()
Memory ordering guarantees changes:
* fully ordered --> RELEASE ordering + control dependency + hold
- :c:func:`spin_lock` on success
+ spin_lock() on success
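Taken together, a small sketch of the converted pattern (``struct foo`` and the kfree() on last put are hypothetical)::

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct foo {
        refcount_t refs;
    };

    refcount_set(&f->refs, 1);              /* was atomic_set() */
    refcount_inc(&f->refs);                 /* was atomic_inc() */

    if (refcount_dec_and_test(&f->refs))    /* was atomic_dec_and_test() */
        kfree(f);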
diff --git a/Documentation/core-api/symbol-namespaces.rst b/Documentation/core-api/symbol-namespaces.rst
index 982ed7b568ac..9b76337f6756 100644
--- a/Documentation/core-api/symbol-namespaces.rst
+++ b/Documentation/core-api/symbol-namespaces.rst
@@ -152,3 +152,6 @@ in-tree modules::
- notice the warning of modpost telling about a missing import
- run `make nsdeps` to add the import to the correct code location
+You can also run nsdeps for external module builds. A typical usage is::
+
+ $ make -C <path_to_kernel_src> M=$PWD nsdeps