Diffstat (limited to 'Documentation/mm')
-rw-r--r--  Documentation/mm/allocation-profiling.rst | 104
-rw-r--r--  Documentation/mm/arch_pgtable_helpers.rst | 10
-rw-r--r--  Documentation/mm/balance.rst | 2
-rw-r--r--  Documentation/mm/damon/design.rst | 426
-rw-r--r--  Documentation/mm/damon/index.rst | 34
-rw-r--r--  Documentation/mm/damon/maintainer-profile.rst | 99
-rw-r--r--  Documentation/mm/damon/monitoring_intervals_tuning_example.rst | 247
-rw-r--r--  Documentation/mm/hmm.rst | 12
-rw-r--r--  Documentation/mm/index.rst | 19
-rw-r--r--  Documentation/mm/page_migration.rst | 22
-rw-r--r--  Documentation/mm/page_table_check.rst | 9
-rw-r--r--  Documentation/mm/page_tables.rst | 2
-rw-r--r--  Documentation/mm/physical_memory.rst | 268
-rw-r--r--  Documentation/mm/process_addrs.rst | 862
-rw-r--r--  Documentation/mm/slub.rst | 9
-rw-r--r--  Documentation/mm/split_page_table_lock.rst | 12
-rw-r--r--  Documentation/mm/transhuge.rst | 45
-rw-r--r--  Documentation/mm/unevictable-lru.rst | 18
-rw-r--r--  Documentation/mm/vmalloced-kernel-stacks.rst | 18
-rw-r--r--  Documentation/mm/vmemmap_dedup.rst | 22
-rw-r--r--  Documentation/mm/z3fold.rst | 28
-rw-r--r--  Documentation/mm/zsmalloc.rst | 5
22 files changed, 2020 insertions, 253 deletions
diff --git a/Documentation/mm/allocation-profiling.rst b/Documentation/mm/allocation-profiling.rst
new file mode 100644
index 000000000000..316311240e6a
--- /dev/null
+++ b/Documentation/mm/allocation-profiling.rst
@@ -0,0 +1,104 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+MEMORY ALLOCATION PROFILING
+===========================
+
+Low overhead (suitable for production) accounting of all memory allocations,
+tracked by file and line number.
+
+Usage:
+kconfig options:
+- CONFIG_MEM_ALLOC_PROFILING
+
+- CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT
+
+- CONFIG_MEM_ALLOC_PROFILING_DEBUG
+ adds warnings for allocations that weren't accounted because of a
+ missing annotation
+
+Boot parameter:
+ sysctl.vm.mem_profiling={0|1|never}[,compressed]
+
+ When set to "never", memory allocation profiling overhead is minimized and it
+ cannot be enabled at runtime (sysctl becomes read-only).
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
+ When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
+ "compressed" optional parameter will try to store page tag references in a
+ compact format, avoiding page extensions. This results in improved performance
+ and memory consumption, however it might fail depending on system configuration.
+ If compression fails, a warning is issued and memory allocation profiling gets
+ disabled.
+
+sysctl:
+ /proc/sys/vm/mem_profiling
+
+Runtime info:
+ /proc/allocinfo
+
+Example output::
+
+ root@moria-kvm:~# sort -g /proc/allocinfo|tail|numfmt --to=iec
+ 2.8M 22648 fs/kernfs/dir.c:615 func:__kernfs_new_node
+ 3.8M 953 mm/memory.c:4214 func:alloc_anon_folio
+ 4.0M 1010 drivers/staging/ctagmod/ctagmod.c:20 [ctagmod] func:ctagmod_start
+ 4.1M 4 net/netfilter/nf_conntrack_core.c:2567 func:nf_ct_alloc_hashtable
+ 6.0M 1532 mm/filemap.c:1919 func:__filemap_get_folio
+ 8.8M 2785 kernel/fork.c:307 func:alloc_thread_stack_node
+ 13M 234 block/blk-mq.c:3421 func:blk_mq_alloc_rqs
+ 14M 3520 mm/mm_init.c:2530 func:alloc_large_system_hash
+ 15M 3656 mm/readahead.c:247 func:page_cache_ra_unbounded
+ 55M 4887 mm/slub.c:2259 func:alloc_slab_page
+ 122M 31168 mm/page_ext.c:270 func:alloc_page_ext
+
+Theory of operation
+===================
+
+Memory allocation profiling builds off of code tagging, which is a library for
+declaring static structs (that typically describe a file and line number in
+some way, hence code tagging) and then finding and operating on them at
+runtime - i.e. iterating over them to print them in debugfs/procfs.
+
+To add accounting for an allocation call, we replace it with a macro
+invocation, alloc_hooks(), that
+- declares a code tag
+- stashes a pointer to it in task_struct
+- calls the real allocation function
+- and finally, restores the task_struct alloc tag pointer to its previous value.
+
+This allows for alloc_hooks() calls to be nested, with the most recent one
+taking effect. This is important for allocations internal to the mm/ code that
+do not properly belong to the outer allocation context and should be counted
+separately: for example, slab object extension vectors, or when the slab
+allocates pages from the page allocator.
+
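+Conceptually, alloc_hooks() does something like the below. This is a
+simplified sketch of the idea only; the real macro in
+``include/linux/alloc_tag.h`` has more details, such as skipping the
+bookkeeping when profiling is disabled::
+
+  #define alloc_hooks(_do_alloc)                                          \
+  ({                                                                      \
+          DEFINE_ALLOC_TAG(_alloc_tag);  /* the static code tag */        \
+          /* stash a pointer to it in task_struct */                      \
+          struct alloc_tag *_old = alloc_tag_save(&_alloc_tag);           \
+          typeof(_do_alloc) _res = _do_alloc;  /* the real _noprof() call */ \
+          alloc_tag_restore(&_alloc_tag, _old);  /* restore previous tag */ \
+          _res;                                                           \
+  })
+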
+Thus, proper usage requires determining which function in an allocation call
+stack should be tagged. There are many helper functions that essentially wrap
+e.g. kmalloc() and do a little more work, then are called in multiple places;
+we'll generally want the accounting to happen in the callers of these helpers,
+not in the helpers themselves.
+
+To fix up a given helper, for example foo(), do the following:
+- switch its allocation call to the _noprof() version, e.g. kmalloc_noprof()
+
+- rename it to foo_noprof()
+
+- define a macro version of foo() like so:
+
+ #define foo(...) alloc_hooks(foo_noprof(__VA_ARGS__))
+
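+Putting these steps together for the hypothetical helper foo() could look like
+the below (illustrative only; error handling and kerneldoc omitted)::
+
+  /* Before: every caller's allocation is accounted to this kmalloc() line. */
+  void *foo(size_t len)
+  {
+          return kmalloc(len, GFP_KERNEL);
+  }
+
+  /* After: the allocation is accounted to each foo() call site instead. */
+  void *foo_noprof(size_t len)
+  {
+          return kmalloc_noprof(len, GFP_KERNEL);
+  }
+  #define foo(...)  alloc_hooks(foo_noprof(__VA_ARGS__))
+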
+It's also possible to stash a pointer to an alloc tag in your own data structures.
+
+Do this when you're implementing a generic data structure that does allocations
+"on behalf of" some other code - for example, the rhashtable code. This way,
+instead of seeing a large line in /proc/allocinfo for rhashtable.c, we can
+break it out by rhashtable type.
+
+To do so:
+- Hook your data structure's init function, like any other allocation function.
+
+- Within your init function, use the convenience macro alloc_tag_record() to
+ record alloc tag in your data structure.
+
+- Then, use the following form for your allocations:
+ alloc_hooks_tag(ht->your_saved_tag, kmalloc_noprof(...))
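+
+For illustration, the pattern could look like the below. ``struct foo_table``
+and its functions are hypothetical names used only for this sketch, not an
+existing kernel API::
+
+  struct foo_table {
+          struct alloc_tag *tag;  /* tag of the code that created this table */
+          /* ... other members ... */
+  };
+
+  /* Hook the init function, like any other allocation function. */
+  static inline int foo_table_init_noprof(struct foo_table *ht)
+  {
+          alloc_tag_record(ht->tag);  /* remember the creator's tag */
+          return 0;
+  }
+  #define foo_table_init(...)  alloc_hooks(foo_table_init_noprof(__VA_ARGS__))
+
+  /* Allocations made on behalf of the creator are charged to its tag. */
+  static inline void *foo_table_alloc(struct foo_table *ht, size_t size)
+  {
+          return alloc_hooks_tag(ht->tag, kmalloc_noprof(size, GFP_KERNEL));
+  }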
diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst
index 2466d3363af7..af245161d8e7 100644
--- a/Documentation/mm/arch_pgtable_helpers.rst
+++ b/Documentation/mm/arch_pgtable_helpers.rst
@@ -90,8 +90,6 @@ PMD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pmd_leaf | Tests a leaf mapped PMD |
+---------------------------+--------------------------------------------------+
-| pmd_huge | Tests a HugeTLB mapped PMD |
-+---------------------------+--------------------------------------------------+
| pmd_trans_huge | Tests a Transparent Huge Page (THP) at PMD |
+---------------------------+--------------------------------------------------+
| pmd_present | Tests whether pmd_page() points to valid memory |
@@ -140,7 +138,8 @@ PMD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pmd_swp_clear_soft_dirty | Clears a soft dirty swapped PMD |
+---------------------------+--------------------------------------------------+
-| pmd_mkinvalid | Invalidates a mapped PMD [1] |
+| pmd_mkinvalid | Invalidates a present PMD; do not call for |
+| | non-present PMD [1] |
+---------------------------+--------------------------------------------------+
| pmd_set_huge | Creates a PMD huge mapping |
+---------------------------+--------------------------------------------------+
@@ -168,8 +167,6 @@ PUD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pud_leaf | Tests a leaf mapped PUD |
+---------------------------+--------------------------------------------------+
-| pud_huge | Tests a HugeTLB mapped PUD |
-+---------------------------+--------------------------------------------------+
| pud_trans_huge | Tests a Transparent Huge Page (THP) at PUD |
+---------------------------+--------------------------------------------------+
| pud_present | Tests a valid mapped PUD |
@@ -196,7 +193,8 @@ PUD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pud_mkdevmap | Creates a ZONE_DEVICE mapped PUD |
+---------------------------+--------------------------------------------------+
-| pud_mkinvalid | Invalidates a mapped PUD [1] |
+| pud_mkinvalid | Invalidates a present PUD; do not call for |
+| | non-present PUD [1] |
+---------------------------+--------------------------------------------------+
| pud_set_huge | Creates a PUD huge mapping |
+---------------------------+--------------------------------------------------+
diff --git a/Documentation/mm/balance.rst b/Documentation/mm/balance.rst
index abaa78561c31..c4962c89a7f5 100644
--- a/Documentation/mm/balance.rst
+++ b/Documentation/mm/balance.rst
@@ -81,7 +81,7 @@ Page stealing from process memory and shm is done if stealing the page would
alleviate memory pressure on any zone in the page's node that has fallen below
its watermark.
-watemark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
+watermark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
are per-zone fields, used to determine when a zone needs to be balanced. When
the number of pages falls below watermark[WMARK_MIN], the hysteric field
low_on_memory gets set. This stays set till the number of free pages becomes
diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst
index 5620aab9b385..ddc50db3afa4 100644
--- a/Documentation/mm/damon/design.rst
+++ b/Documentation/mm/damon/design.rst
@@ -16,53 +16,24 @@ called DAMON ``context``. DAMON executes each context with a kernel thread
called ``kdamond``. Multiple kdamonds could run in parallel, for different
types of monitoring.
+To know how user-space can do the configurations and start/stop DAMON, refer to
+the :ref:`DAMON sysfs interface <sysfs_interface>` documentation.
+
Overall Architecture
====================
DAMON subsystem is configured with three layers including
-- Operations Set: Implements fundamental operations for DAMON that depends on
- the given monitoring target address-space and available set of
- software/hardware primitives,
-- Core: Implements core logics including monitoring overhead/accurach control
- and access-aware system operations on top of the operations set layer, and
-- Modules: Implements kernel modules for various purposes that provides
- interfaces for the user space, on top of the core layer.
-
-
-.. _damon_design_configurable_operations_set:
-
-Configurable Operations Set
----------------------------
-
-For data access monitoring and additional low level work, DAMON needs a set of
-implementations for specific operations that are dependent on and optimized for
-the given target address space. On the other hand, the accuracy and overhead
-tradeoff mechanism, which is the core logic of DAMON, is in the pure logic
-space. DAMON separates the two parts in different layers, namely DAMON
-Operations Set and DAMON Core Logics Layers, respectively. It further defines
-the interface between the layers to allow various operations sets to be
-configured with the core logic.
-
-Due to this design, users can extend DAMON for any address space by configuring
-the core logic to use the appropriate operations set. If any appropriate set
-is unavailable, users can implement one on their own.
-
-For example, physical memory, virtual memory, swap space, those for specific
-processes, NUMA nodes, files, and backing memory devices would be supportable.
-Also, if some architectures or devices supporting special optimized access
-check primitives, those will be easily configurable.
-
-
-Programmable Modules
---------------------
-
-Core layer of DAMON is implemented as a framework, and exposes its application
-programming interface to all kernel space components such as subsystems and
-modules. For common use cases of DAMON, DAMON subsystem provides kernel
-modules that built on top of the core layer using the API, which can be easily
-used by the user space end users.
+- :ref:`Operations Set <damon_operations_set>`: Implements fundamental
+  operations for DAMON that depend on the given monitoring target
+  address-space and available set of software/hardware primitives,
+- :ref:`Core <damon_core_logic>`: Implements core logics including monitoring
+  overhead/accuracy control and access-aware system operations on top of the
+  operations set layer, and
+- :ref:`Modules <damon_modules>`: Implements kernel modules for various
+  purposes that provide interfaces for the user space, on top of the core
+  layer.
.. _damon_operations_set:
@@ -70,11 +41,32 @@ used by the user space end users.
Operations Set Layer
====================
-The monitoring operations are defined in two parts:
+.. _damon_design_configurable_operations_set:
+
+For data access monitoring and additional low level work, DAMON needs a set of
+implementations for specific operations that are dependent on and optimized for
+the given target address space. For example, below two operations for access
+monitoring are address-space dependent.
1. Identification of the monitoring target address range for the address space.
2. Access check of specific address range in the target space.
+DAMON consolidates these implementations in a layer called DAMON Operations
+Set, and defines the interface between it and the upper layer. The upper layer
+is dedicated to DAMON's core logics, including the mechanism for control of the
+monitoring accuracy and the overhead.
+
+Hence, DAMON can easily be extended for any address space and/or available
+hardware features by configuring the core logic to use the appropriate
+operations set. If there is no available operations set for a given purpose, a
+new operations set can be implemented following the interface between the
+layers.
+
+For example, physical memory, virtual memory, swap space, those for specific
+processes, NUMA nodes, files, and backing memory devices would be supportable.
+Also, if some architectures or devices support special optimized access check
+features, those will be easily configurable.
+
DAMON currently provides below three operation sets. Below two subsections
describe how those work.
@@ -82,6 +74,10 @@ describe how those work.
- fvaddr: Monitor fixed virtual address ranges
- paddr: Monitor the physical address space of the system
+To know how user-space can do the configuration via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`operations <sysfs_context>` file part of the
+documentation.
+
.. _damon_design_vaddr_target_regions_construction:
@@ -140,9 +136,12 @@ conflict with the reclaim logic using ``PG_idle`` and ``PG_young`` page flags,
as Idle page tracking does.
+.. _damon_core_logic:
+
Core Logics
===========
+.. _damon_design_monitoring:
Monitoring
----------
@@ -152,6 +151,10 @@ monitoring attributes, ``sampling interval``, ``aggregation interval``,
``update interval``, ``minimum number of regions``, and ``maximum number of
regions``.
+To know how user-space can set the attributes via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`monitoring_attrs <sysfs_monitoring_attrs>`
+part of the documentation.
+
Access Frequency Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -192,7 +195,7 @@ one page in the region is required to be checked. Thus, for each ``sampling
interval``, DAMON randomly picks one page in each region, waits for one
``sampling interval``, checks whether the page is accessed meanwhile, and
increases the access frequency counter of the region if so. The counter is
-called ``nr_regions`` of the region. Therefore, the monitoring overhead is
+called ``nr_accesses`` of the region. Therefore, the monitoring overhead is
controllable by setting the number of regions. DAMON allows users to set the
minimum and the maximum number of regions for the trade-off.
@@ -200,6 +203,8 @@ This scheme, however, cannot preserve the quality of the output if the
assumption is not guaranteed.
+.. _damon_design_adaptive_regions_adjustment:
+
Adaptive Regions Adjustment
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -209,11 +214,18 @@ the data access pattern can be dynamically changed. This will result in low
monitoring quality. To keep the assumption as much as possible, DAMON
adaptively merges and splits each region based on their access frequency.
-For each ``aggregation interval``, it compares the access frequencies of
-adjacent regions and merges those if the frequency difference is small. Then,
-after it reports and clears the aggregated access frequency of each region, it
-splits each region into two or three regions if the total number of regions
-will not exceed the user-specified maximum number of regions after the split.
+For each ``aggregation interval``, it compares the access frequencies
+(``nr_accesses``) of adjacent regions. If the difference is small, and if the
+sum of the two regions' sizes is smaller than the size of total regions divided
+by the ``minimum number of regions``, DAMON merges the two regions. If the
+resulting number of total regions is still higher than ``maximum number of
+regions``, it repeats the merging with increasing access frequencies difference
+threshold until the upper-limit of the number of regions is met, or the
+threshold becomes higher than possible maximum value (``aggregation interval``
+divided by ``sampling interval``). Then, after it reports and clears the
+aggregated access frequency of each region, it splits each region into two or
+three regions if the total number of regions will not exceed the user-specified
+maximum number of regions after the split.
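+
+The merging part of this logic can be summarized with the below simplified
+sketch. It is illustrative only, with hypothetical names; the actual DAMON
+implementation differs in details, and the splitting part is not shown::
+
+  struct region { unsigned long sz; unsigned int nr_accesses; };
+
+  /* One pass: merge adjacent regions with similar nr_accesses, as long as
+   * the merged size stays below sz_total / min_nr. */
+  static int merge_pass(struct region *r, int nr, unsigned long sz_total,
+                        unsigned int min_nr, unsigned int threshold)
+  {
+          int i, out = 0;
+
+          for (i = 1; i < nr; i++) {
+                  unsigned int a = r[out].nr_accesses, b = r[i].nr_accesses;
+                  unsigned int diff = a > b ? a - b : b - a;
+
+                  if (diff <= threshold &&
+                      r[out].sz + r[i].sz < sz_total / min_nr) {
+                          /* size-weighted average keeps nr_accesses sane */
+                          r[out].nr_accesses = (a * r[out].sz + b * r[i].sz) /
+                                               (r[out].sz + r[i].sz);
+                          r[out].sz += r[i].sz;
+                  } else {
+                          r[++out] = r[i];
+                  }
+          }
+          return out + 1;
+  }
+
+  /* Repeat with an increasing threshold until the number of regions fits
+   * under max_nr, or the threshold exceeds aggregation interval divided by
+   * sampling interval (passed here as max_threshold). */
+  static int merge_regions(struct region *r, int nr, unsigned long sz_total,
+                           unsigned int min_nr, unsigned int max_nr,
+                           unsigned int max_threshold)
+  {
+          unsigned int threshold = 1;
+
+          do {
+                  nr = merge_pass(r, nr, sz_total, min_nr, threshold++);
+          } while (nr > max_nr && threshold <= max_threshold);
+          return nr;
+  }
+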
In this way, DAMON provides its best-effort quality and minimal overhead while
keeping the bounds users set for their trade-off.
@@ -248,6 +260,116 @@ and applies it to monitoring operations-related data structures such as the
abstracted monitoring target memory area only for each of a user-specified time
interval (``update interval``).
+User-space can get the monitoring results via DAMON sysfs interface and/or
+tracepoints. For more details, please refer to the documents for
+:ref:`DAMOS tried regions <sysfs_schemes_tried_regions>` and :ref:`tracepoint`,
+respectively.
+
+
+.. _damon_design_monitoring_params_tuning_guide:
+
+Monitoring Parameters Tuning Guide
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In short, set ``aggregation interval`` to capture a meaningful amount of accesses
+for the purpose. The amount of accesses can be measured using ``nr_accesses``
+and ``age`` of regions in the aggregated monitoring results snapshot. The
+default value of the interval, ``100ms``, turns out to be too short in many
+cases. Set ``sampling interval`` proportional to ``aggregation interval``. By
+default, ``1/20`` is recommended as the ratio.
+
+``Aggregation interval`` should be set to a time interval in which the workload
+can make a meaningful amount of accesses for the monitoring purpose. If the
+interval is too short, only a small number of accesses are captured. As a
+result, the monitoring results look like everything was accessed only rarely
+and to the same degree. For many purposes, that would be useless. If it is too
+long, however, the time to converge regions with the :ref:`regions adjustment
+mechanism <damon_design_adaptive_regions_adjustment>` can be too long,
+depending on the time scale of the given purpose. This could happen if the
+workload is actually making only rare accesses but the user has set the target
+amount of accesses for the monitoring purpose too high. For such cases, the
+target amount of accesses to capture per ``aggregation interval`` should be
+carefully reconsidered. Also, note that the captured amount of accesses is
+represented with not only ``nr_accesses``, but also ``age``. For example, even
+if every region in the monitoring results shows zero ``nr_accesses``, regions
+could still be distinguished using ``age`` values as the recency information.
+
+Hence the optimum value of ``aggregation interval`` depends on the access
+intensiveness of the workload. The user should tune the interval based on the
+amount of access that is captured in each aggregated snapshot of the
+monitoring results.
+
+Note that the default value of the interval is 100 milliseconds, which is too
+short in many cases, especially on large systems.
+
+``Sampling interval`` defines the resolution of each aggregation. If it is set
+too large, monitoring results will look like every region was equally rarely
+accessed, or equally frequently accessed. That is, regions become
+indistinguishable based on access pattern, and therefore the results will be
+useless in many use cases. If ``sampling interval`` is too small, it will not
+degrade the resolution, but will increase the monitoring overhead. If it is
+already small enough to provide a resolution of the monitoring results that is
+sufficient for the given purpose, it shouldn't be unnecessarily lowered
+further. It is recommended to be set proportional to ``aggregation interval``.
+By default, the ratio is set as ``1/20``, and it is still recommended.
+
+Based on the manual tuning guide, DAMON provides a more intuitive knob-based
+intervals auto-tuning mechanism. Please refer to :ref:`the design document of
+the feature <damon_design_monitoring_intervals_autotuning>` for details.
+
+Refer to below documents for an example tuning based on the above guide.
+
+.. toctree::
+ :maxdepth: 1
+
+ monitoring_intervals_tuning_example
+
+
+.. _damon_design_monitoring_intervals_autotuning:
+
+Monitoring Intervals Auto-tuning
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DAMON provides automatic tuning of the ``sampling interval`` and ``aggregation
+interval`` based on the :ref:`tuning guide idea
+<damon_design_monitoring_params_tuning_guide>`. The tuning mechanism allows
+users to set the aimed amount of access events to observe via DAMON within a
+given time interval. The target can be specified by the user as a ratio of
+DAMON-observed access events to the theoretical maximum amount of the events
+(``access_bp``), measured within a given number of aggregations (``aggrs``).
+
+The DAMON-observed access events are calculated in byte granularity based on
+DAMON :ref:`region assumption <damon_design_region_based_sampling>`. For
+example, if a region of size ``X`` bytes of ``Y`` ``nr_accesses`` is found, it
+means ``X * Y`` access events are observed by DAMON. Theoretical maximum
+access events for the region is calculated in same way, but replacing ``Y``
+with theoretical maximum ``nr_accesses``, which can be calculated as
+``aggregation interval / sampling interval``.
+
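+As an illustration of this calculation, consider the below sketch. The
+``struct region`` and ``access_events_bp()`` names are hypothetical, used only
+for this example rather than being DAMON's in-kernel code::
+
+  struct region {
+          unsigned long sz;          /* size of the region in bytes */
+          unsigned int nr_accesses;  /* as reported in the snapshot */
+  };
+
+  /* Return the observed access events ratio in bp (1/10,000). */
+  static unsigned long access_events_bp(const struct region *r, int nr,
+                                        unsigned long aggr_us,
+                                        unsigned long sampling_us)
+  {
+          unsigned long max_nr_accesses = aggr_us / sampling_us;
+          unsigned long long observed = 0, max_possible = 0;
+          int i;
+
+          for (i = 0; i < nr; i++) {
+                  observed += (unsigned long long)r[i].sz * r[i].nr_accesses;
+                  max_possible += (unsigned long long)r[i].sz * max_nr_accesses;
+          }
+          return max_possible ? observed * 10000 / max_possible : 0;
+  }
+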
+The mechanism calculates the ratio of access events for ``aggrs`` aggregations,
+and increases or decreases the ``sampling interval`` and ``aggregation
+interval`` in the same ratio, if the observed access ratio is lower or higher
+than the target, respectively. The ratio of the intervals change is decided in
+proportion to the distance between the current samples ratio and the target
+ratio.
+
+The user can further set the minimum and maximum ``sampling interval`` that can
+be set by the tuning mechanism, using two parameters (``min_sample_us`` and
+``max_sample_us``). Because the tuning mechanism always changes ``sampling
+interval`` and ``aggregation interval`` in the same ratio, the minimum and
+maximum ``aggregation interval`` after each of the tuning changes are
+automatically set together.
+
+The tuning is turned off by default, and needs to be enabled explicitly by the
+user. As a rule of thumb following the Pareto principle, a 4% access samples
+ratio target is recommended. Note that the Pareto principle (80/20 rule) is
+applied twice. That is, it assumes a 4% (20% of 20%) DAMON-observed access
+events ratio (source) to capture 64% (80% multiplied by 80%) of the real access
+events (outcomes).
+
+To know how user-space can use this feature via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`intervals_goal <sysfs_scheme>` part of
+the documentation.
+
.. _damon_design_damos:
@@ -288,6 +410,10 @@ the access pattern of interest, and applies the user-desired operation actions
to the regions, for every user-specified time interval called
``apply_interval``.
+To know how user-space can set ``apply_interval`` via :ref:`DAMON sysfs
+interface <sysfs_interface>`, refer to :ref:`apply_interval_us <sysfs_scheme>`
+part of the documentation.
+
.. _damon_design_damos_action:
@@ -325,6 +451,10 @@ that supports each action are as below.
Supported by ``paddr`` operations set.
- ``lru_deprio``: Deprioritize the region on its LRU lists.
Supported by ``paddr`` operations set.
+ - ``migrate_hot``: Migrate the regions prioritizing warmer regions.
+ Supported by ``paddr`` operations set.
+ - ``migrate_cold``: Migrate the regions prioritizing colder regions.
+ Supported by ``paddr`` operations set.
- ``stat``: Do nothing but count the statistics.
Supported by all operations sets.
@@ -332,6 +462,10 @@ Applying the actions except ``stat`` to a region is considered as changing the
region's characteristics. Hence, DAMOS resets the age of regions when any such
actions are applied to those.
+To know how user-space can set the action via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`action <sysfs_scheme>` part of the
+documentation.
+
.. _damon_design_damos_access_pattern:
@@ -345,6 +479,10 @@ interest by setting minimum and maximum values of the three properties. If a
region's three properties are in the ranges, DAMOS classifies it as one of the
regions that the scheme is having an interest in.
+To know how user-space can set the access pattern via :ref:`DAMON sysfs
+interface <sysfs_interface>`, refer to :ref:`access_pattern
+<sysfs_access_pattern>` part of the documentation.
+
.. _damon_design_damos_quotas:
@@ -364,6 +502,10 @@ feature called quotas. It lets users specify an upper limit of time that DAMOS
can use for applying the action, and/or a maximum bytes of memory regions that
the action can be applied within a user-specified time duration.
+To know how user-space can set the basic quotas via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`quotas <sysfs_quotas>` part of the
+documentation.
+
.. _damon_design_damos_quotas_prioritization:
@@ -391,6 +533,10 @@ information to the underlying mechanism. Nevertheless, how and even whether
the weight will be respected are up to the underlying prioritization mechanism
implementation.
+To know how user-space can set the prioritization weights via :ref:`DAMON sysfs
+interface <sysfs_interface>`, refer to :ref:`weights <sysfs_quotas>` part of
+the documentation.
+
.. _damon_design_damos_quotas_auto_tuning:
@@ -404,10 +550,10 @@ aggressiveness (the quota) of the corresponding scheme. For example, if DAMOS
is under achieving the goal, DAMOS automatically increases the quota. If DAMOS
is over achieving the goal, it decreases the quota.
-The goal can be specified with three parameters, namely ``target_metric``,
-``target_value``, and ``current_value``. The auto-tuning mechanism tries to
-make ``current_value`` of ``target_metric`` be same to ``target_value``.
-Currently, two ``target_metric`` are provided.
+The goal can be specified with four parameters, namely ``target_metric``,
+``target_value``, ``current_value`` and ``nid``. The auto-tuning mechanism
+tries to make ``current_value`` of ``target_metric`` be same to
+``target_value``.
- ``user_input``: User-provided value. Users could use any metric that they
has interest in for the value. Use space main workload's latency or
@@ -419,6 +565,15 @@ Currently, two ``target_metric`` are provided.
in microseconds that measured from last quota reset to next quota reset.
DAMOS does the measurement on its own, so only ``target_value`` need to be
set by users at the initial time. In other words, DAMOS does self-feedback.
+- ``node_mem_used_bp``: Specific NUMA node's used memory ratio in bp (1/10,000).
+- ``node_mem_free_bp``: Specific NUMA node's free memory ratio in bp (1/10,000).
+
+``nid`` is required only for ``node_mem_used_bp`` and ``node_mem_free_bp``, to
+point to the specific NUMA node.
+
+To know how user-space can set the tuning goal metric, the target value, and/or
+the current value via :ref:`DAMON sysfs interface <sysfs_interface>`, refer to
+:ref:`quota goals <sysfs_schemes_quota_goals>` part of the documentation.
.. _damon_design_damos_watermarks:
@@ -442,6 +597,10 @@ is activated. If all schemes are deactivated by the watermarks, the monitoring
is also deactivated. In this case, the DAMON worker thread only periodically
checks the watermarks and therefore incurs nearly zero overhead.
+To know how user-space can set the watermarks via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`watermarks <sysfs_watermarks>` part of the
+documentation.
+
.. _damon_design_damos_filters:
@@ -457,29 +616,121 @@ have a list of latency-critical processes.
To let users optimize DAMOS schemes with such special knowledge, DAMOS provides
a feature called DAMOS filters. The feature allows users to set an arbitrary
-number of filters for each scheme. Each filter specifies the type of target
-memory, and whether it should exclude the memory of the type (filter-out), or
-all except the memory of the type (filter-in).
-
-Currently, anonymous page, memory cgroup, address range, and DAMON monitoring
-target type filters are supported by the feature. Some filter target types
-require additional arguments. The memory cgroup filter type asks users to
-specify the file path of the memory cgroup for the filter. The address range
-type asks the start and end addresses of the range. The DAMON monitoring
-target type asks the index of the target from the context's monitoring targets
-list. Hence, users can apply specific schemes to only anonymous pages,
-non-anonymous pages, pages of specific cgroups, all pages excluding those of
-specific cgroups, pages in specific address range, pages in specific DAMON
-monitoring targets, and any combination of those.
-
-To handle filters efficiently, the address range and DAMON monitoring target
-type filters are handled by the core layer, while others are handled by
-operations set. If a memory region is filtered by a core layer-handled filter,
-it is not counted as the scheme has tried to the region. In contrast, if a
-memory regions is filtered by an operations set layer-handled filter, it is
-counted as the scheme has tried. The difference in accounting leads to changes
-in the statistics.
+number of filters for each scheme. Each filter specifies
+
+- a type of memory (``type``),
+- whether it is for the memory of the type or all except the type
+ (``matching``), and
+- whether it is to allow (include) or reject (exclude) applying
+ the scheme's action to the memory (``allow``).
+
+For efficient handling of filters, some types of filters are handled by the
+core layer, while others are handled by the operations set. In the latter
+case, hence, support of the filter types depends on the DAMON operations set.
+In case of the core layer-handled filters, the memory regions that are excluded
+by the filter are not counted as regions that the scheme has tried to be
+applied to. In contrast, if a memory region is filtered by an operations set
+layer-handled filter, it is counted as the scheme has tried. This difference
+affects the statistics.
+
+When multiple filters are installed, the group of filters that are handled by
+the core layer is evaluated first. After that, the group of filters that are
+handled by the operations layer is evaluated. Filters in each of the groups
+are evaluated in the installed order. If a part of memory is matched by one of
+the filters, the next filters are ignored. If the part passes through the
+filters evaluation stage because it is not matched by any of the filters,
+applying the scheme's action to it depends on the last filter's allowance type.
+If the last filter was for allowing, the part of memory will be rejected, and
+vice versa.
+
+For example, let's assume 1) a filter for allowing anonymous pages and 2)
+another filter for rejecting young pages are installed in that order. If a
+page of a region that is eligible for applying the scheme's action is an
+anonymous page, the scheme's action will be applied to the page regardless of
+whether it is young or not, since it matches the first allow-filter. If the
+page is not anonymous but young, the scheme's action will not be applied, since
+the second reject-filter blocks it. If the page is neither anonymous nor
+young, the page will pass through the filters evaluation stage since there is
+no matching filter, and the action will be applied to the page.
+
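+The evaluation order can be illustrated with the below simplified sketch
+(hypothetical names, not the in-kernel data structures; the core layer-handled
+filters are assumed to come first in the array, followed by the operations set
+layer-handled ones, each group in the installed order)::
+
+  struct filter {
+          bool (*matches)(const struct filter *filter, const void *memory);
+          bool allow;  /* true: allow-filter, false: reject-filter */
+  };
+
+  /* Return whether the scheme's action may be applied to the memory. */
+  static bool filters_allow(const struct filter *filters, int nr_filters,
+                            const void *memory)
+  {
+          int i;
+
+          for (i = 0; i < nr_filters; i++)
+                  if (filters[i].matches(&filters[i], memory))
+                          return filters[i].allow;  /* first match decides */
+
+          /* Nothing matched: the opposite of the last filter's type decides
+           * (allow if there are no filters at all). */
+          return nr_filters ? !filters[nr_filters - 1].allow : true;
+  }
+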
+Below ``type`` of filters are currently supported.
+
+- Core layer handled
+    - addr
+        - Applied to pages belonging to a given address range.
+    - target
+        - Applied to pages belonging to a given DAMON monitoring target.
+- Operations layer handled, supported only by the ``paddr`` operations set.
+    - anon
+        - Applied to pages containing data that is not stored in files.
+    - active
+        - Applied to active pages.
+    - memcg
+        - Applied to pages belonging to a given cgroup.
+    - young
+        - Applied to pages that are accessed after the last access check from the
+          scheme.
+    - hugepage_size
+        - Applied to pages managed in a given size range.
+    - unmapped
+        - Applied to pages that are not mapped.
+
+To know how user-space can set the filters via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`filters <sysfs_filters>` part of the
+documentation.
+
+.. _damon_design_damos_stat:
+
+Statistics
+~~~~~~~~~~
+
+DAMOS provides statistics of its behaviors, which are designed to help with
+monitoring, tuning and debugging of DAMOS.
+
+DAMOS accounts the below statistics for each scheme, from the beginning of the
+scheme's execution.
+
+- ``nr_tried``: Total number of regions that the scheme was tried to be
+  applied to.
+- ``sz_tried``: Total size of regions that the scheme was tried to be applied
+  to.
+- ``sz_ops_filter_passed``: Total bytes that passed operations set
+  layer-handled DAMOS filters.
+- ``nr_applied``: Total number of regions that the scheme was applied to.
+- ``sz_applied``: Total size of regions that the scheme was applied to.
+- ``qt_exceeds``: Total number of times the quota of the scheme has been
+  exceeded.
+
+"A scheme is tried to be applied to a region" means DAMOS core logic determined
+the region is eligible to apply the scheme's :ref:`action
+<damon_design_damos_action>`. The :ref:`access pattern
+<damon_design_damos_access_pattern>`, :ref:`quotas
+<damon_design_damos_quotas>`, :ref:`watermarks
+<damon_design_damos_watermarks>`, and :ref:`filters
+<damon_design_damos_filters>` that are handled by the core logic could affect
+this. The core logic will only ask the underlying :ref:`operation set
+<damon_operations_set>` to apply the action to the region, so whether the
+action is really applied or not is unclear. That's why it is called "tried".
+
+"A scheme is applied to a region" means the :ref:`operation set
+<damon_operations_set>` has applied the action to at least a part of the
+region. The :ref:`filters <damon_design_damos_filters>` that are handled by the
+operation set, and the types of the :ref:`action <damon_design_damos_action>`
+and the pages of the region can affect this. For example, if a filter is set
+to exclude anonymous pages and the region has only anonymous pages, or if the
+action is ``pageout`` while all pages of the region are unreclaimable, applying
+the action to the region will fail.
+
+To know how user-space can read the stats via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`stats <sysfs_stats>` part of the
+documentation.
+
+Regions Walking
+~~~~~~~~~~~~~~~
+
+A DAMOS feature that allows users to access each region that a DAMOS action
+has just been applied to. Using this feature, the DAMON :ref:`API
+<damon_design_api>` allows users to access the full properties of the regions,
+including the access monitoring results and the amount of the region's
+internal memory that passed the DAMOS filters. The :ref:`DAMON sysfs interface
+<sysfs_interface>` also allows users to read the data via special :ref:`files
+<sysfs_schemes_tried_regions>`.
+
+.. _damon_design_api:
Application Programming Interface
---------------------------------
@@ -493,6 +744,8 @@ interface, namely ``include/linux/damon.h``. Please refer to the API
:doc:`document </mm/damon/api>` for details of the interface.
+.. _damon_modules:
+
Modules
=======
@@ -512,25 +765,22 @@ General Purpose User Interface Modules
DAMON modules that provide user space ABIs for general purpose DAMON usage in
runtime.
-DAMON user interface modules, namely 'DAMON sysfs interface' and 'DAMON debugfs
-interface' are DAMON API user kernel modules that provide ABIs to the
-user-space. Please note that DAMON debugfs interface is currently deprecated.
-
-Like many other ABIs, the modules create files on sysfs and debugfs, allow
-users to specify their requests to and get the answers from DAMON by writing to
-and reading from the files. As a response to such I/O, DAMON user interface
-modules control DAMON and retrieve the results as user requested via the DAMON
-API, and return the results to the user-space.
+Like many other ABIs, the modules create files on pseudo file systems like
+'sysfs', allow users to specify their requests to and get the answers from
+DAMON by writing to and reading from the files. As a response to such I/O,
+DAMON user interface modules control DAMON and retrieve the results as user
+requested via the DAMON API, and return the results to the user-space.
The ABIs are designed to be used for user space applications development,
rather than human beings' fingers. Human users are recommended to use such
user space tools. One such Python-written user space tool is available at
-Github (https://github.com/awslabs/damo), Pypi
+Github (https://github.com/damonitor/damo), Pypi
(https://pypistats.org/packages/damo), and Fedora
(https://packages.fedoraproject.org/pkgs/python-damo/damo/).
-Please refer to the ABI :doc:`document </admin-guide/mm/damon/usage>` for
-details of the interfaces.
+Currently, one module for this type, namely 'DAMON sysfs interface' is
+available. Please refer to the ABI :ref:`doc <sysfs_interface>` for details of
+the interfaces.
Special-Purpose Access-aware Kernel Modules
@@ -538,8 +788,8 @@ Special-Purpose Access-aware Kernel Modules
DAMON modules that provide user space ABI for specific purpose DAMON usage.
-DAMON sysfs/debugfs user interfaces are for full control of all DAMON features
-in runtime. For each special-purpose system-wide data access-aware system
+DAMON user interface modules are for full control of all DAMON features in
+runtime. For each special-purpose system-wide data access-aware system
operations such as proactive reclamation or LRU lists balancing, the interfaces
could be simplified by removing unnecessary knobs for the specific purpose, and
extended for boot-time and even compile time control. Default values of DAMON
diff --git a/Documentation/mm/damon/index.rst b/Documentation/mm/damon/index.rst
index 5e0a50583500..31c1fa955b3d 100644
--- a/Documentation/mm/damon/index.rst
+++ b/Documentation/mm/damon/index.rst
@@ -1,12 +1,12 @@
.. SPDX-License-Identifier: GPL-2.0
-==========================
-DAMON: Data Access MONitor
-==========================
+================================================================
+DAMON: Data Access MONitoring and Access-aware System Operations
+================================================================
DAMON is a Linux kernel subsystem that provides a framework for data access
monitoring and the monitoring results based system operations. The core
-monitoring mechanisms of DAMON (refer to :doc:`design` for the detail) make it
+monitoring :ref:`mechanisms <damon_design_monitoring>` of DAMON make it
- *accurate* (the monitoring output is useful enough for DRAM level memory
management; It might not appropriate for CPU Cache levels, though),
@@ -16,15 +16,16 @@ monitoring mechanisms of DAMON (refer to :doc:`design` for the detail) make it
of the size of target workloads).
Using this framework, therefore, the kernel can operate system in an
-access-aware fashion. Because the features are also exposed to the user space,
-users who have special information about their workloads can write personalized
-applications for better understanding and optimizations of their workloads and
-systems.
+access-aware fashion. Because the features are also exposed to the :doc:`user
+space </admin-guide/mm/damon/index>`, users who have special information about
+their workloads can write personalized applications for better understanding
+and optimizations of their workloads and systems.
-For easier development of such systems, DAMON provides a feature called DAMOS
-(DAMon-based Operation Schemes) in addition to the monitoring. Using the
-feature, DAMON users in both kernel and user spaces can do access-aware system
-operations with no code but simple configurations.
+For easier development of such systems, DAMON provides a feature called
+:ref:`DAMOS <damon_design_damos>` (DAMon-based Operation Schemes) in addition
+to the monitoring. Using the feature, DAMON users in both kernel and :doc:`user
+spaces </admin-guide/mm/damon/index>` can do access-aware system operations
+with no code but simple configurations.
.. toctree::
:maxdepth: 2
@@ -33,3 +34,12 @@ operations with no code but simple configurations.
design
api
maintainer-profile
+
+To utilize and control DAMON from the user-space, please refer to the
+administration :doc:`guide </admin-guide/mm/damon/index>`.
+
+If you prefer academic papers for reading and citations, please use the papers
+from `HPDC'22 <https://dl.acm.org/doi/abs/10.1145/3502181.3531466>`_ and
+`Middleware19 Industry <https://dl.acm.org/doi/abs/10.1145/3366626.3368125>`_ .
+Note that those cover DAMON implementations in Linux v5.16 and v5.15,
+respectively.
diff --git a/Documentation/mm/damon/maintainer-profile.rst b/Documentation/mm/damon/maintainer-profile.rst
index 5a306e4de22e..ce3e98458339 100644
--- a/Documentation/mm/damon/maintainer-profile.rst
+++ b/Documentation/mm/damon/maintainer-profile.rst
@@ -7,22 +7,27 @@ The DAMON subsystem covers the files that are listed in 'DATA ACCESS MONITOR'
section of 'MAINTAINERS' file.
The mailing lists for the subsystem are damon@lists.linux.dev and
-linux-mm@kvack.org. Patches should be made against the mm-unstable tree [1]_
-whenever possible and posted to the mailing lists.
+linux-mm@kvack.org. Patches should be made against the `mm-unstable tree
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ whenever possible and posted
+to the mailing lists.
SCM Trees
---------
There are multiple Linux trees for DAMON development. Patches under
-development or testing are queued in damon/next [2]_ by the DAMON maintainer.
-Sufficiently reviewed patches will be queued in mm-unstable [1]_ by the memory
-management subsystem maintainer. After more sufficient tests, the patches will
-be queued in mm-stable [3]_ , and finally pull-requested to the mainline by the
-memory management subsystem maintainer.
-
-Note again the patches for review should be made against the mm-unstable
-tree [1]_ whenever possible. damon/next is only for preview of others' works
-in progress.
+development or testing are queued in `damon/next
+<https://git.kernel.org/sj/h/damon/next>`_ by the DAMON maintainer.
+Sufficiently reviewed patches will be queued in `mm-unstable
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ by the memory management
+subsystem maintainer. After more sufficient tests, the patches will be queued
+in `mm-stable <https://git.kernel.org/akpm/mm/h/mm-stable>`_, and finally
+pull-requested to the mainline by the memory management subsystem maintainer.
+
+Note again that the patches for the `mm-unstable tree
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ are queued by the memory
+management subsystem maintainer. If the patches require some patches in the
+`damon/next tree <https://git.kernel.org/sj/h/damon/next>`_ which are not yet
+merged in mm-unstable, please make sure the requirement is clearly specified.
Submit checklist addendum
-------------------------
@@ -31,32 +36,70 @@ When making DAMON changes, you should do below.
- Build changes related outputs including kernel and documents.
- Ensure the builds introduce no new errors or warnings.
-- Run and ensure no new failures for DAMON selftests [4]_ and kunittests [5]_ .
+- Run and ensure no new failures for DAMON `selftests
+ <https://github.com/damonitor/damon-tests/blob/master/corr/run.sh#L49>`_ and
+ `kunittests
+ <https://github.com/damonitor/damon-tests/blob/master/corr/tests/kunit.sh>`_.
Further doing below and putting the results will be helpful.
-- Run damon-tests/corr [6]_ for normal changes.
-- Run damon-tests/perf [7]_ for performance changes.
+- Run `damon-tests/corr
+ <https://github.com/damonitor/damon-tests/tree/master/corr>`_ for normal
+ changes.
+- Run `damon-tests/perf
+ <https://github.com/damonitor/damon-tests/tree/master/perf>`_ for performance
+ changes.
Key cycle dates
---------------
-Patches can be sent anytime. Key cycle dates of the mm-unstable [1]_ and
-mm-stable [3]_ trees depend on the memory management subsystem maintainer.
+Patches can be sent anytime. Key cycle dates of the `mm-unstable
+<https://git.kernel.org/akpm/mm/h/mm-unstable>`_ and `mm-stable
+<https://git.kernel.org/akpm/mm/h/mm-stable>`_ trees depend on the memory
+management subsystem maintainer.
Review cadence
--------------
The DAMON maintainer does the work on the usual work hour (09:00 to 17:00,
-Mon-Fri) in PST. The response to patches will occasionally be slow. Do not
-hesitate to send a ping if you have not heard back within a week of sending a
-patch.
-
-
-.. [1] https://git.kernel.org/akpm/mm/h/mm-unstable
-.. [2] https://git.kernel.org/sj/h/damon/next
-.. [3] https://git.kernel.org/akpm/mm/h/mm-stable
-.. [4] https://github.com/awslabs/damon-tests/blob/master/corr/run.sh#L49
-.. [5] https://github.com/awslabs/damon-tests/blob/master/corr/tests/kunit.sh
-.. [6] https://github.com/awslabs/damon-tests/tree/master/corr
-.. [7] https://github.com/awslabs/damon-tests/tree/master/perf
+Mon-Fri) in PT (Pacific Time). The response to patches will occasionally be
+slow. Do not hesitate to send a ping if you have not heard back within a week
+of sending a patch.
+
+Mailing tool
+------------
+
+Like many other Linux kernel subsystems, DAMON uses the mailing lists
+(damon@lists.linux.dev and linux-mm@kvack.org) as the major communication
+channel. There is a simple tool called `HacKerMaiL
+<https://github.com/damonitor/hackermail>`_ (``hkml``), which is for people who
+are not very familiar with mailing lists based communication. The tool could
+be particularly helpful for DAMON community members, since it is developed and
+maintained by the DAMON maintainer. The tool is also officially announced to
+support DAMON and the general Linux kernel development workflow.
+
+In other words, `hkml <https://github.com/damonitor/hackermail>`_ is a mailing
+tool for the DAMON community, which the DAMON maintainer is committed to
+supporting. Please feel free to try it and report issues or feature requests
+for the tool to the maintainer.
+
+Community meetup
+----------------
+
+The DAMON community maintains two bi-weekly meetup series for community
+members who prefer synchronous conversations over mails.
+
+The first one is for any discussion among all community members. No
+reservation is needed.
+
+The second one is for discussions on specific topics between restricted
+members including the maintainer. The maintainer shares the available time
+slots, and attendees should reserve one of those at least 24 hours before the
+time slot, by reaching out to the maintainer.
+
+Schedules and available reservation time slots are available at the Google `doc
+<https://docs.google.com/document/d/1v43Kcj3ly4CYqmAkMaZzLiM2GEnWfgdGbZAH3mi2vpM/edit?usp=sharing>`_.
+There is also a public Google `calendar
+<https://calendar.google.com/calendar/u/0?cid=ZDIwOTA4YTMxNjc2MDQ3NTIyMmUzYTM5ZmQyM2U4NDA0ZGIwZjBiYmJlZGQxNDM0MmY4ZTRjOTE0NjdhZDRiY0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t>`_
+that has the events. Anyone can subscribe to it. The DAMON maintainer will
+also provide periodic reminders to the mailing list (damon@lists.linux.dev).
diff --git a/Documentation/mm/damon/monitoring_intervals_tuning_example.rst b/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
new file mode 100644
index 000000000000..7207cbed591f
--- /dev/null
+++ b/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
@@ -0,0 +1,247 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================================================
+DAMON Monitoring Interval Parameters Tuning Example
+====================================================
+
+DAMON's monitoring parameters need tuning based on given workload and the
+monitoring purpose. There is a :ref:`tuning guide
+<damon_design_monitoring_params_tuning_guide>` for that. This document
+provides an example tuning based on the guide.
+
+Setup
+=====
+
+For the below example, DAMON of Linux kernel v6.11 and `damo
+<https://github.com/damonitor/damo>`_ (DAMON user-space tool) v2.5.9 were used
+to monitor and visualize access patterns on the physical address space of a
+system running a real-world server workload.
+
+5ms/100ms intervals: Too Short Interval
+=======================================
+
+Let's start by capturing the access pattern snapshot on the physical address
+space of the system using DAMON, with the default interval parameters (5
+milliseconds and 100 milliseconds for the sampling and the aggregation
+intervals, respectively). Wait ten minutes between the start of DAMON and
+the capturing of the snapshot, to show meaningful time-wise access patterns.
+::
+
+ # damo start
+ # sleep 600
+ # damo record --snapshot 0 1
+ # damo stop
+
+Then, list the DAMON-found regions of different access patterns, sorted by the
+"access temperature". "Access temperature" is a metric representing the
+access-hotness of a region. It is calculated as a weighted sum of the access
+frequency and the age of the region. If the access frequency is 0 %, the
+temperature is multiplied by minus one. That is, if a region is not accessed,
+it gets a negative temperature, which gets lower the longer the region stays
+unaccessed. The sorting is in temperature-ascending order, so the region at
+the top of the list is the coldest, and the one at the bottom is the hottest
+one. ::
+
+ # damo report access --sort_regions_by temperature
+ 0 addr 16.052 GiB size 5.985 GiB access 0 % age 5.900 s # coldest
+ 1 addr 22.037 GiB size 6.029 GiB access 0 % age 5.300 s
+ 2 addr 28.065 GiB size 6.045 GiB access 0 % age 5.200 s
+ 3 addr 10.069 GiB size 5.983 GiB access 0 % age 4.500 s
+ 4 addr 4.000 GiB size 6.069 GiB access 0 % age 4.400 s
+ 5 addr 62.008 GiB size 3.992 GiB access 0 % age 3.700 s
+ 6 addr 56.795 GiB size 5.213 GiB access 0 % age 3.300 s
+ 7 addr 39.393 GiB size 6.096 GiB access 0 % age 2.800 s
+ 8 addr 50.782 GiB size 6.012 GiB access 0 % age 2.800 s
+ 9 addr 34.111 GiB size 5.282 GiB access 0 % age 2.300 s
+ 10 addr 45.489 GiB size 5.293 GiB access 0 % age 1.800 s # hottest
+ total size: 62.000 GiB
+
+The list shows no seemingly hot regions, and only minimal access pattern
+diversity. Every region has zero access frequency. The number of regions is
+10, which is the default ``min_nr_regions`` value. The size of each region is
+also nearly identical. We can suspect this is because the "adaptive regions
+adjustment" mechanism was not working well. As the guide suggested, we can get
+the relative hotness of regions using ``age`` as the recency information. That
+would be better than nothing, but given the fact that the longest age is only
+about 6 seconds while we waited about ten minutes, it is unclear how useful
+this will be.
+
+The histogram visualization of the results, showing the total size of regions
+in each temperature range, also shows no interesting distribution pattern. ::
+
+ # damo report access --style temperature-sz-hist
+ <temperature> <total size>
+ [-,590,000,000, -,549,000,000) 5.985 GiB |********** |
+ [-,549,000,000, -,508,000,000) 12.074 GiB |********************|
+ [-,508,000,000, -,467,000,000) 0 B | |
+ [-,467,000,000, -,426,000,000) 12.052 GiB |********************|
+ [-,426,000,000, -,385,000,000) 0 B | |
+ [-,385,000,000, -,344,000,000) 3.992 GiB |******* |
+ [-,344,000,000, -,303,000,000) 5.213 GiB |********* |
+ [-,303,000,000, -,262,000,000) 12.109 GiB |********************|
+ [-,262,000,000, -,221,000,000) 5.282 GiB |********* |
+ [-,221,000,000, -,180,000,000) 0 B | |
+ [-,180,000,000, -,139,000,000) 5.293 GiB |********* |
+ total size: 62.000 GiB
+
+In short, the parameters provide poor quality monitoring results for hot
+regions detection. According to the :ref:`guide
+<damon_design_monitoring_params_tuning_guide>`, this is due to the too short
+aggregation interval.
+
+100ms/2s intervals: Starts Showing Small Hot Regions
+====================================================
+
+Following the guide, increase the intervals 20 times (100 milliseconds and 2
+seconds for sampling and aggregation intervals, respectively). ::
+
+ # damo start -s 100ms -a 2s
+ # sleep 600
+ # damo record --snapshot 0 1
+ # damo stop
+ # damo report access --sort_regions_by temperature
+ 0 addr 10.180 GiB size 6.117 GiB access 0 % age 7 m 8 s # coldest
+ 1 addr 49.275 GiB size 6.195 GiB access 0 % age 6 m 14 s
+ 2 addr 62.421 GiB size 3.579 GiB access 0 % age 6 m 4 s
+ 3 addr 40.154 GiB size 6.127 GiB access 0 % age 5 m 40 s
+ 4 addr 16.296 GiB size 6.182 GiB access 0 % age 5 m 32 s
+ 5 addr 34.254 GiB size 5.899 GiB access 0 % age 5 m 24 s
+ 6 addr 46.281 GiB size 2.995 GiB access 0 % age 5 m 20 s
+ 7 addr 28.420 GiB size 5.835 GiB access 0 % age 5 m 6 s
+ 8 addr 4.000 GiB size 6.180 GiB access 0 % age 4 m 16 s
+ 9 addr 22.478 GiB size 5.942 GiB access 0 % age 3 m 58 s
+ 10 addr 55.470 GiB size 915.645 MiB access 0 % age 3 m 6 s
+ 11 addr 56.364 GiB size 6.056 GiB access 0 % age 2 m 8 s
+ 12 addr 56.364 GiB size 4.000 KiB access 95 % age 16 s
+ 13 addr 49.275 GiB size 4.000 KiB access 100 % age 8 m 24 s # hottest
+ total size: 62.000 GiB
+ # damo report access --style temperature-sz-hist
+ <temperature> <total size>
+ [-42,800,000,000, -33,479,999,000) 22.018 GiB |***************** |
+ [-33,479,999,000, -24,159,998,000) 27.090 GiB |********************|
+ [-24,159,998,000, -14,839,997,000) 6.836 GiB |****** |
+ [-14,839,997,000, -5,519,996,000) 6.056 GiB |***** |
+ [-5,519,996,000, 3,800,005,000) 4.000 KiB |* |
+ [3,800,005,000, 13,120,006,000) 0 B | |
+ [13,120,006,000, 22,440,007,000) 0 B | |
+ [22,440,007,000, 31,760,008,000) 0 B | |
+ [31,760,008,000, 41,080,009,000) 0 B | |
+ [41,080,009,000, 50,400,010,000) 0 B | |
+ [50,400,010,000, 59,720,011,000) 4.000 KiB |* |
+ total size: 62.000 GiB
+
+DAMON found two distinct 4 KiB regions that are pretty hot. The regions are
+also well aged. The hottest 4 KiB region was keeping its access frequency for
+about 8 minutes, and the coldest region was keeping no access for about 7
+minutes. The distribution on the histogram also appears to have a pattern.
+
+Especially, the finding of the 4 KiB regions among the 62 GiB total memory
+shows DAMON’s adaptive regions adjustment is working as designed.
+
+Still, the number of regions is close to ``min_nr_regions``, and the sizes of
+the cold regions are similar. The results are apparently improved, but there
+is still room for further improvement.
+
+400ms/8s intervals: Pretty Improved Results
+===========================================
+
+Increase the intervals four times (400 milliseconds and 8 seconds
+for sampling and aggregation intervals, respectively). ::
+
+ # damo start -s 400ms -a 8s
+ # sleep 600
+ # damo record --snapshot 0 1
+ # damo stop
+ # damo report access --sort_regions_by temperature
+ 0 addr 64.492 GiB size 1.508 GiB access 0 % age 6 m 48 s # coldest
+ 1 addr 21.749 GiB size 5.674 GiB access 0 % age 6 m 8 s
+ 2 addr 27.422 GiB size 5.801 GiB access 0 % age 6 m
+ 3 addr 49.431 GiB size 8.675 GiB access 0 % age 5 m 28 s
+ 4 addr 33.223 GiB size 5.645 GiB access 0 % age 5 m 12 s
+ 5 addr 58.321 GiB size 6.170 GiB access 0 % age 5 m 4 s
+ [...]
+ 25 addr 6.615 GiB size 297.531 MiB access 15 % age 0 ns
+ 26 addr 9.513 GiB size 12.000 KiB access 20 % age 0 ns
+ 27 addr 9.511 GiB size 108.000 KiB access 25 % age 0 ns
+ 28 addr 9.513 GiB size 20.000 KiB access 25 % age 0 ns
+ 29 addr 9.511 GiB size 12.000 KiB access 30 % age 0 ns
+ 30 addr 9.520 GiB size 4.000 KiB access 40 % age 0 ns
+ [...]
+ 41 addr 9.520 GiB size 4.000 KiB access 80 % age 56 s
+ 42 addr 9.511 GiB size 12.000 KiB access 100 % age 6 m 16 s
+ 43 addr 58.321 GiB size 4.000 KiB access 100 % age 6 m 24 s
+ 44 addr 9.512 GiB size 4.000 KiB access 100 % age 6 m 48 s
+ 45 addr 58.106 GiB size 4.000 KiB access 100 % age 6 m 48 s # hottest
+ total size: 62.000 GiB
+ # damo report access --style temperature-sz-hist
+ <temperature> <total size>
+ [-40,800,000,000, -32,639,999,000) 21.657 GiB |********************|
+ [-32,639,999,000, -24,479,998,000) 17.938 GiB |***************** |
+ [-24,479,998,000, -16,319,997,000) 16.885 GiB |**************** |
+ [-16,319,997,000, -8,159,996,000) 586.879 MiB |* |
+ [-8,159,996,000, 5,000) 4.946 GiB |***** |
+ [5,000, 8,160,006,000) 260.000 KiB |* |
+ [8,160,006,000, 16,320,007,000) 0 B | |
+ [16,320,007,000, 24,480,008,000) 0 B | |
+ [24,480,008,000, 32,640,009,000) 0 B | |
+ [32,640,009,000, 40,800,010,000) 16.000 KiB |* |
+ [40,800,010,000, 48,960,011,000) 8.000 KiB |* |
+ total size: 62.000 GiB
+
+The number of regions having different access patterns has significantly
+increased. The sizes of the regions also vary more. The total size of the
+regions having non-zero access frequency has also significantly increased.
+This may already be good enough to make some meaningful memory management
+efficiency changes.
+
+800ms/16s intervals: Another bias
+=================================
+
+Further double the intervals (800 milliseconds and 16 seconds for the sampling
+and aggregation intervals, respectively). The results further improve hot
+regions detection, but cold regions detection starts to degrade. ::
+
+ # damo start -s 800ms -a 16s
+ # sleep 600
+ # damo record --snapshot 0 1
+ # damo stop
+ # damo report access --sort_regions_by temperature
+ 0 addr 64.781 GiB size 1.219 GiB access 0 % age 4 m 48 s
+ 1 addr 24.505 GiB size 2.475 GiB access 0 % age 4 m 16 s
+ 2 addr 26.980 GiB size 504.273 MiB access 0 % age 4 m
+ 3 addr 29.443 GiB size 2.462 GiB access 0 % age 4 m
+ 4 addr 37.264 GiB size 5.645 GiB access 0 % age 4 m
+ 5 addr 31.905 GiB size 5.359 GiB access 0 % age 3 m 44 s
+ [...]
+ 20 addr 8.711 GiB size 40.000 KiB access 5 % age 2 m 40 s
+ 21 addr 27.473 GiB size 1.970 GiB access 5 % age 4 m
+ 22 addr 48.185 GiB size 4.625 GiB access 5 % age 4 m
+ 23 addr 47.304 GiB size 902.117 MiB access 10 % age 4 m
+ 24 addr 8.711 GiB size 4.000 KiB access 100 % age 4 m
+ 25 addr 20.793 GiB size 3.713 GiB access 5 % age 4 m 16 s
+ 26 addr 8.773 GiB size 4.000 KiB access 100 % age 4 m 16 s
+ total size: 62.000 GiB
+ # damo report access --style temperature-sz-hist
+ <temperature> <total size>
+ [-28,800,000,000, -23,359,999,000) 12.294 GiB |***************** |
+ [-23,359,999,000, -17,919,998,000) 9.753 GiB |************* |
+ [-17,919,998,000, -12,479,997,000) 15.131 GiB |********************|
+ [-12,479,997,000, -7,039,996,000) 0 B | |
+ [-7,039,996,000, -1,599,995,000) 7.506 GiB |********** |
+ [-1,599,995,000, 3,840,006,000) 6.127 GiB |********* |
+ [3,840,006,000, 9,280,007,000) 0 B | |
+ [9,280,007,000, 14,720,008,000) 136.000 KiB |* |
+ [14,720,008,000, 20,160,009,000) 40.000 KiB |* |
+ [20,160,009,000, 25,600,010,000) 11.188 GiB |*************** |
+ [25,600,010,000, 31,040,011,000) 4.000 KiB |* |
+ total size: 62.000 GiB
+
+It found more non-zero access frequency regions. The number of regions is still
+much higher than ``min_nr_regions``, but it is reduced from that of the
+previous setup. The distribution also appears somewhat biased toward hot
+regions.
+
+Conclusion
+==========
+
+With the above experimental tuning results, we can conclude that the theory and
+the guide make sense for at least this workload, and could be applied to
+similar cases.
diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index 0595098a74d9..7d61b7a8b65b 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -66,7 +66,7 @@ combinatorial explosion in the library entry points.
Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
-are only do-able with a shared address space. It is also more reasonable to use
+are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.
@@ -267,7 +267,7 @@ functions are designed to make drivers easier to write and to centralize common
code across drivers.
Before migrating pages to device private memory, special device private
-``struct page`` need to be created. These will be used as special "swap"
+``struct page`` needs to be created. These will be used as special "swap"
page table entries so that a CPU process will fault if it tries to access
a page that has been migrated to device private memory.
@@ -322,7 +322,7 @@ between device driver specific code and shared common code:
The ``invalidate_range_start()`` callback is passed a
``struct mmu_notifier_range`` with the ``event`` field set to
``MMU_NOTIFY_MIGRATE`` and the ``owner`` field set to
- the ``args->pgmap_owner`` field passed to migrate_vma_setup(). This is
+ the ``args->pgmap_owner`` field passed to migrate_vma_setup(). This
allows the device driver to skip the invalidation callback and only
invalidate device private MMU mappings that are actually migrating.
This is explained more in the next section.
@@ -400,12 +400,12 @@ Exclusive access memory
Some devices have features such as atomic PTE bits that can be used to implement
atomic access to system memory. To support atomic operations to a shared virtual
memory page such a device needs access to that page which is exclusive of any
-userspace access from the CPU. The ``make_device_exclusive_range()`` function
+userspace access from the CPU. The ``make_device_exclusive()`` function
can be used to make a memory range inaccessible from userspace.
This replaces all mappings for pages in the given range with special swap
entries. Any attempt to access the swap entry results in a fault which is
-resovled by replacing the entry with the original mapping. A driver gets
+resolved by replacing the entry with the original mapping. A driver gets
notified that the mapping has been changed by MMU notifiers, after which point
it will no longer have exclusive access to the page. Exclusive access is
guaranteed to last until the driver drops the page lock and page reference, at
@@ -431,7 +431,7 @@ Same decision was made for memory cgroup. Device memory pages are accounted
against same memory cgroup a regular page would be accounted to. This does
simplify migration to and from device memory. This also means that migration
back from device memory to regular memory cannot fail because it would
-go above memory cgroup limit. We might revisit this choice latter on once we
+go above memory cgroup limit. We might revisit this choice later on once we
get more experience in how device memory is used and its impact on memory
resource control.
diff --git a/Documentation/mm/index.rst b/Documentation/mm/index.rst
index 31d2ac306438..d3ada3e45e10 100644
--- a/Documentation/mm/index.rst
+++ b/Documentation/mm/index.rst
@@ -2,9 +2,6 @@
Memory Management Documentation
===============================
-Memory Management Guide
-=======================
-
This is a guide to understanding the memory management subsystem
of Linux. If you are looking for advice on simply allocating memory,
see the :ref:`memory_allocation`. For controlling and tuning guides,
@@ -27,19 +24,20 @@ see the :doc:`admin guide <../admin-guide/mm/index>`.
shmfs
oom
-Legacy Documentation
-====================
+Unsorted Documentation
+======================
-This is a collection of older documents about the Linux memory management
-(MM) subsystem internals with different level of details ranging from
-notes and mailing list responses for elaborating descriptions of data
-structures and algorithms. It should all be integrated nicely into the
-above structured documentation, or deleted if it has served its purpose.
+This is a collection of unsorted documents about the Linux memory management
+(MM) subsystem internals with different levels of detail, ranging from notes and
+mailing list responses for elaborating descriptions of data structures and
+algorithms. It should all be integrated nicely into the above structured
+documentation, or deleted if it has served its purpose.
.. toctree::
:maxdepth: 1
active_mm
+ allocation-profiling
arch_pgtable_helpers
balance
damon/index
@@ -64,5 +62,4 @@ above structured documentation, or deleted if it has served its purpose.
unevictable-lru
vmalloced-kernel-stacks
vmemmap_dedup
- z3fold
zsmalloc
diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index f1ce67a26615..519b35a4caf5 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -63,15 +63,15 @@ and then a low level description of how the low level details work.
In kernel use of migrate_pages()
================================
-1. Remove pages from the LRU.
+1. Remove folios from the LRU.
- Lists of pages to be migrated are generated by scanning over
- pages and moving them into lists. This is done by
- calling isolate_lru_page().
- Calling isolate_lru_page() increases the references to the page
- so that it cannot vanish while the page migration occurs.
+ Lists of folios to be migrated are generated by scanning over
+ folios and moving them into lists. This is done by
+ calling folio_isolate_lru().
+ Calling folio_isolate_lru() increases the references to the folio
+ so that it cannot vanish while the folio migration occurs.
It also prevents the swapper or other scans from encountering
- the page.
+ the folio.
2. We need to have a function of type new_folio_t that can be
passed to migrate_pages(). This function should figure out
@@ -84,10 +84,10 @@ In kernel use of migrate_pages()
How migrate_pages() works
=========================
-migrate_pages() does several passes over its list of pages. A page is moved
-if all references to a page are removable at the time. The page has
-already been removed from the LRU via isolate_lru_page() and the refcount
-is increased so that the page cannot be freed while page migration occurs.
+migrate_pages() does several passes over its list of folios. A folio is moved
+if all references to a folio are removable at the time. The folio has
+already been removed from the LRU via folio_isolate_lru() and the refcount
+is increased so that the folio cannot be freed while folio migration occurs.
Steps:
diff --git a/Documentation/mm/page_table_check.rst b/Documentation/mm/page_table_check.rst
index c12838ce6b8d..c59f22eb6a0f 100644
--- a/Documentation/mm/page_table_check.rst
+++ b/Documentation/mm/page_table_check.rst
@@ -14,7 +14,7 @@ Page table check performs extra verifications at the time when new pages become
accessible from the userspace by getting their page table entries (PTEs PMDs
etc.) added into the table.
-In case of detected corruption, the kernel is crashed. There is a small
+In most cases of detected corruption, the kernel is crashed. There is a small
performance and memory overhead associated with the page table check. Therefore,
it is disabled by default, but can be optionally enabled on systems where the
extra hardening outweighs the performance costs. Also, because page table check
@@ -22,6 +22,13 @@ is synchronous, it can help with debugging double map memory corruption issues,
by crashing kernel at the time wrong mapping occurs instead of later which is
often the case with memory corruptions bugs.
+It can also be used to check page table entries against various flags and dump
+warnings when illegal combinations of entry flags are detected. Currently,
+userfaultfd is the only user of this, sanity checking the wr-protect bit
+against any writable flags. Such illegal flag combinations will not immediately
+cause data corruption, but they will make read-only data writable, leading to
+corruption when the page content is later modified.
+
Double mapping detection logic
==============================
diff --git a/Documentation/mm/page_tables.rst b/Documentation/mm/page_tables.rst
index be47b192a596..e7c69cc32493 100644
--- a/Documentation/mm/page_tables.rst
+++ b/Documentation/mm/page_tables.rst
@@ -29,7 +29,7 @@ address.
With a page granularity of 4KB and a address range of 32 bits, pfn 0 is at
address 0x00000000, pfn 1 is at address 0x00001000, pfn 2 is at 0x00002000
and so on until we reach pfn 0xfffff at 0xfffff000. With 16KB pages pfs are
-at 0x00004000, 0x00008000 ... 0xffffc000 and pfn goes from 0 to 0x3fffff.
+at 0x00004000, 0x00008000 ... 0xffffc000 and pfn goes from 0 to 0x3ffff.
As you can see, with 4KB pages the page base address uses bits 12-31 of the
address, and this is why `PAGE_SHIFT` in this case is defined as 12 and
diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index 531e73b003dd..d3ac106e6b14 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -33,7 +33,7 @@ The entire physical address space is partitioned into one or more blocks
called zones which represent ranges within memory. These ranges are usually
determined by architectural constraints for accessing the physical memory.
The memory range within a node that corresponds to a particular zone is
-described by a ``struct zone``, typedeffed to ``zone_t``. Each zone has
+described by a ``struct zone``. Each zone has
one of the types described below.
* ``ZONE_DMA`` and ``ZONE_DMA32`` historically represented memory suitable for
@@ -338,10 +338,272 @@ Statistics
Zones
=====
+As we have mentioned, each zone in memory is described by a ``struct zone``
+which is an element of the ``node_zones`` array of the node it belongs to.
+``struct zone`` is the core data structure of the page allocator. A zone
+represents a range of physical memory and may have holes.
+
+The page allocator uses the GFP flags, see :ref:`mm-api-gfp-flags`, specified by
+a memory allocation to determine the highest zone in a node from which the
+memory allocation can allocate memory. The page allocator first allocates memory
+from that zone; if it can't allocate the requested amount of memory from that
+zone, it will allocate memory from the next lower zone in the node. The process
+continues down to and including the lowest zone. For example, if
+a node contains ``ZONE_DMA32``, ``ZONE_NORMAL`` and ``ZONE_MOVABLE`` and the
+highest zone of a memory allocation is ``ZONE_MOVABLE``, the order of the zones
+from which the page allocator allocates memory is ``ZONE_MOVABLE`` >
+``ZONE_NORMAL`` > ``ZONE_DMA32``.
+
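+As a rough illustration of the fallback order described above, consider the
+following minimal sketch. The wrapper function is hypothetical and exists only
+for this example; ``alloc_pages()`` and ``GFP_HIGHUSER_MOVABLE`` are the real
+kernel symbols.
+
+.. code-block:: c
+
+   #include <linux/gfp.h>
+
+   /*
+    * Allocate one page (order 0) whose highest allowed zone is ZONE_MOVABLE.
+    * GFP_HIGHUSER_MOVABLE includes __GFP_HIGHMEM and __GFP_MOVABLE, so the
+    * page allocator tries ZONE_MOVABLE first and then falls back through the
+    * node's lower zones (for example ZONE_NORMAL, then ZONE_DMA32).
+    */
+   static struct page *example_alloc_movable_page(void)
+   {
+           return alloc_pages(GFP_HIGHUSER_MOVABLE, 0);
+   }
+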
+At runtime, free pages in a zone are in the Per-CPU Pagesets (PCP) or free areas
+of the zone. The Per-CPU Pagesets are a vital mechanism in the kernel's memory
+management system. By handling most frequent allocations and frees locally on
+each CPU, the Per-CPU Pagesets improve performance and scalability, especially
+on systems with many cores. The page allocator in the kernel employs a two-step
+strategy for memory allocation, starting with the Per-CPU Pagesets before
+falling back to the buddy allocator. Pages are transferred between the Per-CPU
+Pagesets and the global free areas (managed by the buddy allocator) in batches.
+This minimizes the overhead of frequent interactions with the global buddy
+allocator.
+
+Architecture specific code calls free_area_init() to initialize zones.
+
+Zone structure
+--------------
+The zone structure, ``struct zone``, is defined in ``include/linux/mmzone.h``.
+Here we briefly describe the fields of this structure:
-.. admonition:: Stub
+General
+~~~~~~~
- This section is incomplete. Please list and describe the appropriate fields.
+``_watermark``
+  The watermarks for this zone. When the amount of free pages in a zone is
+  below the min watermark, boosting is ignored, an allocation may trigger
+  direct reclaim and direct compaction, and the min watermark is also used to
+  throttle direct reclaim.
+ When the amount of free pages in a zone is below the low watermark, kswapd is
+ woken up. When the amount of free pages in a zone is above the high watermark,
+ kswapd stops reclaiming (a zone is balanced) when the
+ ``NUMA_BALANCING_MEMORY_TIERING`` bit of ``sysctl_numa_balancing_mode`` is not
+ set. The promo watermark is used for memory tiering and NUMA balancing. When
+ the amount of free pages in a zone is above the promo watermark, kswapd stops
+ reclaiming when the ``NUMA_BALANCING_MEMORY_TIERING`` bit of
+ ``sysctl_numa_balancing_mode`` is set. The watermarks are set by
+ ``__setup_per_zone_wmarks()``. The min watermark is calculated according to
+ ``vm.min_free_kbytes`` sysctl. The other three watermarks are set according
+ to the distance between two watermarks. The distance itself is calculated
+ taking ``vm.watermark_scale_factor`` sysctl into account.
+
+``watermark_boost``
+ The number of pages which are used to boost watermarks to increase reclaim
+ pressure to reduce the likelihood of future fallbacks and wake kswapd now
+ as the node may be balanced overall and kswapd will not wake naturally.
+
+``nr_reserved_highatomic``
+ The number of pages which are reserved for high-order atomic allocations.
+
+``nr_free_highatomic``
+  The number of free pages in reserved highatomic pageblocks.
+
+``lowmem_reserve``
+ The array of the amounts of the memory reserved in this zone for memory
+ allocations. For example, if the highest zone a memory allocation can
+ allocate memory from is ``ZONE_MOVABLE``, the amount of memory reserved in
+ this zone for this allocation is ``lowmem_reserve[ZONE_MOVABLE]`` when
+ attempting to allocate memory from this zone. This is a mechanism the page
+ allocator uses to prevent allocations which could use ``highmem`` from using
+ too much ``lowmem``. For some specialised workloads on ``highmem`` machines,
+ it is dangerous for the kernel to allow process memory to be allocated from
+ the ``lowmem`` zone. This is because that memory could then be pinned via the
+ ``mlock()`` system call, or by unavailability of swapspace.
+ ``vm.lowmem_reserve_ratio`` sysctl determines how aggressive the kernel is in
+ defending these lower zones. This array is recalculated by
+ ``setup_per_zone_lowmem_reserve()`` at runtime if ``vm.lowmem_reserve_ratio``
+ sysctl changes.
+
+``node``
+ The index of the node this zone belongs to. Available only when
+ ``CONFIG_NUMA`` is enabled because there is only one zone in a UMA system.
+
+``zone_pgdat``
+ Pointer to the ``struct pglist_data`` of the node this zone belongs to.
+
+``per_cpu_pageset``
+ Pointer to the Per-CPU Pagesets (PCP) allocated and initialized by
+ ``setup_zone_pageset()``. By handling most frequent allocations and frees
+ locally on each CPU, PCP improves performance and scalability on systems with
+ many cores.
+
+``pageset_high_min``
+ Copied to the ``high_min`` of the Per-CPU Pagesets for faster access.
+
+``pageset_high_max``
+ Copied to the ``high_max`` of the Per-CPU Pagesets for faster access.
+
+``pageset_batch``
+ Copied to the ``batch`` of the Per-CPU Pagesets for faster access. The
+ ``batch``, ``high_min`` and ``high_max`` of the Per-CPU Pagesets are used to
+ calculate the number of elements the Per-CPU Pagesets obtain from the buddy
+ allocator under a single hold of the lock for efficiency. They are also used
+ to decide if the Per-CPU Pagesets return pages to the buddy allocator in page
+ free process.
+
+``pageblock_flags``
+ The pointer to the flags for the pageblocks in the zone (see
+ ``include/linux/pageblock-flags.h`` for flags list). The memory is allocated
+ in ``setup_usemap()``. Each pageblock occupies ``NR_PAGEBLOCK_BITS`` bits.
+  Defined only when ``CONFIG_FLATMEM`` is enabled. The flags are stored in
+ ``mem_section`` when ``CONFIG_SPARSEMEM`` is enabled.
+
+``zone_start_pfn``
+ The start pfn of the zone. It is initialized by
+ ``calculate_node_totalpages()``.
+
+``managed_pages``
+ The present pages managed by the buddy system, which is calculated as:
+  ``managed_pages`` = ``present_pages`` - ``reserved_pages``, where
+  ``reserved_pages`` includes pages allocated by the memblock allocator. It
+  should be used by the page allocator and vm scanner to calculate all kinds
+  of watermarks and thresholds.
+ It is accessed using ``atomic_long_xxx()`` functions. It is initialized in
+ ``free_area_init_core()`` and then is reinitialized when memblock allocator
+ frees pages into buddy system.
+
+``spanned_pages``
+ The total pages spanned by the zone, including holes, which is calculated as:
+ ``spanned_pages`` = ``zone_end_pfn`` - ``zone_start_pfn``. It is initialized
+ by ``calculate_node_totalpages()``.
+
+``present_pages``
+ The physical pages existing within the zone, which is calculated as:
+ ``present_pages`` = ``spanned_pages`` - ``absent_pages`` (pages in holes). It
+ may be used by memory hotplug or memory power management logic to figure out
+ unmanaged pages by checking (``present_pages`` - ``managed_pages``). Write
+ access to ``present_pages`` at runtime should be protected by
+  ``mem_hotplug_begin/done()``. Any reader who can't tolerate drift of
+ ``present_pages`` should use ``get_online_mems()`` to get a stable value. It
+ is initialized by ``calculate_node_totalpages()``.
+
+``present_early_pages``
+ The present pages existing within the zone located on memory available since
+ early boot, excluding hotplugged memory. Defined only when
+ ``CONFIG_MEMORY_HOTPLUG`` is enabled and initialized by
+ ``calculate_node_totalpages()``.
+
+``cma_pages``
+ The pages reserved for CMA use. These pages behave like ``ZONE_MOVABLE`` when
+ they are not used for CMA. Defined only when ``CONFIG_CMA`` is enabled.
+
+``name``
+ The name of the zone. It is a pointer to the corresponding element of
+ the ``zone_names`` array.
+
+``nr_isolate_pageblock``
+  Number of isolated pageblocks. It is used to solve an incorrect freepage
+  counting problem caused by racy retrieval of the pageblock migratetype.
+  Protected by
+ ``zone->lock``. Defined only when ``CONFIG_MEMORY_ISOLATION`` is enabled.
+
+``span_seqlock``
+ The seqlock to protect ``zone_start_pfn`` and ``spanned_pages``. It is a
+ seqlock because it has to be read outside of ``zone->lock``, and it is done in
+ the main allocator path. However, the seqlock is written quite infrequently.
+ Defined only when ``CONFIG_MEMORY_HOTPLUG`` is enabled.
+
+``initialized``
+ The flag indicating if the zone is initialized. Set by
+ ``init_currently_empty_zone()`` during boot.
+
+``free_area``
+ The array of free areas, where each element corresponds to a specific order
+ which is a power of two. The buddy allocator uses this structure to manage
+ free memory efficiently. When allocating, it tries to find the smallest
+  sufficient block; if the smallest sufficient block is larger than the
+ requested size, it will be recursively split into the next smaller blocks
+ until the required size is reached. When a page is freed, it may be merged
+ with its buddy to form a larger block. It is initialized by
+ ``zone_init_free_lists()``.
+
+``unaccepted_pages``
+  The list of pages to be accepted. All pages on the list are of order ``MAX_PAGE_ORDER``.
+ Defined only when ``CONFIG_UNACCEPTED_MEMORY`` is enabled.
+
+``flags``
+  The zone flags. The lowest three bits are used and defined by
+ ``enum zone_flags``. ``ZONE_BOOSTED_WATERMARK`` (bit 0): zone recently boosted
+ watermarks. Cleared when kswapd is woken. ``ZONE_RECLAIM_ACTIVE`` (bit 1):
+ kswapd may be scanning the zone. ``ZONE_BELOW_HIGH`` (bit 2): zone is below
+ high watermark.
+
+``lock``
+ The main lock that protects the internal data structures of the page allocator
+ specific to the zone, especially protects ``free_area``.
+
+``percpu_drift_mark``
+ When free pages are below this point, additional steps are taken when reading
+ the number of free pages to avoid per-cpu counter drift allowing watermarks
+ to be breached. It is updated in ``refresh_zone_stat_thresholds()``.
+
+Compaction control
+~~~~~~~~~~~~~~~~~~
+
+``compact_cached_free_pfn``
+ The PFN where compaction free scanner should start in the next scan.
+
+``compact_cached_migrate_pfn``
+ The PFNs where compaction migration scanner should start in the next scan.
+ This array has two elements: the first one is used in ``MIGRATE_ASYNC`` mode,
+ and the other one is used in ``MIGRATE_SYNC`` mode.
+
+``compact_init_migrate_pfn``
+ The initial migration PFN which is initialized to 0 at boot time, and to the
+ first pageblock with migratable pages in the zone after a full compaction
+ finishes. It is used to check if a scan is a whole zone scan or not.
+
+``compact_init_free_pfn``
+ The initial free PFN which is initialized to 0 at boot time and to the last
+ pageblock with free ``MIGRATE_MOVABLE`` pages in the zone. It is used to check
+ if it is the start of a scan.
+
+``compact_considered``
+ The number of compactions attempted since last failure. It is reset in
+ ``defer_compaction()`` when a compaction fails to result in a page allocation
+ success. It is increased by 1 in ``compaction_deferred()`` when a compaction
+ should be skipped. ``compaction_deferred()`` is called before
+ ``compact_zone()`` is called, ``compaction_defer_reset()`` is called when
+ ``compact_zone()`` returns ``COMPACT_SUCCESS``, ``defer_compaction()`` is
+ called when ``compact_zone()`` returns ``COMPACT_PARTIAL_SKIPPED`` or
+ ``COMPACT_COMPLETE``.
+
+``compact_defer_shift``
+ The number of compactions skipped before trying again is
+ ``1<<compact_defer_shift``. It is increased by 1 in ``defer_compaction()``.
+ It is reset in ``compaction_defer_reset()`` when a direct compaction results
+ in a page allocation success. Its maximum value is ``COMPACT_MAX_DEFER_SHIFT``.
+
+``compact_order_failed``
+ The minimum compaction failed order. It is set in ``compaction_defer_reset()``
+ when a compaction succeeds and in ``defer_compaction()`` when a compaction
+ fails to result in a page allocation success.
+
+``compact_blockskip_flush``
+ Set to true when compaction migration scanner and free scanner meet, which
+ means the ``PB_migrate_skip`` bits should be cleared.
+
+``contiguous``
+ Set to true when the zone is contiguous (in other words, no hole).
+
+Statistics
+~~~~~~~~~~
+
+``vm_stat``
+ VM statistics for the zone. The items tracked are defined by
+ ``enum zone_stat_item``.
+
+``vm_numa_event``
+ VM NUMA event statistics for the zone. The items tracked are defined by
+ ``enum numa_stat_item``.
+
+``per_cpu_zonestats``
+ Per-CPU VM statistics for the zone. It records VM statistics and VM NUMA event
+ statistics on a per-CPU basis. It reduces updates to the global ``vm_stat``
+ and ``vm_numa_event`` fields of the zone to improve performance.
.. _pages:
diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index e8618fbc62c9..e6756e78b476 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -3,3 +3,865 @@
=================
Process Addresses
=================
+
+.. toctree::
+ :maxdepth: 3
+
+
+Userland memory ranges are tracked by the kernel via Virtual Memory Areas or
+'VMA's of type :c:struct:`!struct vm_area_struct`.
+
+Each VMA describes a virtually contiguous memory range with identical
+attributes, each described by a :c:struct:`!struct vm_area_struct`
+object. Userland access outside of VMAs is invalid except in the case where an
+adjacent stack VMA could be extended to contain the accessed address.
+
+All VMAs are contained within one and only one virtual address space, described
+by a :c:struct:`!struct mm_struct` object which is referenced by all tasks (that is,
+threads) which share the virtual address space. We refer to this as the
+:c:struct:`!mm`.
+
+Each mm object contains a maple tree data structure which describes all VMAs
+within the virtual address space.
+
+.. note:: An exception to this is the 'gate' VMA which is provided by
+ architectures which use :c:struct:`!vsyscall` and is a global static
+ object which does not belong to any specific mm.
+
+-------
+Locking
+-------
+
+The kernel is designed to be highly scalable against concurrent read operations
+on VMA **metadata** so a complicated set of locks are required to ensure memory
+corruption does not occur.
+
+.. note:: Locking VMAs for their metadata does not have any impact on the memory
+ they describe nor the page tables that map them.
+
+Terminology
+-----------
+
+* **mmap locks** - Each MM has a read/write semaphore :c:member:`!mmap_lock`
+ which locks at a process address space granularity which can be acquired via
+ :c:func:`!mmap_read_lock`, :c:func:`!mmap_write_lock` and variants.
+* **VMA locks** - The VMA lock is at VMA granularity (of course) which behaves
+ as a read/write semaphore in practice. A VMA read lock is obtained via
+ :c:func:`!lock_vma_under_rcu` (and unlocked via :c:func:`!vma_end_read`) and a
+ write lock via :c:func:`!vma_start_write` (all VMA write locks are unlocked
+ automatically when the mmap write lock is released). To take a VMA write lock
+ you **must** have already acquired an :c:func:`!mmap_write_lock`.
+* **rmap locks** - When trying to access VMAs through the reverse mapping via a
+ :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object
+  (reachable from a folio via :c:member:`!folio->mapping`), VMAs must be stabilised via
+ :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for
+ anonymous memory and :c:func:`!i_mmap_[try]lock_read` or
+ :c:func:`!i_mmap_[try]lock_write` for file-backed memory. We refer to these
+ locks as the reverse mapping locks, or 'rmap locks' for brevity.
+
+We discuss page table locks separately in the dedicated section below.
+
+The first thing **any** of these locks achieve is to **stabilise** the VMA
+within the MM tree. That is, guaranteeing that the VMA object will not be
+deleted from under you nor modified (except for some specific fields
+described below).
+
+Stabilising a VMA also keeps the address space described by it around.
+
+Lock usage
+----------
+
+If you want to **read** VMA metadata fields or just keep the VMA stable, you
+must do one of the following:
+
+* Obtain an mmap read lock at the MM granularity via :c:func:`!mmap_read_lock` (or a
+ suitable variant), unlocking it with a matching :c:func:`!mmap_read_unlock` when
+ you're done with the VMA, *or*
+* Try to obtain a VMA read lock via :c:func:`!lock_vma_under_rcu`. This tries to
+ acquire the lock atomically so might fail, in which case fall-back logic is
+ required to instead obtain an mmap read lock if this returns :c:macro:`!NULL`,
+ *or*
+* Acquire an rmap lock before traversing the locked interval tree (whether
+ anonymous or file-backed) to obtain the required VMA.
+
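+A minimal sketch of the read-side options above is shown below. The helper and
+the field it reads are hypothetical; the locking calls are the APIs named in
+the list.
+
+.. code-block:: c
+
+   #include <linux/mm.h>
+
+   /*
+    * Read vm_flags of the VMA covering @addr, preferring the optimistic
+    * per-VMA read lock and falling back to the mmap read lock when it cannot
+    * be taken. Returns 0 if no VMA covers @addr.
+    */
+   static vm_flags_t example_read_vma_flags(struct mm_struct *mm,
+                                            unsigned long addr)
+   {
+           struct vm_area_struct *vma;
+           vm_flags_t flags = 0;
+
+           vma = lock_vma_under_rcu(mm, addr);
+           if (vma) {
+                   flags = vma->vm_flags;
+                   vma_end_read(vma);
+                   return flags;
+           }
+
+           /* The optimistic VMA read lock failed; fall back. */
+           mmap_read_lock(mm);
+           vma = vma_lookup(mm, addr);
+           if (vma)
+                   flags = vma->vm_flags;
+           mmap_read_unlock(mm);
+
+           return flags;
+   }
+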
+If you want to **write** VMA metadata fields, then things vary depending on the
+field (we explore each VMA field in detail below). For the majority you must:
+
+* Obtain an mmap write lock at the MM granularity via :c:func:`!mmap_write_lock` (or a
+ suitable variant), unlocking it with a matching :c:func:`!mmap_write_unlock` when
+ you're done with the VMA, *and*
+* Obtain a VMA write lock via :c:func:`!vma_start_write` for each VMA you wish to
+ modify, which will be released automatically when :c:func:`!mmap_write_unlock` is
+ called.
+* If you want to be able to write to **any** field, you must also hide the VMA
+ from the reverse mapping by obtaining an **rmap write lock**.
+
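+A correspondingly minimal sketch of the write side, with the actual field
+modification left as a placeholder since the exact locks required depend on the
+field (see the tables below):
+
+.. code-block:: c
+
+   #include <linux/mm.h>
+
+   /*
+    * Illustrative only: take the locks required for most VMA metadata writes.
+    * All VMA write locks taken via vma_start_write() are released
+    * automatically when the mmap write lock is dropped.
+    */
+   static void example_modify_vma(struct mm_struct *mm,
+                                  struct vm_area_struct *vma)
+   {
+           mmap_write_lock(mm);
+           vma_start_write(vma);
+
+           /* ... modify VMA fields here, e.g. via the vm_flags_*() helpers ... */
+
+           mmap_write_unlock(mm);
+   }
+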
+VMA locks are special in that you must obtain an mmap **write** lock **first**
+in order to obtain a VMA **write** lock. A VMA **read** lock however can be
+obtained without any other lock (:c:func:`!lock_vma_under_rcu` will acquire then
+release an RCU lock to look up the VMA for you).
+
+This constrains the impact of writers on readers, as a writer can interact with
+one VMA while a reader interacts with another simultaneously.
+
+.. note:: The primary users of VMA read locks are page fault handlers, which
+ means that without a VMA write lock, page faults will run concurrent with
+ whatever you are doing.
+
+Examining all valid lock states:
+
+.. table::
+
+ ========= ======== ========= ======= ===== =========== ==========
+ mmap lock VMA lock rmap lock Stable? Read? Write most? Write all?
+ ========= ======== ========= ======= ===== =========== ==========
+ \- \- \- N N N N
+ \- R \- Y Y N N
+ \- \- R/W Y Y N N
+ R/W \-/R \-/R/W Y Y N N
+ W W \-/R Y Y Y N
+ W W W Y Y Y Y
+ ========= ======== ========= ======= ===== =========== ==========
+
+.. warning:: While it's possible to obtain a VMA lock while holding an mmap read lock,
+ attempting to do the reverse is invalid as it can result in deadlock - if
+ another task already holds an mmap write lock and attempts to acquire a VMA
+ write lock that will deadlock on the VMA read lock.
+
+All of these locks behave as read/write semaphores in practice, so you can
+obtain either a read or a write lock for each of these.
+
+.. note:: Generally speaking, a read/write semaphore is a class of lock which
+ permits concurrent readers. However a write lock can only be obtained
+ once all readers have left the critical region (and pending readers
+ made to wait).
+
+ This renders read locks on a read/write semaphore concurrent with other
+ readers and write locks exclusive against all others holding the semaphore.
+
+VMA fields
+^^^^^^^^^^
+
+We can subdivide :c:struct:`!struct vm_area_struct` fields by their purpose, which makes it
+easier to explore their locking characteristics:
+
+.. note:: We exclude VMA lock-specific fields here to avoid confusion, as these
+ are in effect an internal implementation detail.
+
+.. table:: Virtual layout fields
+
+ ===================== ======================================== ===========
+ Field Description Write lock
+ ===================== ======================================== ===========
+ :c:member:`!vm_start` Inclusive start virtual address of range mmap write,
+ VMA describes. VMA write,
+ rmap write.
+ :c:member:`!vm_end` Exclusive end virtual address of range mmap write,
+ VMA describes. VMA write,
+ rmap write.
+ :c:member:`!vm_pgoff` Describes the page offset into the file, mmap write,
+ the original page offset within the VMA write,
+ virtual address space (prior to any rmap write.
+ :c:func:`!mremap`), or PFN if a PFN map
+ and the architecture does not support
+ :c:macro:`!CONFIG_ARCH_HAS_PTE_SPECIAL`.
+ ===================== ======================================== ===========
+
+These fields describe the size, start and end of the VMA, and as such cannot be
+modified without first being hidden from the reverse mapping since these fields
+are used to locate VMAs within the reverse mapping interval trees.
+
+.. table:: Core fields
+
+ ============================ ======================================== =========================
+ Field Description Write lock
+ ============================ ======================================== =========================
+ :c:member:`!vm_mm` Containing mm_struct. None - written once on
+ initial map.
+ :c:member:`!vm_page_prot` Architecture-specific page table mmap write, VMA write.
+ protection bits determined from VMA
+ flags.
+ :c:member:`!vm_flags` Read-only access to VMA flags describing N/A
+ attributes of the VMA, in union with
+ private writable
+ :c:member:`!__vm_flags`.
+ :c:member:`!__vm_flags` Private, writable access to VMA flags mmap write, VMA write.
+ field, updated by
+ :c:func:`!vm_flags_*` functions.
+ :c:member:`!vm_file` If the VMA is file-backed, points to a None - written once on
+ struct file object describing the initial map.
+ underlying file, if anonymous then
+ :c:macro:`!NULL`.
+ :c:member:`!vm_ops` If the VMA is file-backed, then either None - Written once on
+ the driver or file-system provides a initial map by
+ :c:struct:`!struct vm_operations_struct` :c:func:`!f_ops->mmap()`.
+ object describing callbacks to be
+ invoked on VMA lifetime events.
+ :c:member:`!vm_private_data` A :c:member:`!void *` field for Handled by driver.
+ driver-specific metadata.
+ ============================ ======================================== =========================
+
+These are the core fields which describe the MM the VMA belongs to and its attributes.
+
+.. table:: Config-specific fields
+
+ ================================= ===================== ======================================== ===============
+ Field Configuration option Description Write lock
+ ================================= ===================== ======================================== ===============
+ :c:member:`!anon_name` CONFIG_ANON_VMA_NAME A field for storing a mmap write,
+ :c:struct:`!struct anon_vma_name` VMA write.
+ object providing a name for anonymous
+ mappings, or :c:macro:`!NULL` if none
+ is set or the VMA is file-backed. The
+ underlying object is reference counted
+ and can be shared across multiple VMAs
+ for scalability.
+ :c:member:`!swap_readahead_info` CONFIG_SWAP Metadata used by the swap mechanism mmap read,
+ to perform readahead. This field is swap-specific
+ accessed atomically. lock.
+ :c:member:`!vm_policy` CONFIG_NUMA :c:type:`!mempolicy` object which mmap write,
+ describes the NUMA behaviour of the VMA write.
+ VMA. The underlying object is reference
+ counted.
+ :c:member:`!numab_state` CONFIG_NUMA_BALANCING :c:type:`!vma_numab_state` object which mmap read,
+ describes the current state of numab-specific
+ NUMA balancing in relation to this VMA. lock.
+ Updated under mmap read lock by
+ :c:func:`!task_numa_work`.
+ :c:member:`!vm_userfaultfd_ctx` CONFIG_USERFAULTFD Userfaultfd context wrapper object of mmap write,
+ type :c:type:`!vm_userfaultfd_ctx`, VMA write.
+ either of zero size if userfaultfd is
+ disabled, or containing a pointer
+ to an underlying
+ :c:type:`!userfaultfd_ctx` object which
+ describes userfaultfd metadata.
+ ================================= ===================== ======================================== ===============
+
+These fields are present or not depending on whether the relevant kernel
+configuration option is set.
+
+.. table:: Reverse mapping fields
+
+ =================================== ========================================= ============================
+ Field Description Write lock
+ =================================== ========================================= ============================
+ :c:member:`!shared.rb` A red/black tree node used, if the mmap write, VMA write,
+ mapping is file-backed, to place the VMA i_mmap write.
+ in the
+ :c:member:`!struct address_space->i_mmap`
+ red/black interval tree.
+ :c:member:`!shared.rb_subtree_last` Metadata used for management of the mmap write, VMA write,
+ interval tree if the VMA is file-backed. i_mmap write.
+ :c:member:`!anon_vma_chain` List of pointers to both forked/CoW’d mmap read, anon_vma write.
+ :c:type:`!anon_vma` objects and
+ :c:member:`!vma->anon_vma` if it is
+ non-:c:macro:`!NULL`.
+ :c:member:`!anon_vma` :c:type:`!anon_vma` object used by When :c:macro:`NULL` and
+ anonymous folios mapped exclusively to setting non-:c:macro:`NULL`:
+ this VMA. Initially set by mmap read, page_table_lock.
+ :c:func:`!anon_vma_prepare` serialised
+ by the :c:macro:`!page_table_lock`. This When non-:c:macro:`NULL` and
+ is set as soon as any page is faulted in. setting :c:macro:`NULL`:
+ mmap write, VMA write,
+ anon_vma write.
+ =================================== ========================================= ============================
+
+These fields are used to both place the VMA within the reverse mapping, and for
+anonymous mappings, to be able to access both related :c:struct:`!struct anon_vma` objects
+and the :c:struct:`!struct anon_vma` in which folios mapped exclusively to this VMA should
+reside.
+
+.. note:: If a file-backed mapping is mapped with :c:macro:`!MAP_PRIVATE` set
+ then it can be in both the :c:type:`!anon_vma` and :c:type:`!i_mmap`
+ trees at the same time, so all of these fields might be utilised at
+ once.
+
+Page tables
+-----------
+
+We won't speak exhaustively on the subject but broadly speaking, page tables map
+virtual addresses to physical ones through a series of page tables, each of
+which contain entries with physical addresses for the next page table level
+(along with flags), and at the leaf level the physical addresses of the
+underlying physical data pages or a special entry such as a swap entry,
+migration entry or other special marker. Offsets into these pages are provided
+by the virtual address itself.
+
+In Linux these are divided into five levels - PGD, P4D, PUD, PMD and PTE. Huge
+pages might eliminate one or two of these levels, but when this is the case we
+typically refer to the leaf level as the PTE level regardless.
+
+.. note:: In instances where the architecture supports fewer page tables than
+ five the kernel cleverly 'folds' page table levels, that is stubbing
+ out functions related to the skipped levels. This allows us to
+ conceptually act as if there were always five levels, even if the
+ compiler might, in practice, eliminate any code relating to missing
+ ones.
+
+There are four key operations typically performed on page tables:
+
+1. **Traversing** page tables - Simply reading page tables in order to traverse
+ them. This only requires that the VMA is kept stable, so a lock which
+ establishes this suffices for traversal (there are also lockless variants
+ which eliminate even this requirement, such as :c:func:`!gup_fast`).
+2. **Installing** page table mappings - Whether creating a new mapping or
+ modifying an existing one in such a way as to change its identity. This
+ requires that the VMA is kept stable via an mmap or VMA lock (explicitly not
+ rmap locks).
+3. **Zapping/unmapping** page table entries - This is what the kernel calls
+ clearing page table mappings at the leaf level only, whilst leaving all page
+ tables in place. This is a very common operation in the kernel performed on
+ file truncation, the :c:macro:`!MADV_DONTNEED` operation via
+ :c:func:`!madvise`, and others. This is performed by a number of functions
+ including :c:func:`!unmap_mapping_range` and :c:func:`!unmap_mapping_pages`.
+ The VMA need only be kept stable for this operation.
+4. **Freeing** page tables - When finally the kernel removes page tables from a
+ userland process (typically via :c:func:`!free_pgtables`) extreme care must
+ be taken to ensure this is done safely, as this logic finally frees all page
+ tables in the specified range, ignoring existing leaf entries (it assumes the
+ caller has both zapped the range and prevented any further faults or
+ modifications within it).
+
+.. note:: Modifying mappings for reclaim or migration is performed under rmap
+ lock as it, like zapping, does not fundamentally modify the identity
+ of what is being mapped.
+
+**Traversing** and **zapping** ranges can be performed holding any one of the
+locks described in the terminology section above - that is the mmap lock, the
+VMA lock or either of the reverse mapping locks.
+
+That is - as long as you keep the relevant VMA **stable** - you are good to go
+ahead and perform these operations on page tables (though internally, kernel
+operations that perform writes also acquire internal page table locks to
+serialise - see the page table implementation detail section for more details).
+
+When **installing** page table entries, the mmap or VMA lock must be held to
+keep the VMA stable. We explore why this is in the page table locking details
+section below.
+
+.. warning:: Page tables are normally only traversed in regions covered by VMAs.
+ If you want to traverse page tables in areas that might not be
+ covered by VMAs, heavier locking is required.
+ See :c:func:`!walk_page_range_novma` for details.
+
+**Freeing** page tables is an entirely internal memory management operation and
+has special requirements (see the page freeing section below for more details).
+
+.. warning:: When **freeing** page tables, it must not be possible for VMAs
+ containing the ranges those page tables map to be accessible via
+ the reverse mapping.
+
+ The :c:func:`!free_pgtables` function removes the relevant VMAs
+ from the reverse mappings, but no other VMAs can be permitted to be
+ accessible and span the specified range.
+
+Lock ordering
+-------------
+
+As we have multiple locks across the kernel which may or may not be taken at the
+same time as explicit mm or VMA locks, we have to be wary of lock inversion, and
+the **order** in which locks are acquired and released becomes very important.
+
+.. note:: Lock inversion occurs when two threads need to acquire multiple locks,
+ but in doing so inadvertently cause a mutual deadlock.
+
+ For example, consider thread 1 which holds lock A and tries to acquire lock B,
+ while thread 2 holds lock B and tries to acquire lock A.
+
+ Both threads are now deadlocked on each other. However, had they attempted to
+ acquire locks in the same order, one would have waited for the other to
+ complete its work and no deadlock would have occurred.
+
+The opening comment in :c:macro:`!mm/rmap.c` describes in detail the required
+ordering of locks within memory management code:
+
+.. code-block::
+
+ inode->i_rwsem (while writing or truncating, not reading or faulting)
+ mm->mmap_lock
+ mapping->invalidate_lock (in filemap_fault)
+ folio_lock
+ hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share, see hugetlbfs below)
+ vma_start_write
+ mapping->i_mmap_rwsem
+ anon_vma->rwsem
+ mm->page_table_lock or pte_lock
+ swap_lock (in swap_duplicate, swap_info_get)
+ mmlist_lock (in mmput, drain_mmlist and others)
+ mapping->private_lock (in block_dirty_folio)
+ i_pages lock (widely used)
+ lruvec->lru_lock (in folio_lruvec_lock_irq)
+ inode->i_lock (in set_page_dirty's __mark_inode_dirty)
+ bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
+ sb_lock (within inode_lock in fs/fs-writeback.c)
+ i_pages lock (widely used, in set_page_dirty,
+ in arch-dependent flush_dcache_mmap_lock,
+ within bdi.wb->list_lock in __sync_single_inode)
+
+There is also a file-system specific lock ordering comment located at the top of
+:c:macro:`!mm/filemap.c`:
+
+.. code-block::
+
+ ->i_mmap_rwsem (truncate_pagecache)
+ ->private_lock (__free_pte->block_dirty_folio)
+ ->swap_lock (exclusive_swap_page, others)
+ ->i_pages lock
+
+ ->i_rwsem
+ ->invalidate_lock (acquired by fs in truncate path)
+ ->i_mmap_rwsem (truncate->unmap_mapping_range)
+
+ ->mmap_lock
+ ->i_mmap_rwsem
+ ->page_table_lock or pte_lock (various, mainly in memory.c)
+ ->i_pages lock (arch-dependent flush_dcache_mmap_lock)
+
+ ->mmap_lock
+ ->invalidate_lock (filemap_fault)
+ ->lock_page (filemap_fault, access_process_vm)
+
+ ->i_rwsem (generic_perform_write)
+ ->mmap_lock (fault_in_readable->do_page_fault)
+
+ bdi->wb.list_lock
+ sb_lock (fs/fs-writeback.c)
+ ->i_pages lock (__sync_single_inode)
+
+ ->i_mmap_rwsem
+ ->anon_vma.lock (vma_merge)
+
+ ->anon_vma.lock
+ ->page_table_lock or pte_lock (anon_vma_prepare and various)
+
+ ->page_table_lock or pte_lock
+ ->swap_lock (try_to_unmap_one)
+ ->private_lock (try_to_unmap_one)
+ ->i_pages lock (try_to_unmap_one)
+ ->lruvec->lru_lock (follow_page_mask->mark_page_accessed)
+ ->lruvec->lru_lock (check_pte_range->folio_isolate_lru)
+ ->private_lock (folio_remove_rmap_pte->set_page_dirty)
+ ->i_pages lock (folio_remove_rmap_pte->set_page_dirty)
+ bdi.wb->list_lock (folio_remove_rmap_pte->set_page_dirty)
+ ->inode->i_lock (folio_remove_rmap_pte->set_page_dirty)
+ bdi.wb->list_lock (zap_pte_range->set_page_dirty)
+ ->inode->i_lock (zap_pte_range->set_page_dirty)
+ ->private_lock (zap_pte_range->block_dirty_folio)
+
+Please check the current state of these comments which may have changed since
+the time of writing of this document.
+
+------------------------------
+Locking Implementation Details
+------------------------------
+
+.. warning:: Locking rules for PTE-level page tables are very different from
+ locking rules for page tables at other levels.
+
+Page table locking details
+--------------------------
+
+In addition to the locks described in the terminology section above, we have
+additional locks dedicated to page tables:
+
+* **Higher level page table locks** - Higher level page tables, that is PGD, P4D
+ and PUD each make use of the process address space granularity
+ :c:member:`!mm->page_table_lock` lock when modified.
+
+* **Fine-grained page table locks** - PMDs and PTEs each have fine-grained locks
+ either kept within the folios describing the page tables or allocated
+  separately and pointed at by the folios if :c:macro:`!ALLOC_SPLIT_PTLOCKS` is
+ set. The PMD spin lock is obtained via :c:func:`!pmd_lock`, however PTEs are
+ mapped into higher memory (if a 32-bit system) and carefully locked via
+ :c:func:`!pte_offset_map_lock`.
+
+These locks represent the minimum required to interact with each page table
+level, but there are further requirements.
+
+Importantly, note that on a **traversal** of page tables, sometimes no such
+locks are taken. However, at the PTE level, at least concurrent page table
+deletion must be prevented (using RCU) and the page table must be mapped into
+high memory, see below.
+
+Whether care is taken on reading the page table entries depends on the
+architecture, see the section on atomicity below.
+
+Locking rules
+^^^^^^^^^^^^^
+
+We establish basic locking rules when interacting with page tables:
+
+* When changing a page table entry the page table lock for that page table
+ **must** be held, except if you can safely assume nobody can access the page
+ tables concurrently (such as on invocation of :c:func:`!free_pgtables`).
+* Reads from and writes to page table entries must be *appropriately*
+ atomic. See the section on atomicity below for details.
+* Populating previously empty entries requires that the mmap or VMA locks are
+ held (read or write), doing so with only rmap locks would be dangerous (see
+ the warning below).
+* As mentioned previously, zapping can be performed while simply keeping the VMA
+ stable, that is holding any one of the mmap, VMA or rmap locks.
+
+.. warning:: Populating previously empty entries is dangerous as, when unmapping
+ VMAs, :c:func:`!vms_clear_ptes` has a window of time between
+ zapping (via :c:func:`!unmap_vmas`) and freeing page tables (via
+ :c:func:`!free_pgtables`), where the VMA is still visible in the
+ rmap tree. :c:func:`!free_pgtables` assumes that the zap has
+ already been performed and removes PTEs unconditionally (along with
+ all other page tables in the freed range), so installing new PTE
+ entries could leak memory and also cause other unexpected and
+ dangerous behaviour.
+
+There are additional rules applicable when moving page tables, which we discuss
+in the section on this topic below.
+
+PTE-level page tables are different from page tables at other levels, and there
+are extra requirements for accessing them:
+
+* On 32-bit architectures, they may be in high memory (meaning they need to be
+ mapped into kernel memory to be accessible).
+* When empty, they can be unlinked and RCU-freed while holding an mmap lock or
+ rmap lock for reading in combination with the PTE and PMD page table locks.
+ In particular, this happens in :c:func:`!retract_page_tables` when handling
+ :c:macro:`!MADV_COLLAPSE`.
+ So accessing PTE-level page tables requires at least holding an RCU read lock;
+ but that only suffices for readers that can tolerate racing with concurrent
+ page table updates such that an empty PTE is observed (in a page table that
+ has actually already been detached and marked for RCU freeing) while another
+ new page table has been installed in the same location and filled with
+ entries. Writers normally need to take the PTE lock and revalidate that the
+ PMD entry still refers to the same PTE-level page table.
+ If the writer does not care whether it is the same PTE-level page table, it
+  can take the PMD lock and revalidate that the contents of the pmd entry still meet
+ the requirements. In particular, this also happens in :c:func:`!retract_page_tables`
+ when handling :c:macro:`!MADV_COLLAPSE`.
+
+To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock` or
+:c:func:`!pte_offset_map` can be used depending on stability requirements.
+These map the page table into kernel memory if required, take the RCU lock, and
+depending on variant, may also look up or acquire the PTE lock.
+See the comment on :c:func:`!__pte_offset_map_lock`.
+
+Atomicity
+^^^^^^^^^
+
+Regardless of page table locks, the MMU hardware concurrently updates accessed
+and dirty bits (perhaps more, depending on architecture). Additionally, page
+table traversal operations may run in parallel (though holding the VMA stable),
+and functionality like GUP-fast locklessly traverses (that is, reads) page
+tables without keeping the VMA stable at all.
+
+When performing a page table traversal and keeping the VMA stable, whether a
+read must be performed once and only once or not depends on the architecture
+(for instance x86-64 does not require any special precautions).
+
+If a write is being performed, or if a read informs whether a write takes place
+(on an installation of a page table entry say, for instance in
+:c:func:`!__pud_install`), special care must always be taken. In these cases we
+can never assume that page table locks give us entirely exclusive access, and
+must retrieve page table entries once and only once.
+
+If we are reading page table entries, then we need only ensure that the compiler
+does not rearrange our loads. This is achieved via :c:func:`!pXXp_get`
+functions - :c:func:`!pgdp_get`, :c:func:`!p4dp_get`, :c:func:`!pudp_get`,
+:c:func:`!pmdp_get`, and :c:func:`!ptep_get`.
+
+Each of these uses :c:func:`!READ_ONCE` to guarantee that the compiler reads
+the page table entry only once.
+
+However, if we wish to manipulate an existing page table entry and care about
+the previously stored data, we must go further and use a hardware atomic
+operation as, for example, in :c:func:`!ptep_get_and_clear`.
+
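+For instance, a sketch of the difference (the helper is hypothetical and
+assumes the caller has already mapped the PTE-level table and holds its lock):
+
+.. code-block:: c
+
+   #include <linux/mm.h>
+
+   static pte_t example_clear_pte(struct mm_struct *mm, unsigned long addr,
+                                  pte_t *ptep)
+   {
+           /* A plain read only needs READ_ONCE() semantics... */
+           pte_t old = ptep_get(ptep);
+
+           if (pte_none(old))
+                   return old;
+
+           /*
+            * ...but consuming the old value while clearing the entry must be
+            * a hardware-atomic read-modify-write, so that concurrent hardware
+            * updates of the accessed/dirty bits are not lost.
+            */
+           return ptep_get_and_clear(mm, addr, ptep);
+   }
+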
+Equally, operations that do not rely on the VMA being held stable, such as
+GUP-fast (see :c:func:`!gup_fast` and its various page table level handlers like
+:c:func:`!gup_fast_pte_range`), must very carefully interact with page table
+entries, using functions such as :c:func:`!ptep_get_lockless` and equivalent for
+higher level page table levels.
+
+Writes to page table entries must also be appropriately atomic, as established
+by :c:func:`!set_pXX` functions - :c:func:`!set_pgd`, :c:func:`!set_p4d`,
+:c:func:`!set_pud`, :c:func:`!set_pmd`, and :c:func:`!set_pte`.
+
+Equally functions which clear page table entries must be appropriately atomic,
+as in :c:func:`!pXX_clear` functions - :c:func:`!pgd_clear`,
+:c:func:`!p4d_clear`, :c:func:`!pud_clear`, :c:func:`!pmd_clear`, and
+:c:func:`!pte_clear`.
+
+Page table installation
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Page table installation is performed with the VMA held stable explicitly by an
+mmap or VMA lock in read or write mode (see the warning in the locking rules
+section for details as to why).
+
+When allocating a P4D, PUD or PMD and setting the relevant entry in the above
+PGD, P4D or PUD, the :c:member:`!mm->page_table_lock` must be held. This is
+acquired in :c:func:`!__p4d_alloc`, :c:func:`!__pud_alloc` and
+:c:func:`!__pmd_alloc` respectively.
+
+.. note:: :c:func:`!__pmd_alloc` actually invokes :c:func:`!pud_lock` and
+   :c:func:`!pud_lockptr` in turn; however, at the time of writing it ultimately
+ references the :c:member:`!mm->page_table_lock`.
+
+Allocating a PTE will either use the :c:member:`!mm->page_table_lock` or, if
+:c:macro:`!USE_SPLIT_PMD_PTLOCKS` is defined, a lock embedded in the PMD
+physical page metadata in the form of a :c:struct:`!struct ptdesc`, acquired by
+:c:func:`!pmd_ptdesc` called from :c:func:`!pmd_lock` and ultimately
+:c:func:`!__pte_alloc`.
+
+Finally, modifying the contents of the PTE requires special treatment, as the
+PTE page table lock must be acquired whenever we want stable and exclusive
+access to entries contained within a PTE, especially when we wish to modify
+them.
+
+This is performed via :c:func:`!pte_offset_map_lock` which carefully checks to
+ensure that the PTE hasn't changed from under us, ultimately invoking
+:c:func:`!pte_lockptr` to obtain a spin lock at PTE granularity contained within
+the :c:struct:`!struct ptdesc` associated with the physical PTE page. The lock
+must be released via :c:func:`!pte_unmap_unlock`.
+
+.. note:: There are some variants on this, such as
+ :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but
+ for brevity we do not explore this. See the comment for
+ :c:func:`!__pte_offset_map_lock` for more details.
+
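+As a minimal sketch of the :c:func:`!pte_offset_map_lock` pattern described
+above (the helper is hypothetical; the mapping and locking calls are the real
+helpers being discussed):
+
+.. code-block:: c
+
+   #include <linux/mm.h>
+
+   /*
+    * Read the PTE mapping @addr while holding the PTE lock.
+    * pte_offset_map_lock() maps the PTE-level table if required, takes the
+    * RCU read lock and the PTE spinlock, and revalidates the PMD entry; it
+    * returns NULL if the PTE-level table has gone away under us.
+    */
+   static pte_t example_read_pte(struct mm_struct *mm, pmd_t *pmd,
+                                 unsigned long addr)
+   {
+           spinlock_t *ptl;
+           pte_t entry = __pte(0);
+           pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+
+           if (!pte)
+                   return entry; /* retry or fall back as appropriate */
+
+           entry = ptep_get(pte);
+           pte_unmap_unlock(pte, ptl);
+
+           return entry;
+   }
+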
+When modifying data in ranges we typically only wish to allocate higher page
+tables as necessary, using these locks to avoid races or overwriting anything,
+and set/clear data at the PTE level as required (for instance when page faulting
+or zapping).
+
+A typical pattern taken when traversing page table entries to install a new
+mapping is to optimistically determine whether the page table entry in the
+table above is empty and, if so, only then acquire the page table lock and
+check again to see whether it was allocated underneath us.
+
+This allows for a traversal with page table locks only being taken when
+required. An example of this is :c:func:`!__pud_alloc`.
+
+At the leaf page table, that is the PTE, we can't entirely rely on this pattern,
+as we have separate PMD and PTE locks and a THP collapse, for instance, might
+have eliminated the PMD entry as well as the PTE from under us.
+
+This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry
+for the PTE, carefully checking it is as expected, before acquiring the
+PTE-specific lock, and then *again* checking that the PMD entry is as expected.
+
+If a THP collapse (or similar) were to occur then the lock on both pages would
+be acquired, so we can ensure this is prevented while the PTE lock is held.
+
+Installing entries this way ensures mutual exclusion on write.
+
+Page table freeing
+^^^^^^^^^^^^^^^^^^
+
+Tearing down page tables themselves is something that requires significant
+care. There must be no way that page tables designated for removal can be
+traversed or referenced by concurrent tasks.
+
+It is insufficient to simply hold an mmap write lock and VMA lock (which will
+prevent racing faults and rmap operations), as a file-backed mapping can be
+truncated under the :c:struct:`!struct address_space->i_mmap_rwsem` alone.
+
+As a result, no VMA which can be accessed via the reverse mapping (either
+through the :c:struct:`!struct anon_vma->rb_root` or the :c:member:`!struct
+address_space->i_mmap` interval trees) can have its page tables torn down.
+
+The operation is typically performed via :c:func:`!free_pgtables`, which assumes
+either the mmap write lock has been taken (as specified by its
+:c:member:`!mm_wr_locked` parameter), or that the VMA is already unreachable.
+
+It carefully removes the VMA from all reverse mappings; however, it is
+important that no new reverse mappings overlap these ranges, and that no route
+remains which permits access to addresses within the range whose page tables
+are being torn down.
+
+Additionally, it assumes that a zap has already been performed and steps have
+been taken to ensure that no further page table entries can be installed between
+the zap and the invocation of :c:func:`!free_pgtables`.
+
+Since it is assumed that all such steps have been taken, page table entries are
+cleared without page table locks (in the :c:func:`!pgd_clear`, :c:func:`!p4d_clear`,
+:c:func:`!pud_clear`, and :c:func:`!pmd_clear` functions).
+
+.. note:: It is possible for leaf page tables to be torn down independently of
+   the page tables above them, as is done by :c:func:`!retract_page_tables`,
+   which is performed under the i_mmap read lock, PMD, and PTE page table
+   locks, without this level of care.
+
+Page table moving
+^^^^^^^^^^^^^^^^^
+
+Some functions manipulate page table levels above PMD (that is PUD, P4D and PGD
+page tables). Most notable of these is :c:func:`!mremap`, which is capable of
+moving higher level page tables.
+
+In these instances, it is required that **all** locks are taken, that is
+the mmap lock, the VMA lock and the relevant rmap locks.
+
+You can observe this in the :c:func:`!mremap` implementation in the functions
+:c:func:`!take_rmap_locks` and :c:func:`!drop_rmap_locks` which perform the rmap
+side of lock acquisition, invoked ultimately by :c:func:`!move_page_tables`.
+
+VMA lock internals
+------------------
+
+Overview
+^^^^^^^^
+
+VMA read locking is entirely optimistic - if the lock is contended or a competing
+write has started, then we do not obtain a read lock.
+
+A VMA **read** lock is obtained by :c:func:`!lock_vma_under_rcu`, which first
+calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
+critical section, then attempts to acquire the VMA read lock via :c:func:`!vma_start_read`,
+before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
+
+In cases where the user already holds the mmap read lock, :c:func:`!vma_start_read_locked`
+and :c:func:`!vma_start_read_locked_nested` can be used. These functions do not
+fail due to lock contention but the caller should still check their return values
+in case they fail for other reasons.
+
+VMA read locks increment the :c:member:`!vma.vm_refcnt` reference counter for their
+duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it via
+:c:func:`!vma_end_read`.
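+
+For example, a fault-handling path might use the optimistic lock as in the
+sketch below (the helper name :c:func:`!example_fault_path` is hypothetical;
+real callers fall back to taking the mmap read lock when the VMA lock cannot
+be obtained):
+
+.. code-block:: c
+
+	static void example_fault_path(struct mm_struct *mm, unsigned long addr)
+	{
+		struct vm_area_struct *vma = lock_vma_under_rcu(mm, addr);
+
+		if (!vma) {
+			/*
+			 * Contended, being written to, or no VMA at this
+			 * address - fall back to the mmap read lock.
+			 */
+			return;
+		}
+
+		/* ... handle the fault against a now-stable VMA ... */
+
+		vma_end_read(vma);
+	}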
+
+VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
+VMA is about to be modified; unlike :c:func:`!vma_start_read`, the lock is always
+acquired. An mmap write lock **must** be held for the duration of the VMA write
+lock; releasing or downgrading the mmap write lock also releases the VMA write
+lock, so there is no :c:func:`!vma_end_write` function.
+
+Note that when write-locking a VMA lock, the :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference counter is
+restored once the VMA sequence number used for serialisation is updated.
+
+This ensures the semantics we require - VMA write locks provide exclusive write
+access to the VMA.
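+
+A minimal sketch of this write-locking discipline (illustrative only, assuming
+``vma`` is a VMA belonging to ``mm`` that we are about to modify) is:
+
+.. code-block:: c
+
+	mmap_write_lock(mm);
+
+	vma_start_write(vma);	/* Waits for any VMA readers to drain. */
+	/* ... modify the VMA ... */
+
+	/* There is no vma_end_write(); this ends all VMA write locks. */
+	mmap_write_unlock(mm);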
+
+Implementation details
+^^^^^^^^^^^^^^^^^^^^^^
+
+The VMA lock mechanism is designed to be a lightweight means of avoiding the use
+of the heavily contended mmap lock. It is implemented using a combination of a
+reference counter and sequence numbers belonging to the containing
+:c:struct:`!struct mm_struct` and the VMA.
+
+Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
+operation, i.e. it tries to acquire a read lock but returns false if it is
+unable to do so. At the end of the read operation, :c:func:`!vma_end_read` is
+called to release the VMA read lock.
+
+Invoking :c:func:`!vma_start_read` requires that :c:func:`!rcu_read_lock` has
+been called first, establishing that we are in an RCU critical section upon VMA
+read lock acquisition. Once acquired, the RCU lock can be released as it is only
+required for lookup. This is abstracted by :c:func:`!lock_vma_under_rcu` which
+is the interface a user should use.
+
+Writing requires the mmap to be write-locked and the VMA lock to be acquired via
+:c:func:`!vma_start_write`; however, the write lock is released by the termination or
+downgrade of the mmap write lock, so no :c:func:`!vma_end_write` is required.
+
+All this is achieved by the use of per-mm and per-VMA sequence counts, which are
+used in order to reduce complexity, especially for operations which write-lock
+multiple VMAs at once.
+
+If the mm sequence count, :c:member:`!mm->mm_lock_seq`, is equal to the VMA
+sequence count, :c:member:`!vma->vm_lock_seq`, then the VMA is write-locked. If
+they differ, then it is not.
+
+Each time the mmap write lock is released in :c:func:`!mmap_write_unlock` or
+:c:func:`!mmap_write_downgrade`, :c:func:`!vma_end_write_all` is invoked which
+also increments :c:member:`!mm->mm_lock_seq` via
+:c:func:`!mm_lock_seqcount_end`.
+
+This way, we ensure that, regardless of the VMA's sequence number, a write lock
+is never incorrectly indicated and that when we release an mmap write lock we
+efficiently release **all** VMA write locks contained within the mmap at the
+same time.
+
+Since the mmap write lock is exclusive against others who hold it, the automatic
+release of any VMA locks on its release makes sense, as you would never want to
+keep VMAs locked across entirely separate write operations. It also maintains
+correct lock ordering.
+
+Each time a VMA read lock is acquired, we increment the :c:member:`!vma.vm_refcnt`
+reference counter and check that the sequence count of the VMA does not match
+that of the mm.
+
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.
+
+Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
+are also RCU safe, so the whole read lock operation is guaranteed to function
+correctly.
+
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, the VMA's sequence number is set to match that of
+the mm. During this entire operation the mmap write lock is held.
+
+This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
+until these are finished and mutual exclusion is achieved.
+
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, the VMA's sequence number will
+indicate the VMA's write-locked state until the mmap write lock is dropped or downgraded.
+
+This clever combination of a reference counter and sequence count allows for
+fast RCU-based per-VMA lock acquisition (especially on page fault, though
+utilised elsewhere) with minimal complexity around lock ordering.
+
+mmap write lock downgrading
+---------------------------
+
+When an mmap write lock is held one has exclusive access to resources within the
+mmap (with the usual caveats about requiring VMA write locks to avoid races with
+tasks holding VMA read locks).
+
+It is then possible to **downgrade** from a write lock to a read lock via
+:c:func:`!mmap_write_downgrade` which, similar to :c:func:`!mmap_write_unlock`,
+implicitly terminates all VMA write locks via :c:func:`!vma_end_write_all`, but
+importantly does not relinquish the mmap lock while downgrading, therefore
+keeping the locked virtual address space stable.
+
+An interesting consequence of this is that downgraded locks are exclusive
+against any other task possessing a downgraded lock (since a racing task would
+have to acquire a write lock first to downgrade it, and the downgraded lock
+prevents a new write lock from being obtained until the original lock is
+released).
+
+For clarity, we map read (R)/downgraded write (D)/write (W) locks against one
+another showing which locks exclude the others:
+
+.. list-table:: Lock exclusivity
+ :widths: 5 5 5 5
+ :header-rows: 1
+ :stub-columns: 1
+
+ * -
+ - R
+ - D
+ - W
+ * - R
+ - N
+ - N
+ - Y
+ * - D
+ - N
+ - Y
+ - Y
+ * - W
+ - Y
+ - Y
+ - Y
+
+Here a Y indicates the locks in the matching row/column are mutually exclusive,
+and N indicates that they are not.
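+
+A sketch of how downgrading is typically used (illustrative only - for
+instance when the destructive part of an operation is complete, but the
+address space must remain stable while clean-up work proceeds) is:
+
+.. code-block:: c
+
+	mmap_write_lock(mm);
+	/* ... modify VMAs, taking VMA write locks as required ... */
+
+	mmap_write_downgrade(mm);	/* Now held for read; VMA write locks end. */
+	/* ... work that only needs the address space to remain stable ... */
+
+	mmap_read_unlock(mm);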
+
+Stack expansion
+---------------
+
+Stack expansion throws up additional complexities in that we cannot permit there
+to be racing page faults; as a result, we invoke :c:func:`!vma_start_write` to
+prevent this in :c:func:`!expand_downwards` or :c:func:`!expand_upwards`.
diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
index 60d350d08362..84ca1dc94e5e 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/mm/slub.rst
@@ -175,6 +175,15 @@ can be influenced by kernel parameters:
``slab_max_order`` to 0, what cause minimum possible order of
slabs allocation.
+``slab_strict_numa``
+ Enables the application of memory policies on each
+ allocation. This results in more accurate placement of
+ objects which may result in the reduction of accesses
+ to remote nodes. The default is to only apply memory
+ policies at the folio level when a new folio is acquired
+ or a folio is retrieved from the lists. Enabling this
+ option reduces the fastpath performance of the slab allocator.
+
SLUB Debug output
=================
diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index e4f6972eb6c0..cc3cd46abd1b 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -4,7 +4,7 @@ Split page table lock
Originally, mm->page_table_lock spinlock protected all page tables of the
mm_struct. But this approach leads to poor page fault scalability of
-multi-threaded applications due high contention on the lock. To improve
+multi-threaded applications due to high contention on the lock. To improve
scalability, split page table lock was introduced.
With split page table lock we have separate per-table lock to serialize
@@ -16,9 +16,13 @@ There are helpers to lock/unlock a table and other accessor functions:
- pte_offset_map_lock()
maps PTE and takes PTE table lock, returns pointer to PTE with
pointer to its PTE table lock, or returns NULL if no PTE table;
- - pte_offset_map_nolock()
+ - pte_offset_map_ro_nolock()
maps PTE, returns pointer to PTE with pointer to its PTE table
lock (not taken), or returns NULL if no PTE table;
+ - pte_offset_map_rw_nolock()
+ maps PTE, returns pointer to PTE with pointer to its PTE table
+ lock (not taken) and the value of its pmd entry, or returns NULL
+ if no PTE table;
- pte_offset_map()
maps PTE, returns pointer to PTE, or returns NULL if no PTE table;
- pte_unmap()
@@ -58,7 +62,7 @@ Support of split page table lock by an architecture
===================================================
There's no need in special enabling of PTE split page table lock: everything
-required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which
+required is done by pagetable_pte_ctor() and pagetable_dtor(), which
must be called on PTE table allocation / freeing.
Make sure the architecture doesn't use slab allocator for page table
@@ -69,7 +73,7 @@ PMD split lock only makes sense if you have more than two page table
levels.
PMD split lock enabling requires pagetable_pmd_ctor() call on PMD table
-allocation and pagetable_pmd_dtor() on freeing.
+allocation and pagetable_dtor() on freeing.
Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index 93c9239b9ebe..0e7f8e4cd2e3 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -31,10 +31,10 @@ Design principles
feature that applies to all dynamic high order allocations in the
kernel)
-get_user_pages and follow_page
-==============================
+get_user_pages and pin_user_pages
+=================================
-get_user_pages and follow_page if run on a hugepage, will return the
+get_user_pages and pin_user_pages if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most GUP users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
@@ -116,14 +116,27 @@ pages:
succeeds on tail pages.
- map/unmap of a PMD entry for the whole THP increment/decrement
- folio->_entire_mapcount and also increment/decrement
- folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount
- goes from -1 to 0 or 0 to -1.
+ folio->_entire_mapcount and folio->_large_mapcount.
+
+ We also maintain the two slots for tracking MM owners (MM ID and
+ corresponding mapcount), and the current status ("maybe mapped shared" vs.
+ "mapped exclusively").
+
+ With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
+ folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount goes
+ from -1 to 0 or 0 to -1.
- map/unmap of individual pages with PTE entry increment/decrement
- page->_mapcount and also increment/decrement folio->_nr_pages_mapped
- when page->_mapcount goes from -1 to 0 or 0 to -1 as this counts
- the number of pages mapped by PTE.
+ folio->_large_mapcount.
+
+ We also maintain the two slots for tracking MM owners (MM ID and
+ corresponding mapcount), and the current status ("maybe mapped shared" vs.
+ "mapped exclusively").
+
+ With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
+ page->_mapcount and increment/decrement folio->_nr_pages_mapped when
+ page->_mapcount goes from -1 to 0 or 0 to -1 as this counts the number
+ of pages mapped by PTE.
split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
@@ -151,8 +164,8 @@ clear where references should go after split: it will stay on the head page.
Note that split_huge_pmd() doesn't have any limitations on refcounting:
pmd can be split at any point and never fails.
-Partial unmap and deferred_split_folio()
-========================================
+Partial unmap and deferred_split_folio() (anon THP only)
+========================================================
Unmapping part of THP (with munmap() or other way) is not going to free
memory immediately. Instead, we detect that a subpage of THP is not in use
@@ -167,3 +180,13 @@ a THP crosses a VMA boundary.
The function deferred_split_folio() is used to queue a folio for splitting.
The splitting itself will happen when we get memory pressure via shrinker
interface.
+
+With CONFIG_PAGE_MAPCOUNT, we reliably detect partial mappings based on
+folio->_nr_pages_mapped.
+
+With CONFIG_NO_PAGE_MAPCOUNT, we detect partial mappings based on the
+average per-page mapcount in a THP: if the average is < 1, an anon THP is
+certainly partially mapped. As long as only a single process maps a THP,
+this detection is reliable. With long-running child processes, there can
+be scenarios where partial mappings can currently not be detected, and
+might need asynchronous detection during memory reclaim in the future.
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index b6a07a26b10d..8d11fe6a0854 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -80,7 +80,7 @@ on an additional LRU list for a few reasons:
(2) We want to be able to migrate unevictable folios between nodes for memory
defragmentation, workload management and memory hotplug. The Linux kernel
can only migrate folios that it can successfully isolate from the LRU
- lists (or "Movable" pages: outside of consideration here). If we were to
+ lists (or "Movable" folios: outside of consideration here). If we were to
maintain folios elsewhere than on an LRU-like list, where they can be
detected by folio_isolate_lru(), we would prevent their migration.
@@ -191,13 +191,13 @@ have become evictable again (via munlock() for example) and have been "rescued"
from the unevictable list. However, there may be situations where we decide,
for the sake of expediency, to leave an unevictable folio on one of the regular
active/inactive LRU lists for vmscan to deal with. vmscan checks for such
-folios in all of the shrink_{active|inactive|page}_list() functions and will
+folios in all of the shrink_{active|inactive|folio}_list() functions and will
"cull" such folios that it encounters: that is, it diverts those folios to the
unevictable list for the memory cgroup and node being scanned.
There may be situations where a folio is mapped into a VM_LOCKED VMA,
but the folio does not have the mlocked flag set. Such folios will make
-it all the way to shrink_active_list() or shrink_page_list() where they
+it all the way to shrink_active_list() or shrink_folio_list() where they
will be detected when vmscan walks the reverse map in folio_referenced()
or try_to_unmap(). The folio is culled to the unevictable list when it
is released by the shrinker.
@@ -230,7 +230,7 @@ In Nick's patch, he used one of the struct page LRU list link fields as a count
of VM_LOCKED VMAs that map the page (Rik van Riel had the same idea three years
earlier). But this use of the link field for a count prevented the management
of the pages on an LRU list, and thus mlocked pages were not migratable as
-isolate_lru_page() could not detect them, and the LRU list link field was not
+folio_isolate_lru() could not detect them, and the LRU list link field was not
available to the migration subsystem.
Nick resolved this by putting mlocked pages back on the LRU list before
@@ -253,8 +253,8 @@ Basic Management
mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages. When such a page has been "noticed" by the memory management subsystem,
-the page is marked with the PG_mlocked flag. This can be manipulated using the
-PageMlocked() functions.
+the folio is marked with the PG_mlocked flag. This can be manipulated using
+folio_set_mlocked() and folio_clear_mlocked() functions.
A PG_mlocked page will be placed on the unevictable list when it is added to
the LRU. Such pages can be "noticed" by memory management in several places:
@@ -269,7 +269,7 @@ the LRU. Such pages can be "noticed" by memory management in several places:
(4) in the fault path and when a VM_LOCKED stack segment is expanded; or
- (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
+ (5) as mentioned above, in vmscan:shrink_folio_list() when attempting to
reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().
mlocked pages become unlocked and rescued from the unevictable list when:
@@ -548,12 +548,12 @@ Some examples of these unevictable pages on the LRU lists are:
(3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
but events left mlock_count too low, so they were munlocked too early.
-vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
+vmscan's shrink_inactive_list() and shrink_folio_list() also divert obviously
unevictable pages found on the inactive lists to the appropriate memory cgroup
and node unevictable list.
rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
-shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
+shrink_folio_list(), and rmap's try_to_unmap_one() called via shrink_folio_list(),
check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
to correct them. Such pages are culled to the unevictable list when released
by the shrinker.
diff --git a/Documentation/mm/vmalloced-kernel-stacks.rst b/Documentation/mm/vmalloced-kernel-stacks.rst
index fc8c67833af6..5bc0f7ceea89 100644
--- a/Documentation/mm/vmalloced-kernel-stacks.rst
+++ b/Documentation/mm/vmalloced-kernel-stacks.rst
@@ -22,7 +22,7 @@ Kernel stack overflows are often hard to debug and make the kernel
susceptible to exploits. Problems could show up at a later time making
it difficult to isolate and root-cause.
-Virtually-mapped kernel stacks with guard pages causes kernel stack
+Virtually mapped kernel stacks with guard pages cause kernel stack
overflows to be caught immediately rather than causing difficult to
diagnose corruptions.
@@ -57,7 +57,7 @@ enable this bool configuration option. The requirements are:
VMAP_STACK
----------
-VMAP_STACK bool configuration option when enabled allocates virtually
+When enabled, the VMAP_STACK bool configuration option allocates virtually
mapped task stacks. This option depends on HAVE_ARCH_VMAP_STACK.
- Enable this if you want the use virtually-mapped kernel stacks
@@ -83,7 +83,7 @@ the latest code base:
Allocation
-----------
-When a new kernel thread is created, thread stack is allocated from
+When a new kernel thread is created, a thread stack is allocated from
virtually contiguous memory pages from the page level allocator. These
pages are mapped into contiguous kernel virtual space with PAGE_KERNEL
protections.
@@ -103,14 +103,14 @@ with PAGE_KERNEL protections.
- This does not address interrupt stacks - according to the original patch
Thread stack allocation is initiated from clone(), fork(), vfork(),
-kernel_thread() via kernel_clone(). Leaving a few hints for searching
-the code base to understand when and how thread stack is allocated.
+kernel_thread() via kernel_clone(). These are a few hints for searching
+the code base to understand when and how a thread stack is allocated.
Bulk of the code is in:
`kernel/fork.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/fork.c>`.
stack_vm_area pointer in task_struct keeps track of the virtually allocated
-stack and a non-null stack_vm_area pointer serves as a indication that the
+stack and a non-null stack_vm_area pointer serves as an indication that the
virtually mapped kernel stacks are enabled.
::
@@ -120,8 +120,8 @@ virtually mapped kernel stacks are enabled.
Stack overflow handling
-----------------------
-Leading and trailing guard pages help detect stack overflows. When stack
-overflows into the guard pages, handlers have to be careful not overflow
+Leading and trailing guard pages help detect stack overflows. When the stack
+overflows into the guard pages, handlers have to be careful not to overflow
the stack again. When handlers are called, it is likely that very little
stack space is left.
@@ -148,6 +148,6 @@ Conclusions
- THREAD_INFO_IN_TASK gets rid of arch-specific thread_info entirely and
simply embed the thread_info (containing only flags) and 'int cpu' into
task_struct.
-- The thread stack can be free'ed as soon as the task is dead (without
+- The thread stack can be freed as soon as the task is dead (without
waiting for RCU) and then, if vmapped stacks are in use, cache the
entire stack for reuse on the same cpu.
diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index 593ede6d314b..b4a55b6569fa 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -180,27 +180,7 @@ this correctly. There is only **one** head ``struct page``, the tail
``struct page`` with ``PG_head`` are fake head ``struct page``. We need an
approach to distinguish between those two different types of ``struct page`` so
that ``compound_head()`` can return the real head ``struct page`` when the
-parameter is the tail ``struct page`` but with ``PG_head``. The following code
-snippet describes how to distinguish between real and fake head ``struct page``.
-
-.. code-block:: c
-
- if (test_bit(PG_head, &page->flags)) {
- unsigned long head = READ_ONCE(page[1].compound_head);
-
- if (head & 1) {
- if (head == (unsigned long)page + 1)
- /* head struct page */
- else
- /* tail struct page */
- } else {
- /* head struct page */
- }
- }
-
-We can safely access the field of the **page[1]** with ``PG_head`` because the
-page is a compound page composed with at least two contiguous pages.
-The implementation refers to ``page_fixed_fake_head()``.
+parameter is the tail ``struct page`` but with ``PG_head``.
Device DAX
==========
diff --git a/Documentation/mm/z3fold.rst b/Documentation/mm/z3fold.rst
deleted file mode 100644
index 25b5935d06c7..000000000000
--- a/Documentation/mm/z3fold.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-======
-z3fold
-======
-
-z3fold is a special purpose allocator for storing compressed pages.
-It is designed to store up to three compressed pages per physical page.
-It is a zbud derivative which allows for higher compression
-ratio keeping the simplicity and determinism of its predecessor.
-
-The main differences between z3fold and zbud are:
-
-* unlike zbud, z3fold allows for up to PAGE_SIZE allocations
-* z3fold can hold up to 3 compressed pages in its page
-* z3fold doesn't export any API itself and is thus intended to be used
- via the zpool API.
-
-To keep the determinism and simplicity, z3fold, just like zbud, always
-stores an integral number of compressed pages per page, but it can store
-up to 3 pages unlike zbud which can store at most 2. Therefore the
-compression ratio goes to around 2.7x while zbud's one is around 1.7x.
-
-Unlike zbud (but like zsmalloc for that matter) z3fold_alloc() does not
-return a dereferenceable pointer. Instead, it returns an unsigned long
-handle which encodes actual location of the allocated object.
-
-Keeping effective compression ratio close to zsmalloc's, z3fold doesn't
-depend on MMU enabled and provides more predictable reclaim behavior
-which makes it a better fit for small and response-critical systems.
diff --git a/Documentation/mm/zsmalloc.rst b/Documentation/mm/zsmalloc.rst
index 76902835e68e..d2bbecd78e14 100644
--- a/Documentation/mm/zsmalloc.rst
+++ b/Documentation/mm/zsmalloc.rst
@@ -27,9 +27,8 @@ Instead, it returns an opaque handle (unsigned long) which encodes actual
location of the allocated object. The reason for this indirection is that
zsmalloc does not keep zspages permanently mapped since that would cause
issues on 32-bit systems where the VA region for kernel space mappings
-is very small. So, before using the allocating memory, the object has to
-be mapped using zs_map_object() to get a usable pointer and subsequently
-unmapped using zs_unmap_object().
+is very small. So, using the allocated memory should be done through the
+proper handle-based APIs.
stat
====