author    Mike Rapoport <rppt@linux.vnet.ibm.com>    2018-04-24 09:40:25 +0300
committer Jonathan Corbet <corbet@lwn.net>    2018-04-27 17:19:37 -0600
commit    6570c785ea8fdb3c6e8f7591d25d33fd519f928b (patch)
tree      086f27221fe6ea04129d44328816758867d67611 /Documentation/vm
parent    064fca37bc0545fec0b5abdf9ce09136b73d7083 (diff)
docs/vm: ksm: reshuffle text between "sysfs" and "design" sections
The description of "max_page_sharing" sysfs attribute includes lots of
implementation details that more naturally belong in the "Design" section.

Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Diffstat (limited to 'Documentation/vm')
-rw-r--r--   Documentation/vm/ksm.rst   51
1 file changed, 30 insertions(+), 21 deletions(-)
diff --git a/Documentation/vm/ksm.rst b/Documentation/vm/ksm.rst
index 0e5a085694e5..00961b8ab03e 100644
--- a/Documentation/vm/ksm.rst
+++ b/Documentation/vm/ksm.rst
@@ -133,31 +133,21 @@ use_zero_pages
max_page_sharing
Maximum sharing allowed for each KSM page. This enforces a
- deduplication limit to avoid the virtual memory rmap lists to
- grow too large. The minimum value is 2 as a newly created KSM
- page will have at least two sharers. The rmap walk has O(N)
- complexity where N is the number of rmap_items (i.e. virtual
- mappings) that are sharing the page, which is in turn capped
- by ``max_page_sharing``. So this effectively spreads the linear
- O(N) computational complexity from rmap walk context over
- different KSM pages. The ksmd walk over the stable_node
- "chains" is also O(N), but N is the number of stable_node
- "dups", not the number of rmap_items, so it has not a
- significant impact on ksmd performance. In practice the best
- stable_node "dup" candidate will be kept and found at the head
- of the "dups" list. The higher this value the faster KSM will
- merge the memory (because there will be fewer stable_node dups
- queued into the stable_node chain->hlist to check for pruning)
- and the higher the deduplication factor will be, but the
- slowest the worst case rmap walk could be for any given KSM
- page. Slowing down the rmap_walk means there will be higher
+ deduplication limit to avoid high latency for virtual memory
+ operations that involve traversal of the virtual mappings that
+ share the KSM page. The minimum value is 2 as a newly created
+ KSM page will have at least two sharers. The higher this value
+ the faster KSM will merge the memory and the higher the
+ deduplication factor will be, but the slower the worst case
+ virtual mappings traversal could be for any given KSM
+ page. Slowing down this traversal means there will be higher
latency for certain virtual memory operations happening during
swapping, compaction, NUMA balancing and page migration, in
turn decreasing responsiveness for the caller of those virtual
memory operations. The scheduler latency of other tasks not
- involved with the VM operations doing the rmap walk is not
- affected by this parameter as the rmap walks are always
- schedule friendly themselves.
+ involved with the VM operations doing the virtual mappings
+ traversal is not affected by this parameter as these
+ traversals are always schedule friendly themselves.
stable_node_chains_prune_millisecs
How frequently to walk the whole list of stable_node "dups"
@@ -240,6 +230,25 @@ if compared to an unlimited list of reverse mappings. It is still
enforced that there cannot be KSM page content duplicates in the
stable tree itself.
+The deduplication limit enforced by ``max_page_sharing`` is required
+to keep the virtual memory rmap lists from growing too large. The rmap
+walk has O(N) complexity where N is the number of rmap_items
+(i.e. virtual mappings) that are sharing the page, which is in turn
+capped by ``max_page_sharing``. So this effectively spreads the linear
+O(N) computational complexity from rmap walk context over different
+KSM pages. The ksmd walk over the stable_node "chains" is also O(N),
+but N is the number of stable_node "dups", not the number of
+rmap_items, so it does not have a significant impact on ksmd performance. In
+practice the best stable_node "dup" candidate will be kept and found
+at the head of the "dups" list.
+
+High values of ``max_page_sharing`` result in faster memory merging
+(because there will be fewer stable_node dups queued into the
+stable_node chain->hlist to check for pruning) and higher
+deduplication factor at the expense of slower worst case for rmap
+walks for any KSM page which can happen during swapping, compaction,
+NUMA balancing and page migration.
+
Reference
---------
.. kernel-doc:: mm/ksm.c
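
For context, the ``max_page_sharing`` knob documented above is exposed in
sysfs and is normally tuned from the shell. A minimal sketch, assuming the
usual /sys/kernel/mm/ksm layout; the value 512 is purely illustrative:

    # Read the current deduplication limit (2 is the minimum; the default
    # is 256 on mainline kernels).
    cat /sys/kernel/mm/ksm/max_page_sharing

    # The kernel may reject the write while pages are already merged, so
    # unmerge first (run=2), raise the limit, then restart ksmd (run=1).
    echo 2   > /sys/kernel/mm/ksm/run
    echo 512 > /sys/kernel/mm/ksm/max_page_sharing
    echo 1   > /sys/kernel/mm/ksm/run

As the text above explains, a larger limit buys a higher deduplication
factor at the cost of a slower worst-case rmap walk.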