author:    Huang Ying <ying.huang@intel.com>  2017-09-06 16:22:52 -0700
committer: Linus Torvalds <torvalds@linux-foundation.org>  2017-09-06 17:27:28 -0700
commit:    fe490cc0fe9e6ee48cc48bb5dc463bc5f0f1428f (patch)
tree:      4a7e76a5404c22bc39b5df4dcc20a070c47f129b /mm/vmscan.c
parent:    bd4c82c22c367e068acb1ec9ec02be2fac3e09e2 (diff)
mm, THP, swap: add THP swapping out fallback counting
When swapping out a THP (Transparent Huge Page), instead of swapping out the THP as a whole, sometimes we have to fall back to splitting the THP into normal pages before swapping, because no free swap clusters are available, the cgroup limit is exceeded, etc. To count the number of fallbacks, a new VM event THP_SWPOUT_FALLBACK is added, and counted when we fall back to splitting the THP.

Link: http://lkml.kernel.org/r/20170724051840.2309-13-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Ross Zwisler <ross.zwisler@intel.com> [for brd.c, zram_drv.c, pmem.c]
Cc: Vishal L Verma <vishal.l.verma@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
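The diffstat below is limited to mm/vmscan.c, so the hunks that actually declare the new event are not shown on this page. A minimal sketch of what that companion change looks like, assuming the THP_SWPOUT item was introduced by the parent commit (the real hunks live in include/linux/vm_event_item.h and mm/vmstat.c):

	/* include/linux/vm_event_item.h -- sketch: the new item sits next to
	 * THP_SWPOUT inside the CONFIG_TRANSPARENT_HUGEPAGE block, and must
	 * stay in the same order as the name table in mm/vmstat.c.
	 */
	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
		...
		THP_SWPOUT,
		THP_SWPOUT_FALLBACK,
	#endif

	/* mm/vmstat.c, vmstat_text[] -- the string userspace reads */
		"thp_swpout",
		"thp_swpout_fallback",

Because the enum member only exists when CONFIG_TRANSPARENT_HUGEPAGE is set, the count_vm_event() call in the vmscan.c hunk below has to sit under the same #ifdef, or a !THP build would fail to compile.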
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--  mm/vmscan.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6fbf707c0ce2..13d711dd8776 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1154,6 +1154,9 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			if (split_huge_page_to_list(page,
 						    page_list))
 				goto activate_locked;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+			count_vm_event(THP_SWPOUT_FALLBACK);
+#endif
 			if (!add_to_swap(page))
 				goto activate_locked;
 		}
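The accounting itself is essentially free in the reclaim path: count_vm_event() is a per-cpu increment (and compiles to a no-op when CONFIG_VM_EVENT_COUNTERS is off), so the fallback path takes no lock for the bookkeeping. Roughly, from include/linux/vmstat.h of this era:

	/* Sketch of the helper used above; vm_event_states is a per-cpu
	 * struct of counters that vmstat folds together when userspace
	 * reads them.
	 */
	static inline void count_vm_event(enum vm_event_item item)
	{
		this_cpu_inc(vm_event_states.event[item]);
	}

After the patch, the number of THPs that had to be split before swap-out shows up as the thp_swpout_fallback line in /proc/vmstat.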