path: root/scripts/bpf_doc.py
author	zhongjinji <zhongjinji@honor.com>	2025-09-16 00:29:46 +0800
committer	Andrew Morton <akpm@linux-foundation.org>	2025-09-21 14:22:35 -0700
commit	5e1953dc71af01fae3d6786e073892ef3eebc3d8 (patch)
tree	34d2915eb33a1b2882e0695af2395745b780054a /scripts/bpf_doc.py
parent	59d4d36158ba3cdbce141d8e9261eea154d4c441 (diff)
mm/oom_kill: the OOM reaper traverses the VMA maple tree in reverse order
Although the oom_reaper is delayed, giving the oom victim a chance to clean up its address space, this might take a while, especially for processes with a large address space footprint. In those cases the oom_reaper might start racing with the dying task and compete for shared resources - e.g. page table lock contention has been observed. Reduce those races by reaping the oom victim from the other end of the address space.

This is also a significant improvement for process_mrelease(). When a process is killed, process_mrelease() is used to reap the killed process and often runs concurrently with the dying task. The test data shows that after applying the patch, lock contention is greatly reduced during the procedure of reaping the killed process.

The test is conducted on arm64. The following basic perf numbers show that applying this patch significantly reduces pte spin lock contention.

Without the patch:

|--99.57%-- oom_reaper
|          |--73.58%-- unmap_page_range
|          |          |--8.67%-- [hit in function]
|          |          |--41.59%-- __pte_offset_map_lock
|          |          |--29.47%-- folio_remove_rmap_ptes
|          |          |--16.11%-- tlb_flush_mmu
|          |--19.94%-- tlb_finish_mmu
|          |--3.21%-- folio_remove_rmap_ptes

With the patch:

|--99.53%-- oom_reaper
|          |--55.77%-- unmap_page_range
|          |          |--20.49%-- [hit in function]
|          |          |--58.30%-- folio_remove_rmap_ptes
|          |          |--11.48%-- tlb_flush_mmu
|          |          |--3.33%-- folio_mark_accessed
|          |--32.21%-- tlb_finish_mmu
|          |--6.93%-- folio_remove_rmap_ptes
|          |--0.69%-- __pte_offset_map_lock

Detailed breakdowns for both scenarios are provided below. The cumulative time for oom_reaper plus exit_mmap(victim) in both cases is also summarized, making the performance improvements clear.
+-------------------------------+----------------+---------------+
| Category                      | Applying patch | Without patch |
+-------------------------------+----------------+---------------+
| Total running time            | 132.6          | 167.1         |
| (exit_mmap + reaper work)     | 72.4 + 60.2    | 90.7 + 76.4   |
+-------------------------------+----------------+---------------+
| Time waiting for pte spinlock | 1.0            | 33.1          |
| (exit_mmap + reaper work)     | 0.4 + 0.6      | 10.0 + 23.1   |
+-------------------------------+----------------+---------------+
| folio_remove_rmap_ptes time   | 42.0           | 41.3          |
| (exit_mmap + reaper work)     | 18.4 + 23.6    | 22.4 + 18.9   |
+-------------------------------+----------------+---------------+

From this report, we can see that:

1. The reduction in total time comes mainly from the decrease in time
   spent on the pte spinlock and other locks.
2. oom_reaper performs more work in some areas, but at the same time
   exit_mmap also handles certain tasks more efficiently, such as
   folio_remove_rmap_ptes.

Here is a more detailed perf report. [1]

Link: https://lkml.kernel.org/r/20250915162946.5515-3-zhongjinji@honor.com
Link: https://lore.kernel.org/all/20250915162619.5133-1-zhongjinji@honor.com/ [1]
Signed-off-by: zhongjinji <zhongjinji@honor.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'scripts/bpf_doc.py')
0 files changed, 0 insertions, 0 deletions