From 8782fb61cc848364e1e1599d76d3c9dd58a1cc06 Mon Sep 17 00:00:00 2001
From: Steven Price
Date: Fri, 2 Sep 2022 12:26:12 +0100
Subject: mm: pagewalk: Fix race between unmap and page walker
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The mmap lock protects the page walker from changes to the page tables
during the walk.  However a read lock is insufficient to protect those
areas which don't have a VMA as munmap() detaches the VMAs before
downgrading to a read lock and actually tearing down PTEs/page tables.

For users of walk_page_range() the solution is to simply call pte_hole()
immediately without checking the actual page tables when a VMA is not
present.  We now never call __walk_page_range() without a valid vma.

For walk_page_range_novma() the locking requirements are tightened to
require the mmap write lock to be taken, and then walking the pgd
directly with 'no_vma' set.

This in turn means that all page walkers either have a valid vma, or
it's that special 'novma' case for page table debugging.  As a result,
all the odd '(!walk->vma && !walk->no_vma)' tests can be removed.

Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Reported-by: Jann Horn
Signed-off-by: Steven Price
Cc: Vlastimil Babka
Cc: Thomas Hellström
Cc: Konstantin Khlebnikov
Cc: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/ptdump.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'mm/ptdump.c')

diff --git a/mm/ptdump.c b/mm/ptdump.c
index eea3d28d173c..8adab455a68b 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -152,13 +152,13 @@ void ptdump_walk_pgd(struct ptdump_state *st, struct mm_struct *mm, pgd_t *pgd)
 {
 	const struct ptdump_range *range = st->range;
 
-	mmap_read_lock(mm);
+	mmap_write_lock(mm);
 	while (range->start != range->end) {
 		walk_page_range_novma(mm, range->start, range->end,
 				      &ptdump_ops, pgd, st);
 		range++;
 	}
-	mmap_read_unlock(mm);
+	mmap_write_unlock(mm);
 
 	/* Flush out the last page */
 	st->note_page(st, 0, -1, 0);
--
cgit
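
To illustrate the locking rule this commit imposes on walk_page_range_novma()
callers, here is a minimal sketch of a hypothetical page-table debugging
walker.  The names dump_my_tables(), my_walk_ops and my_pte_entry() are
invented for illustration and are not part of the commit; only the pattern
(mmap write lock around a novma walk) reflects the change described above.

	/*
	 * Illustrative sketch only: a hypothetical debug walker following
	 * the rule that walk_page_range_novma() must run under the mmap
	 * write lock, because no VMA pins the page tables being walked.
	 */
	#include <linux/mm.h>
	#include <linux/pagewalk.h>

	static int my_pte_entry(pte_t *pte, unsigned long addr,
				unsigned long next, struct mm_walk *walk)
	{
		/* Inspect one PTE; the tables cannot be torn down under us. */
		pr_debug("pte at %lx: %lx\n", addr, (unsigned long)pte_val(*pte));
		return 0;
	}

	static const struct mm_walk_ops my_walk_ops = {
		.pte_entry = my_pte_entry,
	};

	static void dump_my_tables(struct mm_struct *mm, unsigned long start,
				   unsigned long end)
	{
		/*
		 * Write lock, not read lock: munmap() detaches VMAs before
		 * downgrading to a read lock, so only the write lock keeps
		 * VMA-less ranges stable for a novma walk.
		 */
		mmap_write_lock(mm);
		walk_page_range_novma(mm, start, end, &my_walk_ops, NULL, NULL);
		mmap_write_unlock(mm);
	}

Passing a NULL pgd makes the walk start from mm->pgd; ptdump instead passes
an explicit pgd, as seen in the hunk above, but the locking requirement is
the same either way.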