From e9119fb65761f124b31743b598ce04b8f15a6fe3 Mon Sep 17 00:00:00 2001
From: Peter Xu
Date: Thu, 23 Nov 2023 13:02:22 -0500
Subject: mm/gup: fix follow_devmap_p[mu]d() on page==NULL handling

This is a bug found not by any report but by code observation.

When GUP sees a devpmd/devpud and page==NULL is returned, it means a
fault is probably required.  Falling through when page==NULL can cause
unexpected behavior.

Fix both cases by catching the page==NULL cases with no_page_table().

Link: https://lkml.kernel.org/r/20231123180222.1048297-1-peterx@redhat.com
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Fixes: 080dbb618b4b ("mm/follow_page_mask: split follow_page_mask to smaller functions.")
Signed-off-by: Peter Xu
Acked-by: David Hildenbrand
Cc: Dan Williams
Cc: Mel Gorman
Cc: Matthew Wilcox
Cc: Aneesh Kumar K.V
Cc: Christoph Hellwig
Signed-off-by: Andrew Morton
---
 mm/gup.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/gup.c b/mm/gup.c
index 231711efa390..0a5f0e91bfec 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -710,6 +710,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
                 spin_unlock(ptl);
                 if (page)
                         return page;
+                return no_page_table(vma, flags);
         }
         if (likely(!pmd_trans_huge(pmdval)))
                 return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
@@ -758,6 +759,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
                 spin_unlock(ptl);
                 if (page)
                         return page;
+                return no_page_table(vma, flags);
         }
         if (unlikely(pud_bad(*pud)))
                 return no_page_table(vma, flags);
--

From e3b4b1374f87c71e9309efc6149f113cdd17af72 Mon Sep 17 00:00:00 2001
From: David Hildenbrand
Date: Wed, 20 Dec 2023 23:45:02 +0100
Subject: mm: convert page_try_share_anon_rmap() to folio_try_share_anon_rmap_[pte|pmd]()

Let's convert it like we converted all the other rmap functions.  Don't
introduce folio_try_share_anon_rmap_ptes() for now, as we don't have a
user that wants rmap batching in sight.  Pretty easy to add later.

All users are easy to convert -- only ksm.c doesn't use folios yet but
that is left for future work -- so let's just do it in a single shot.

While at it, turn the BUG_ON into a WARN_ON_ONCE.

Note that page_try_share_anon_rmap() so far didn't care about pte/pmd
mappings (no compound parameter).  We're changing that so we can
perform better sanity checks and make the code actually more
readable/consistent.  For example, __folio_rmap_sanity_checks() will
make sure that a PMD range actually falls completely into the folio.

Link: https://lkml.kernel.org/r/20231220224504.646757-39-david@redhat.com
Signed-off-by: David Hildenbrand
Cc: Hugh Dickins
Cc: Matthew Wilcox (Oracle)
Cc: Muchun Song
Cc: Peter Xu
Cc: Ryan Roberts
Cc: Yin Fengwei
Signed-off-by: Andrew Morton
---
 mm/gup.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index 0a5f0e91bfec..df83182ec72d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -177,7 +177,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
         /*
          * Adjust the pincount before re-checking the PTE for changes.
          * This is essentially a smp_mb() and is paired with a memory
-         * barrier in page_try_share_anon_rmap().
+         * barrier in folio_try_share_anon_rmap_*().
          */
         smp_mb__after_atomic();
 
--
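
To see the control flow the first patch fixes in isolation, here is a
minimal user-space C sketch.  The names devmap_lookup(), no_page() and
follow() are hypothetical stand-ins for follow_devmap_pmd() /
follow_devmap_pud(), no_page_table() and follow_pmd_mask(); this
illustrates the fall-through bug pattern, it is not kernel code.

#include <stdio.h>

static int the_page = 42;

/*
 * Stand-in for follow_devmap_pmd()/follow_devmap_pud(): returns the
 * page on success, or NULL when the caller must take the fault path.
 */
static int *devmap_lookup(int present)
{
        return present ? &the_page : NULL;
}

/* Stand-in for no_page_table(): report "no page here, go fault". */
static int *no_page(void)
{
        return NULL;
}

static int *follow(int is_devmap, int present)
{
        if (is_devmap) {
                int *page = devmap_lookup(present);
                if (page)
                        return page;
                /*
                 * The fix: report "fault needed" here, instead of
                 * falling through to the non-devmap path below.
                 */
                return no_page();
        }
        /* Non-devmap path: wrong to reach for a devmap entry. */
        return &the_page;
}

int main(void)
{
        /* A devmap entry whose page is absent must report a fault. */
        printf("%s\n", follow(1, 0) ? "page" : "fault");
        return 0;
}

Without the added return, follow() falls through past the devmap block
and hands back a result from a path never meant to handle devmap
entries -- the "unexpected behavior" the patch describes.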
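
The comment updated by the second patch refers to a pairing of full
memory barriers between the pinning side and the anon-rmap sharing
side.  Below is a hedged C11-atomics sketch of that store-buffering
(Dekker) pattern; try_pin() and try_share() are illustrative names,
and the fences stand in for smp_mb__after_atomic() and the barrier in
folio_try_share_anon_rmap_*() -- a model of the idea, not the kernel
implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int pincount;     /* models the folio pincount */
static atomic_int shared;       /* models the "shared" rmap state */

/* Pinning side, cf. try_grab_folio() in GUP-fast. */
static bool try_pin(void)
{
        /* Adjust the pincount first... */
        atomic_fetch_add_explicit(&pincount, 1, memory_order_relaxed);
        /* ...full barrier, modelling smp_mb__after_atomic()... */
        atomic_thread_fence(memory_order_seq_cst);
        /* ...then re-check whether the other side won the race. */
        if (atomic_load_explicit(&shared, memory_order_relaxed)) {
                atomic_fetch_sub_explicit(&pincount, 1,
                                          memory_order_relaxed);
                return false;   /* back off: page is being shared */
        }
        return true;
}

/* Sharing side, cf. folio_try_share_anon_rmap_*(). */
static bool try_share(void)
{
        atomic_store_explicit(&shared, 1, memory_order_relaxed);
        /* Full barrier paired with the one in try_pin(). */
        atomic_thread_fence(memory_order_seq_cst);
        if (atomic_load_explicit(&pincount, memory_order_relaxed)) {
                atomic_store_explicit(&shared, 0, memory_order_relaxed);
                return false;   /* refuse to share: page is pinned */
        }
        return true;
}

int main(void)
{
        printf("pin: %d\n", try_pin());     /* 1: nothing shared yet */
        printf("share: %d\n", try_share()); /* 0: pincount is raised */
        return 0;
}

Because each side issues a full fence between its own write and its
read of the other side's state, at least one of two racing threads is
guaranteed to observe the other's write, so a page can never end up
both pinned and shared without one side noticing and backing off.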