author     David Hildenbrand <david@redhat.com>          2023-09-13 14:51:08 +0200
committer  Andrew Morton <akpm@linux-foundation.org>     2023-10-04 10:32:27 -0700
commit     fd63908706f79c963946a77b7f352db5431deed5 (patch)
tree       a0519b195b59af33cd23ca2fc863516975b86ac0 /mm/rmap.c
parent     811244a501b967b00fecb1ae906d5dc6329c91e0 (diff)
mm/rmap: drop stale comment in page_add_anon_rmap and hugepage_add_anon_rmap()
Patch series "Anon rmap cleanups". Some cleanups around rmap for anon pages. I'm working on more cleanups also around file rmap -- also to handle the "compound" parameter internally only and to let hugetlb use page_add_file_rmap(), but these changes make sense separately. This patch (of 6): That comment was added in commit 5dbe0af47f8a ("mm: fix kernel BUG at mm/rmap.c:1017!") to document why we can see vma->vm_end getting adjusted concurrently due to a VMA split. However, the optimized locking code was changed again in bf181b9f9d8 ("mm anon rmap: replace same_anon_vma linked list with an interval tree."). ... and later, the comment was changed in commit 0503ea8f5ba7 ("mm/mmap: remove __vma_adjust()") to talk about "vma_merge" although the original issue was with VMA splitting. Let's just remove that comment. Nowadays, it's outdated, imprecise and confusing. Link: https://lkml.kernel.org/r/20230913125113.313322-1-david@redhat.com Link: https://lkml.kernel.org/r/20230913125113.313322-2-david@redhat.com Signed-off-by: David Hildenbrand <david@redhat.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/rmap.c')
-rw-r--r--  mm/rmap.c  2
1 file changed, 0 insertions, 2 deletions
diff --git a/mm/rmap.c b/mm/rmap.c
index 9f795b93cf40..61ad50b9ed9f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1245,7 +1245,6 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
if (likely(!folio_test_ksm(folio))) {
- /* address might be in next vma when migration races vma_merge */
if (first)
__page_set_anon_rmap(folio, page, vma, address,
!!(flags & RMAP_EXCLUSIVE));
@@ -2549,7 +2548,6 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
BUG_ON(!folio_test_locked(folio));
BUG_ON(!anon_vma);
- /* address might be in next vma when migration races vma_merge */
first = atomic_inc_and_test(&folio->_entire_mapcount);
VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page);
VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page);