author	Hugh Dickins <hughd@google.com>	2022-03-02 17:35:30 -0800
committer	Matthew Wilcox (Oracle) <willy@infradead.org>	2022-03-03 12:47:07 -0500
commit	c8263bd605009355edf781f2dd711de633998475 (patch)
tree	1f42b6f88f6932e834bbbcfd4832be674506b3ef /mm/internal.h
parent	47d4f3eeef5f7fd346640fa8b49a942b506d2659 (diff)
mm/munlock: mlock_vma_page() check against VM_SPECIAL
Although mmap_region() and mlock_fixup() take care that VM_LOCKED
is never left set on a VM_SPECIAL vma, there is an interval while
file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
still be set while VM_SPECIAL bits are added: so mlock_vma_page()
should ignore VM_LOCKED while any VM_SPECIAL bits are set.

This showed up as a "Bad page" still mlocked, when vfree()ing pages
which had been vm_inserted by remap_vmalloc_range_partial(): while
release_pages() and __page_cache_release(), and so put_page(), catch
pages still mlocked when freeing (and clear_page_mlock() caught them
when unmapping), the vfree() path is unprepared for them: fix it?
but these pages should not have been mlocked in the first place.

I assume that an mlockall(MCL_FUTURE) had been done in the past; or
maybe the user got to specify MAP_LOCKED on a vmalloc'ing driver mmap.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
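To make the window concrete, a minimal sketch of a vmalloc'ing driver
->mmap handler of the kind the message describes (mydrv_mmap, struct
mydrv and its vmalloc_buf field are invented names for illustration;
remap_vmalloc_range() and vm_insert_page() are the real kernel APIs).
With mlockall(MCL_FUTURE) or MAP_LOCKED in effect, mmap_region() sets
VM_LOCKED on the vma before calling this handler, and only clears it
again after the handler returns with VM_SPECIAL bits set:

struct mydrv {
	void *vmalloc_buf;	/* buffer allocated with vmalloc() */
};

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydrv *drv = file->private_data;

	/*
	 * remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP on the
	 * vma and maps each page with vm_insert_page(), which also marks
	 * the vma VM_MIXEDMAP (a VM_SPECIAL bit).  During this call,
	 * VM_LOCKED inherited from mlockall(MCL_FUTURE) or MAP_LOCKED may
	 * still be set alongside VM_MIXEDMAP: that is the interval in
	 * which mlock_vma_page() must now ignore VM_LOCKED.
	 */
	return remap_vmalloc_range(vma, drv->vmalloc_buf, 0);
}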
Diffstat (limited to 'mm/internal.h')
-rw-r--r--	mm/internal.h	11
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 18af980bb1b8..450a2c8a43f3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -413,8 +413,15 @@ void mlock_page(struct page *page);
static inline void mlock_vma_page(struct page *page,
struct vm_area_struct *vma, bool compound)
{
- /* VM_IO check prevents migration from double-counting during mlock */
- if (unlikely((vma->vm_flags & (VM_LOCKED|VM_IO)) == VM_LOCKED) &&
+ /*
+ * The VM_SPECIAL check here serves two purposes.
+ * 1) VM_IO check prevents migration from double-counting during mlock.
+ * 2) Although mmap_region() and mlock_fixup() take care that VM_LOCKED
+ * is never left set on a VM_SPECIAL vma, there is an interval while
+ * file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
+ * still be set while VM_SPECIAL bits are added: so ignore it then.
+ */
+ if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED) &&
(compound || !PageTransCompound(page)))
mlock_page(page);
}
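For reference, a minimal userspace sketch of the new predicate (flag
values copied from include/linux/mm.h; should_mlock() is an invented
helper for illustration), showing that any VM_SPECIAL bit now
suppresses mlock accounting, where the old check only masked VM_IO:

#include <stdio.h>

/* Flag values as defined in include/linux/mm.h. */
#define VM_PFNMAP	0x00000400UL
#define VM_LOCKED	0x00002000UL
#define VM_IO		0x00004000UL
#define VM_DONTEXPAND	0x00040000UL
#define VM_MIXEDMAP	0x10000000UL
#define VM_SPECIAL	(VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)

static int should_mlock(unsigned long vm_flags)
{
	/* The patched test: VM_LOCKED set and no VM_SPECIAL bit set. */
	return (vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
}

int main(void)
{
	printf("%d\n", should_mlock(VM_LOCKED));		/* 1: plain mlocked vma */
	printf("%d\n", should_mlock(VM_LOCKED | VM_MIXEDMAP));	/* 0: vm_insert_page() window */
	printf("%d\n", should_mlock(VM_LOCKED | VM_IO));	/* 0: old check caught this too */
	return 0;
}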