From 73a11c9658577d876d4c70730719feab3fa328b0 Mon Sep 17 00:00:00 2001
From: David Hildenbrand
Date: Thu, 15 Oct 2020 20:07:46 -0700
Subject: mm/memory_hotplug: inline __offline_pages() into offline_pages()

Patch series "mm/memory_hotplug: online_pages()/offline_pages() cleanups", v2.

These are a bunch of cleanups for online_pages()/offline_pages() and
related code, mostly getting rid of memory hole handling that is no
longer necessary. There is only a single walk_system_ram_range() call
left in offline_pages(), to make sure we don't have any memory holes. I
had some of these patches lying around for a longer time but didn't have
time to polish them.

In addition, the last patch marks all pageblocks of memory to get onlined
MIGRATE_ISOLATE, so pages that have just been exposed to the buddy cannot
get allocated before onlining is complete. Once the heavy lifting is done,
the pageblocks are set to MIGRATE_MOVABLE, such that allocations are
possible.

I played with DIMMs and virtio-mem on x86-64 and didn't spot any
surprises. I verified that the number of isolated pageblocks is correctly
handled when onlining/offlining.

This patch (of 10):

There is only a single user, offline_pages(). Let's inline, to make it
look more similar to online_pages().

Signed-off-by: David Hildenbrand
Signed-off-by: Andrew Morton
Reviewed-by: Oscar Salvador
Reviewed-by: Pankaj Gupta
Acked-by: Michal Hocko
Cc: Wei Yang
Cc: Baoquan He
Cc: Pankaj Gupta
Cc: Charan Teja Reddy
Cc: Dan Williams
Cc: Fenghua Yu
Cc: Logan Gunthorpe
Cc: "Matthew Wilcox (Oracle)"
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Michel Lespinasse
Cc: Mike Rapoport
Cc: Tony Luck
Cc: Mel Gorman
Link: https://lkml.kernel.org/r/20200819175957.28465-1-david@redhat.com
Link: https://lkml.kernel.org/r/20200819175957.28465-2-david@redhat.com
Signed-off-by: Linus Torvalds
---
 mm/memory_hotplug.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

(limited to 'mm/memory_hotplug.c')

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8e9e2d44cdad..ba201ab4c0e4 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1484,11 +1484,10 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
 	return 0;
 }
 
-static int __ref __offline_pages(unsigned long start_pfn,
-		  unsigned long end_pfn)
+int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
-	unsigned long pfn, nr_pages = 0;
-	unsigned long offlined_pages = 0;
+	const unsigned long end_pfn = start_pfn + nr_pages;
+	unsigned long pfn, system_ram_pages = 0, offlined_pages = 0;
 	int ret, node, nr_isolate_pageblock;
 	unsigned long flags;
 	struct zone *zone;
@@ -1505,9 +1504,9 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	 * memory holes PG_reserved, don't need pfn_valid() checks, and can
 	 * avoid using walk_system_ram_range() later.
 	 */
-	walk_system_ram_range(start_pfn, end_pfn - start_pfn, &nr_pages,
+	walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
 			      count_system_ram_pages_cb);
-	if (nr_pages != end_pfn - start_pfn) {
+	if (system_ram_pages != nr_pages) {
 		ret = -EINVAL;
 		reason = "memory holes";
 		goto failed_removal;
@@ -1656,11 +1655,6 @@ failed_removal:
 	return ret;
 }
 
-int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
-{
-	return __offline_pages(start_pfn, start_pfn + nr_pages);
-}
-
 static int check_memblock_offlined_cb(struct memory_block *mem, void *arg)
 {
 	int ret = !is_memblock_offlined(mem);
--
cgit
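
For illustration, below is a minimal user-space sketch of the memory-hole
check that offline_pages() performs after this change: count the System RAM
pages in [start_pfn, start_pfn + nr_pages) and refuse the request if any
page in the range is not RAM. The ram[] table and the walk/count helpers
only mimic the kernel's walk_system_ram_range()/count_system_ram_pages_cb();
they are stand-ins for the resource-tree walk, not real kernel APIs, and the
real offline_pages() goes on to isolate, migrate and offline the range.

/*
 * Hypothetical user-space analogue of the hole check in offline_pages().
 * Build with: cc -Wall -o hole_check hole_check.c
 */
#include <stdio.h>

struct ram_region {
	unsigned long start_pfn;
	unsigned long nr_pages;
};

/* two RAM regions with a hole between pfn 0x1800 and pfn 0x2000 */
static const struct ram_region ram[] = {
	{ .start_pfn = 0x1000, .nr_pages = 0x800 },
	{ .start_pfn = 0x2000, .nr_pages = 0x400 },
};

/* mimics count_system_ram_pages_cb(): accumulate RAM pages via *arg */
static int count_system_ram_pages_cb(unsigned long start_pfn,
				     unsigned long nr_pages, void *arg)
{
	unsigned long *nr_system_ram_pages = arg;

	*nr_system_ram_pages += nr_pages;
	return 0;
}

/* mimics walk_system_ram_range(): call func for RAM overlapping the range */
static int walk_system_ram_range(unsigned long start_pfn,
				 unsigned long nr_pages, void *arg,
				 int (*func)(unsigned long, unsigned long, void *))
{
	unsigned long end_pfn = start_pfn + nr_pages;
	size_t i;

	for (i = 0; i < sizeof(ram) / sizeof(ram[0]); i++) {
		unsigned long s = ram[i].start_pfn;
		unsigned long e = ram[i].start_pfn + ram[i].nr_pages;
		int ret;

		if (s < start_pfn)
			s = start_pfn;
		if (e > end_pfn)
			e = end_pfn;
		if (s >= e)
			continue;
		ret = func(s, e - s, arg);
		if (ret)
			return ret;
	}
	return 0;
}

/* shape of the check offline_pages() now does with (start_pfn, nr_pages) */
static int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long system_ram_pages = 0;

	walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
			      count_system_ram_pages_cb);
	if (system_ram_pages != nr_pages)
		return -1;	/* the kernel returns -EINVAL: "memory holes" */
	/* ... isolation, migration and the actual offlining would follow ... */
	return 0;
}

int main(void)
{
	printf("RAM-only range:      %d\n", offline_pages(0x1000, 0x800));  /* 0 */
	printf("range spanning hole: %d\n", offline_pages(0x1000, 0x1400)); /* -1 */
	return 0;
}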