author    Johannes Weiner <hannes@cmpxchg.org>    2023-08-24 11:38:21 -0400
committer Andrew Morton <akpm@linux-foundation.org>    2023-09-02 15:17:34 -0700
commit    f945116e4e191cd543ecd56d9f13e6331494847c (patch)
tree      465a7a4b332a113aa5539f6abf58f6baca37aac2 /mm/percpu-stats.c
parent    12af80f6c9f2daf07bc3125605dc2e454db321a5 (diff)
mm: page_alloc: remove stale CMA guard code
In the past, movable allocations could be disallowed from CMA through
PF_MEMALLOC_PIN. As CMA pages are funneled through the MOVABLE pcplist,
this required filtering that corner case during allocation, so that
pinnable allocations wouldn't accidentally get a CMA page.

However, since 8e3560d963d2 ("mm: honor PF_MEMALLOC_PIN for all movable
pages"), PF_MEMALLOC_PIN automatically excludes __GFP_MOVABLE. Once
again, MOVABLE implies CMA is allowed. Remove the stale filtering code.

Also remove a stale comment that was introduced as part of the
filtering code, because the filtering let order-0 pages fall through to
the buddy allocator. See 1d91df85f399 ("mm/page_alloc: handle a missing
case for memalloc_nocma_{save/restore} APIs") for context. The comment
has been obsolete since the introduction of the explicit
ALLOC_HIGHATOMIC flag in eb2e2b425c69 ("mm/page_alloc: explicitly
record high-order atomic allocations in alloc_flags").

Link: https://lkml.kernel.org/r/20230824153821.243148-1-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
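For reference, a simplified sketch of the task-flag masking that makes
the removed guard redundant. It is paraphrased from
current_gfp_context() in include/linux/sched/mm.h as of 8e3560d963d2,
with the PF_MEMALLOC_NOIO/NOFS handling elided; illustrative only, not
the verbatim kernel source:

	/*
	 * Strip allocation flags that conflict with the calling
	 * task's PF_MEMALLOC_* scope. After 8e3560d963d2,
	 * PF_MEMALLOC_PIN clears __GFP_MOVABLE, so pinnable
	 * allocations never map to MIGRATE_MOVABLE and therefore
	 * never touch CMA pageblocks.
	 */
	static inline gfp_t current_gfp_context(gfp_t flags)
	{
		unsigned int pflags = READ_ONCE(current->flags);

		if (unlikely(pflags & PF_MEMALLOC_PIN))
			flags &= ~__GFP_MOVABLE;

		return flags;
	}

With __GFP_MOVABLE stripped at the source, such allocations fall out of
the MOVABLE pcplist entirely, so the per-allocation CMA check removed
by this patch no longer had anything to catch.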
Diffstat (limited to 'mm/percpu-stats.c')
0 files changed, 0 insertions, 0 deletions