author	Sergey Senozhatsky <senozhatsky@chromium.org>	2023-03-04 12:48:32 +0900
committer	Andrew Morton <akpm@linux-foundation.org>	2023-03-28 16:20:12 -0700
commit	a40a71e8343e281fedce9747ac1972c5556a982b (patch)
tree	94598ea6adb2dfe094ce715360727103e17ed4b0 /mm/zsmalloc.c
parent	3ccefdea226ba3f3b69f9e868d2b1c9995b56615 (diff)
zsmalloc: remove insert_zspage() ->inuse optimization
Patch series "zsmalloc: fine-grained fullness and new compaction algorithm", v4.

Existing zsmalloc page fullness grouping leads to suboptimal page selection for both zs_malloc() and zs_compact(). This patchset reworks zsmalloc fullness grouping/classification. Additionally, it implements a new compaction algorithm that is expected to use fewer CPU cycles (as it potentially does fewer memcpy()-s in zs_object_copy()). Test (synthetic) results can be seen in patch 0003.

This patch (of 4):

This optimization has no effect. It only ensures that, when a zspage is added to its corresponding fullness list, it is placed before or after the current head depending on whether its "inuse" counter is higher or lower than the head's. The intention was to keep busy zspages at the head, so they could be filled up and moved to the ZS_FULL fullness group more quickly. However, this doesn't work: the "inuse" counter of a zspage can be decremented by obj_free() while the zspage still belongs to the same fullness list, in which case fix_fullness_group() won't change the zspage's position relative to the head. The result is a largely random order of zspages within the fullness list.

For instance, consider a printout of the "inuse" counters of the first few zspages in a class that holds 93 objects per zspage:

  ZS_ALMOST_EMPTY: 36 67 68 64 35 54 63 52

As we can see, the head of the fullness list holds one of the least-used zspages, not the busiest one. Remove this pointless "optimization".

Link: https://lkml.kernel.org/r/20230304034835.2082479-1-senozhatsky@chromium.org
Link: https://lkml.kernel.org/r/20230304034835.2082479-2-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
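For illustration, here is a standalone userspace sketch of the argument above. It is not kernel code: all names (fake_zspage, insert_old_heuristic, the sample "inuse" values) are invented for this example. It replays the removed head-comparison heuristic, then lowers two counters in place the way obj_free() can, and prints the resulting list order.

/*
 * Userspace sketch (NOT kernel code) of why comparing only against the
 * list head cannot keep a fullness list sorted by "inuse".
 */
#include <stdio.h>

#define NR_PAGES 8

struct fake_zspage {
	int inuse;                 /* objects currently allocated */
	struct fake_zspage *next;  /* singly linked "fullness list" */
};

static struct fake_zspage *list_head;

/*
 * The removed heuristic: a zspage busier than the current head goes to
 * the front; anything else is inserted right after the head, mirroring
 * list_add(&zspage->list, &head->list) vs.
 * list_add(&zspage->list, &class->fullness_list[fullness]).
 */
static void insert_old_heuristic(struct fake_zspage *zs)
{
	if (list_head && zs->inuse < list_head->inuse) {
		zs->next = list_head->next;
		list_head->next = zs;
	} else {
		zs->next = list_head;
		list_head = zs;
	}
}

int main(void)
{
	static struct fake_zspage pages[NR_PAGES];
	int inuse[NR_PAGES] = { 40, 55, 42, 60, 47, 52, 45, 50 };
	struct fake_zspage *p;
	int i;

	for (i = 0; i < NR_PAGES; i++) {
		pages[i].inuse = inuse[i];
		insert_old_heuristic(&pages[i]);
	}

	/*
	 * Simulate obj_free() on two members: their counters drop, but as
	 * long as a zspage stays in the same fullness group,
	 * fix_fullness_group() never repositions it within the list.
	 */
	pages[3].inuse -= 25;	/* the former busiest zspage (60 -> 35) */
	pages[5].inuse -= 30;	/* an interior zspage (52 -> 22) */

	printf("fullness list order:");
	for (p = list_head; p; p = p->next)
		printf(" %d", p->inuse);
	printf("\n");
	return 0;
}

With these invented values the sketch prints "fullness list order: 35 50 45 22 47 55 42 40": the order is effectively random and the head holds one of the least-used entries, matching the ZS_ALMOST_EMPTY printout quoted above.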
Diffstat (limited to 'mm/zsmalloc.c')
-rw-r--r--	mm/zsmalloc.c	13
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3aed46ab7e6c..abe0c4d7942d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -762,19 +762,8 @@ static void insert_zspage(struct size_class *class,
 				struct zspage *zspage,
 				enum fullness_group fullness)
 {
-	struct zspage *head;
-
 	class_stat_inc(class, fullness, 1);
-	head = list_first_entry_or_null(&class->fullness_list[fullness],
-					struct zspage, list);
-	/*
-	 * We want to see more ZS_FULL pages and less almost empty/full.
-	 * Put pages with higher ->inuse first.
-	 */
-	if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head))
-		list_add(&zspage->list, &head->list);
-	else
-		list_add(&zspage->list, &class->fullness_list[fullness]);
+	list_add(&zspage->list, &class->fullness_list[fullness]);
 }
 
 /*