author		Chengming Zhou <zhouchengming@bytedance.com>	2024-01-28 13:28:49 +0000
committer	Andrew Morton <akpm@linux-foundation.org>	2024-02-07 21:20:36 -0800
commit		27d3969b47cc38810b3fd65d72231940e8671e6c (patch)
tree		0bbd29258415e4f39bc0c80fe2256ff6582adb7e /mm
parent		79d72c68c58784a3e1cd2378669d51bfd0cb7498 (diff)
mm/zswap: don't return LRU_SKIP if we have dropped lru lock
LRU_SKIP can only be returned if we never dropped the lru lock; otherwise
we need to return LRU_RETRY to restart from the head of the lru list. If
we returned LRU_SKIP after dropping the lock, the iteration could resume
from a cursor position that was freed while the lock was dropped.

Actually we may need to introduce another LRU_STOP to really terminate
the ongoing shrinking scan process when we encounter a warm page already
in the swap cache. The current list_lru implementation doesn't have this
function to early break from __list_lru_walk_one().

Link: https://lkml.kernel.org/r/20240126-zswap-writeback-race-v2-1-b10479847099@bytedance.com
Fixes: b5ba474f3f51 ("zswap: shrink zswap pool based on memory pressure")
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Cc: Chris Li <chriscli@google.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
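[For background, a hedged sketch of the walk contract the fix relies on: a
list_lru walk callback that drops the lru lock must not return LRU_SKIP,
because the walker would resume from a saved cursor that may have been freed
while unlocked. The callback below is a simplified stand-in modeled on
shrink_memcg_cb(); the isolate-then-unlock shape matches the real code, but
the elided work is illustrative, not the actual zswap writeback path.]

	/*
	 * Sketch of the list_lru walk contract (simplified; not the
	 * full zswap code). Called with *lock held by the walker.
	 */
	static enum lru_status example_walk_cb(struct list_head *item,
					       struct list_lru_one *l,
					       spinlock_t *lock, void *arg)
	{
		enum lru_status ret = LRU_REMOVED_RETRY;

		list_lru_isolate(l, item);	/* take @item off the lru first */

		/*
		 * Safe to drop the lock only because we will return
		 * LRU_REMOVED_RETRY or LRU_RETRY: both restart the walk
		 * from the head of the list. LRU_SKIP would instead
		 * resume from the saved cursor, which may have been
		 * freed while the lock was not held.
		 */
		spin_unlock(lock);

		/* ... sleepable work, e.g. writeback, goes here ... */

		spin_lock(lock);
		return ret;
	}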
Diffstat (limited to 'mm')
-rw-r--r--	mm/zswap.c	4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 0a94b197ed32..350dd2fc8159 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -895,10 +895,8 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		 * into the warmer region. We should terminate shrinking (if we're in the dynamic
 		 * shrinker context).
 		 */
-		if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
-			ret = LRU_SKIP;
+		if (writeback_result == -EEXIST && encountered_page_in_swapcache)
 			*encountered_page_in_swapcache = true;
-		}
 
 		goto put_unlock;
 	}
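[Note on the resulting behavior: with the braces and the LRU_SKIP
assignment removed, the -EEXIST path only records the flag for the caller;
ret keeps the LRU_RETRY value assigned earlier in the writeback-failure
branch of shrink_memcg_cb(), so the walk restarts from the head of the lru
list instead of resuming past a cursor that may have been freed while the
lock was dropped.]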