From 280ec6ccb6422aa4a04f9ac4216ddcf055acc95d Mon Sep 17 00:00:00 2001
From: Andrey Konovalov
Date: Tue, 19 Dec 2023 23:28:45 +0100
Subject: kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object

Patch series "kasan: save mempool stack traces".

This series updates KASAN to save alloc and free stack traces for
secondary-level allocators that cache and reuse allocations internally
instead of giving them back to the underlying allocator (e.g. mempool).

As a part of this change, introduce and document a set of KASAN hooks:

  bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
  void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
  bool kasan_mempool_poison_object(void *ptr);
  void kasan_mempool_unpoison_object(void *ptr, size_t size);

and use them in the mempool code.

Besides mempool, skbuff and io_uring also cache allocations and already
use KASAN hooks to poison those. Their code is updated to use the new
mempool hooks.

The new hooks save alloc and free stack traces (for normal kmalloc and
slab objects; stack traces for large kmalloc objects and page_alloc are
not supported by KASAN yet), improve the readability of the users' code,
and also allow the users to prevent double-free and invalid-free bugs;
see the patches for the details.

This patch (of 21):

Rename kasan_slab_free_mempool to kasan_mempool_poison_object.

kasan_slab_free_mempool is a slightly confusing name: it is unclear
whether this function poisons the object when it is freed into mempool
or does something when the object is freed from mempool to the
underlying allocator. The new name also aligns with other
mempool-related KASAN hooks added in the following patches in this
series.

Link: https://lkml.kernel.org/r/cover.1703024586.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/c5618685abb7cdbf9fb4897f565e7759f601da84.1703024586.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov
Cc: Alexander Lobakin
Cc: Alexander Potapenko
Cc: Andrey Ryabinin
Cc: Breno Leitao
Cc: Dmitry Vyukov
Cc: Evgenii Stepanov
Cc: Marco Elver
Signed-off-by: Andrew Morton
---
 io_uring/alloc_cache.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

(limited to 'io_uring')

diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 241245cb54a6..8de0414e8efe 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -16,8 +16,7 @@ static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
 	if (cache->nr_cached < cache->max_cached) {
 		cache->nr_cached++;
 		wq_stack_add_head(&entry->node, &cache->list);
-		/* KASAN poisons object */
-		kasan_slab_free_mempool(entry);
+		kasan_mempool_poison_object(entry);
 		return true;
 	}
 	return false;
--
cgit

From 8ab3b09755d926afc3bdd2fadff7f159310440c2 Mon Sep 17 00:00:00 2001
From: Andrey Konovalov
Date: Tue, 19 Dec 2023 23:29:05 +0100
Subject: io_uring: use mempool KASAN hook

Use the proper kasan_mempool_unpoison_object hook for unpoisoning cached
objects.

A future change might also update io_uring to check the return value of
kasan_mempool_poison_object to prevent double-free and invalid-free
bugs. This proves to be non-trivial with the current way io_uring caches
objects, so this is left out-of-scope of this series.
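As a rough sketch of how the put/get pair is meant to nest once the
series lands (the object cache below is hypothetical, not io_uring's;
only the two kasan_mempool_* hooks and their prototypes come from the
cover letter above):

	#include <linux/kasan.h>

	#define OBJ_CACHE_MAX 64

	struct obj_cache {
		void *slots[OBJ_CACHE_MAX];	/* cached, poisoned objects */
		unsigned int nr_cached;
		size_t elem_size;
	};

	/* Put: keep the allocation but hide it from KASAN while cached. */
	static bool obj_cache_put(struct obj_cache *cache, void *obj)
	{
		if (cache->nr_cached >= OBJ_CACHE_MAX)
			return false;
		/* Fails if KASAN detects a double-free or invalid-free. */
		if (!kasan_mempool_poison_object(obj))
			return false;
		cache->slots[cache->nr_cached++] = obj;
		return true;
	}

	/* Get: unpoison before handing the object back to the caller. */
	static void *obj_cache_get(struct obj_cache *cache)
	{
		void *obj;

		if (!cache->nr_cached)
			return NULL;
		obj = cache->slots[--cache->nr_cached];
		kasan_mempool_unpoison_object(obj, cache->elem_size);
		return obj;
	}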
Link: https://lkml.kernel.org/r/eca18d6cbf676ed784f1a1f209c386808a8087c5.1703024586.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov
Cc: Alexander Lobakin
Cc: Alexander Potapenko
Cc: Andrey Ryabinin
Cc: Breno Leitao
Cc: Dmitry Vyukov
Cc: Evgenii Stepanov
Cc: Marco Elver
Signed-off-by: Andrew Morton
---
 io_uring/alloc_cache.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'io_uring')

diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 8de0414e8efe..bf2fb26a6539 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -33,7 +33,7 @@ static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *c
 		struct io_cache_entry *entry;
 
 		entry = container_of(cache->list.next, struct io_cache_entry, node);
-		kasan_unpoison_range(entry, cache->elem_size);
+		kasan_mempool_unpoison_object(entry, cache->elem_size);
 		cache->list.next = cache->list.next->next;
 		cache->nr_cached--;
 		return entry;
--
cgit
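For completeness, the page-backed hooks listed in the cover letter pair
the same way. Another hypothetical sketch (the single-slot cache and all
names in it are illustrative, not taken from any in-tree user; only the
two kasan_mempool_*_pages calls come from the series):

	#include <linux/kasan.h>
	#include <linux/mm_types.h>

	/* Hypothetical one-slot cache holding a single order-0 page. */
	static struct page *cached_page;

	/* Put: poison the page's memory while it sits in the cache. */
	static bool page_cache_put(struct page *page)
	{
		if (cached_page)
			return false;
		/* Fails if KASAN detects the page being freed twice. */
		if (!kasan_mempool_poison_pages(page, 0))
			return false;
		cached_page = page;
		return true;
	}

	/* Get: unpoison before the page is handed out for reuse. */
	static struct page *page_cache_get(void)
	{
		struct page *page = cached_page;

		if (!page)
			return NULL;
		cached_page = NULL;
		kasan_mempool_unpoison_pages(page, 0);
		return page;
	}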