author    Linus Torvalds <torvalds@linux-foundation.org>  2023-08-29 13:04:15 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2023-08-29 13:04:15 -0700
commit    651a00bc56403161351090a9d7ddbd7095975324 (patch)
tree      921e3e06b419384d76b8ebd7c08b610dbe0d6b7b /mm/Kconfig
parent    9d6b14cd1e993d2ff98df0cef6d935ce6fd4dbec (diff)
parent    3d053e8060430b86bad0854b7c7f03f15be3a7e5 (diff)
Merge tag 'slab-for-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab updates from Vlastimil Babka:
 "This happens to be a small one (due to summer I guess), and all
  hardening related:

   - Randomized kmalloc caches, by GONG, Ruiqi.

     A new opt-in hardening feature to make heap spraying harder. It
     creates multiple (16) copies of kmalloc caches, reducing the chance
     that an attacker-controllable allocation site lands in the same
     slab as e.g. an allocation site with a use-after-free
     vulnerability. The selection of the copy is derived from the
     allocation site address, including a per-boot random seed.

   - Stronger typing for hardened freelists in SLUB, by Jann Horn.

     Introduces a custom type for hardened freelist entries instead of
     "void *", as those are not directly dereferenceable. While
     reviewing this, I've noticed opportunities for further cleanups in
     that code and added those on top."

* tag 'slab-for-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  Randomized slab caches for kmalloc()
  mm/slub: remove freelist_dereference()
  mm/slub: remove redundant kasan_reset_tag() from freelist_ptr calculations
  mm/slub: refactor freelist to use custom type
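The copy-selection scheme described above (allocation site address mixed with a per-boot seed) can be illustrated with a small standalone C sketch. This is not the kernel's implementation: the hash, the constants, and names such as pick_cache_copy and KMALLOC_COPIES are invented for the example; only the number of copies (16) and the caller-address-plus-seed idea come from the commit message.

/* Hedged sketch (userspace C): picking one of 16 cache copies from the
 * allocation site address and a per-boot random seed. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define KMALLOC_COPIES 16	/* number of kmalloc cache copies, per the log */

static uint64_t random_kmalloc_seed;	/* would be set from boot-time entropy */

/* Mix the caller's return address with the seed, reduce to a copy index. */
static unsigned int pick_cache_copy(uint64_t caller_ip)
{
	uint64_t h = caller_ip ^ random_kmalloc_seed;

	/* splitmix64 finalizer: a cheap, well-distributed 64-bit mix. */
	h ^= h >> 30; h *= 0xbf58476d1ce4e5b9ULL;
	h ^= h >> 27; h *= 0x94d049bb133111ebULL;
	h ^= h >> 31;

	return (unsigned int)(h % KMALLOC_COPIES);
}

int main(void)
{
	random_kmalloc_seed = 0x1234abcd5678ef00ULL; /* stand-in for boot entropy */

	/* Distinct "allocation sites" usually land in distinct copies. */
	printf("site A -> copy %u\n", pick_cache_copy(0xffffffff81123456ULL));
	printf("site B -> copy %u\n", pick_cache_copy(0xffffffff81abcdefULL));
	return 0;
}

Because the seed changes on every boot, an attacker cannot precompute which two allocation sites will share a cache copy.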
Diffstat (limited to 'mm/Kconfig')
-rw-r--r--  mm/Kconfig | 17
1 file changed, 17 insertions, 0 deletions
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..4bf7dc5ae5ef 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -337,6 +337,23 @@ config SLUB_CPU_PARTIAL
which requires the taking of locks that may cause latency spikes.
Typically one would choose no for a realtime system.
+config RANDOM_KMALLOC_CACHES
+ default n
+ depends on SLUB && !SLUB_TINY
+ bool "Randomize slab caches for normal kmalloc"
+ help
+ A hardening feature that creates multiple copies of slab caches for
+ normal kmalloc allocations and makes kmalloc randomly pick one based
+ on code address, making it more difficult for attackers to spray
+ vulnerable memory objects on the heap in order to exploit memory
+ vulnerabilities.
+
+ Currently the number of copies is set to 16, a reasonably large value
+ that effectively separates the memory objects allocated for different
+ subsystems or modules into different caches, at the expense of a
+ limited degree of memory and CPU overhead that depends on the
+ hardware and system workload.
+
endmenu # SLAB allocator options
config SHUFFLE_PAGE_ALLOCATOR
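For the second item in the pull request, here is a hedged sketch of what "stronger typing for hardened freelists" means in practice: the obfuscated next-free pointer gets its own struct wrapper so it cannot be dereferenced as a plain void *. The XOR-with-secret scheme below follows the general hardened-freelist idea but is simplified; the field and function names are illustrative, not the kernel's exact code.

/* Hedged sketch (userspace C): a dedicated type for obfuscated freelist
 * entries, so they cannot be misused as ordinary pointers. */
#include <stdint.h>
#include <stdio.h>

/* Obfuscated freelist entry: deliberately not a plain pointer type. */
typedef struct { uintptr_t v; } freeptr_t;

static uintptr_t freelist_secret = (uintptr_t)0xdeadbeefcafef00dULL;

/* Encode: XOR the real pointer with a secret and its storage address. */
static freeptr_t freelist_ptr_encode(void *ptr, uintptr_t ptr_addr)
{
	return (freeptr_t){ (uintptr_t)ptr ^ freelist_secret ^ ptr_addr };
}

/* Decode: the same XOR recovers the original pointer. */
static void *freelist_ptr_decode(freeptr_t fp, uintptr_t ptr_addr)
{
	return (void *)(fp.v ^ freelist_secret ^ ptr_addr);
}

int main(void)
{
	int object;
	uintptr_t slot = 0x1000;	/* pretend address where the entry is stored */

	freeptr_t fp = freelist_ptr_encode(&object, slot);

	/* fp.v is scrambled; only an explicit decode yields the pointer. */
	printf("round-trip ok: %d\n",
	       freelist_ptr_decode(fp, slot) == (void *)&object);
	return 0;
}

Wrapping the value in a struct means code that forgets to decode an entry fails to compile, instead of silently dereferencing a scrambled pointer.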