author     Ard Biesheuvel <ardb@kernel.org>           2020-10-08 17:36:00 +0200
committer  Catalin Marinas <catalin.marinas@arm.com>  2020-11-09 17:15:37 +0000
commit     f4693c2716b35d0846fd45a4ad7db78bfb25efc8 (patch)
tree       724d37924270f740ef229f7152fd75de39d313e0 /Documentation/arm64/kasan-offsets.sh
parent     f8394f232b1eab649ce2df5c5f15b0e528c92091 (diff)
arm64: mm: extend linear region for 52-bit VA configurations
For historical reasons, the arm64 kernel VA space is configured as two
equally sized halves, i.e., on a 48-bit VA build, the VA space is split
into a 47-bit vmalloc region and a 47-bit linear region.

When support for 52-bit virtual addressing was added, this equal split
was kept, resulting in a substantial waste of virtual address space in
the linear region:

                             48-bit VA                     52-bit VA
  0xffff_ffff_ffff_ffff +-------------+               +-------------+
                        |   vmalloc   |               |   vmalloc   |
  0xffff_8000_0000_0000 +-------------+ _PAGE_END(48) +-------------+
                        |   linear    |               :             :
  0xffff_0000_0000_0000 +-------------+               :             :
                        :             :               :             :
                        :  unusable   :               :  currently  :
                        :     by      :               :   unused    :
                        :  hardware   :               :             :
                        :             :               :             :
  0xfff8_0000_0000_0000 :             : _PAGE_END(52) +-------------+
                        :             :               |             |
                        :  unusable   :               |             |
                        :     by      :               |   linear    |
                        :  hardware   :               |   region    |
                        :             :               |             |
                        :             :               |             |
  0xfff0_0000_0000_0000 +-------------+  PAGE_OFFSET  +-------------+

As illustrated above, the 52-bit VA kernel uses 47 bits for the vmalloc
space (as before), to ensure that a single 64k granule kernel image can
support any 64k granule capable system, regardless of whether it supports
the 52-bit virtual addressing extension. However, due to the fact that
the VA space is still split in equal halves, the linear region is only
2^51 bytes in size, wasting almost half of the 52-bit VA space.

Let's fix this, by abandoning the equal split, and simply assigning all
VA space outside of the vmalloc region to the linear region.

The KASAN shadow region is reconfigured so that it ends at the start of
the vmalloc region, and grows downwards. That way, the arrangement of
the vmalloc space (which contains kernel mappings, modules, BPF region,
the vmemmap array etc) is identical between non-KASAN and KASAN builds,
which aids debugging.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Link: https://lore.kernel.org/r/20201008153602.9467-3-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
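To put the figures above in concrete terms, here is a small worked example
(my own sketch, not part of the patch; it only re-derives the 2^51 vs 2^52
arithmetic from the commit message, with the vmalloc half fixed at 47 bits):

#!/bin/sh
# Worked example (not part of the patch): linear region size on a 52-bit
# VA build, before and after the equal split is abandoned.  Sizes in TiB.

va_bits=52
vmalloc_bits=47     # the vmalloc half keeps its 2^47-byte size

total=$(( 1 << (va_bits - 40) ))                      # 4096 TiB of kernel VA
old_linear=$(( 1 << (va_bits - 1 - 40) ))             # equal split: 2048 TiB
new_linear=$(( total - (1 << (vmalloc_bits - 40)) ))  # 4096 - 128 = 3968 TiB

echo "total kernel VA space:     ${total} TiB"
echo "linear region, old split:  ${old_linear} TiB"
echo "linear region, after:      ${new_linear} TiB"

A 48-bit VA build is unaffected by the change: 2^48 - 2^47 is still 2^47
bytes, exactly what the old equal split provided.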
Diffstat (limited to 'Documentation/arm64/kasan-offsets.sh')
-rw-r--r--  Documentation/arm64/kasan-offsets.sh | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
index 2b7a021db363..2dc5f9e18039 100644
--- a/Documentation/arm64/kasan-offsets.sh
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -1,12 +1,11 @@
 #!/bin/sh
 
 # Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
-# start address at the mid-point of the kernel VA space
+# start address at the top of the linear region
 
 print_kasan_offset () {
 	printf "%02d\t" $1
 	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
-			+ (1 << ($1 - 32 - $2)) \
 			- (1 << (64 - 32 - $2)) ))
 }
 
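With the "+ (1 << ($1 - 32 - $2))" term dropped, the script places the end of
the shadow region at _PAGE_END() of the given VA size, i.e. at the start of
the vmalloc region, so the offset is simply that address minus the shadow
size 2^(64 - KASAN_SHADOW_SCALE_SHIFT). A quick sanity check of the new
formula (my own sketch; the example inputs and the resulting constant are
derived here, not quoted from the patch):

#!/bin/sh
# Re-derive one KASAN_SHADOW_OFFSET value with the patched formula:
#   offset = _PAGE_END(vabits) - 2^(64 - scale_shift)
# computed on the high 32 bits only, exactly as the script does.

vabits=48          # example: 48-bit VA build
scale_shift=3      # generic KASAN: one shadow byte covers 8 bytes

hi=$(( (0xffffffff & (-1 << (vabits - 1 - 32))) \
	- (1 << (64 - 32 - scale_shift)) ))
printf "VA_BITS=%d  KASAN_SHADOW_OFFSET=0x%08x00000000\n" "$vabits" "$hi"
# Prints: VA_BITS=48  KASAN_SHADOW_OFFSET=0xdfff800000000000

Adding the shadow size 2^61 back to that offset lands exactly on
0xffff_8000_0000_0000, the start of the vmalloc region in the diagram above.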