path: root/arch/powerpc/mm/mmu_decl.h
author    Jason Yan <yanaijie@huawei.com>    2019-09-20 17:45:41 +0800
committer Michael Ellerman <mpe@ellerman.id.au>    2019-11-13 19:27:41 +1100
commit    6a38ea1d7b94c6c84dbf3f5c969be5e3648d9a70 (patch)
tree      dce9f73350ac9d63614f4f306262084523ecc337 /arch/powerpc/mm/mmu_decl.h
parent    2b0e86cc5de6dabadc2d64cefa429fc227c8a756 (diff)
powerpc/fsl_booke/32: randomize the kernel image offset
Now that we have basic support for relocating the kernel to an appropriate place, we can start to randomize the offset.

Entropy is derived from the kernel banner and the timer, which change with every build and boot. This on its own is not very strong, so additionally the bootloader may pass entropy via the /chosen/kaslr-seed node in the device tree.

We use the first 512M of low memory to randomize the kernel image. This memory is split into 64M zones. The lower 8 bits of the entropy select the index of the 64M zone; we then choose a 16K-aligned offset inside that zone to place the kernel at.

We also check whether the chosen location would overlap with areas such as the dtb, the initrd or the crashkernel region. If no suitable area can be found, KASLR is disabled and the kernel boots from its original location.

Some pieces of code are derived from arch/x86/boot/compressed/kaslr.c or arch/arm64/kernel/kaslr.c, such as rotate_xor(). Credit goes to Kees and Ard.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
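For reference, below is a minimal userspace C sketch of the offset-selection scheme described in the message. rotate_xor() follows the mixing helper credited to the x86/arm64 KASLR code; pick_kaslr_offset(), RANDOM_RANGE, the slot arithmetic and the example banner string are illustrative assumptions rather than the identifiers used by the actual patch, and the dtb/initrd/crashkernel overlap checks are omitted.

    #include <stddef.h>
    #include <stdio.h>

    #define SZ_16K       0x4000UL
    #define SZ_64M       0x4000000UL
    #define RANDOM_RANGE (8 * SZ_64M)   /* first 512M of low memory */

    /*
     * Mix a buffer into the running entropy word, as done by the
     * rotate_xor() helper borrowed from the x86/arm64 KASLR code.
     */
    static unsigned long rotate_xor(unsigned long hash, const void *area,
                                    size_t size)
    {
            const unsigned long *ptr = area;
            size_t i;

            for (i = 0; i < size / sizeof(hash); i++) {
                    /* Rotate by an odd number of bits and XOR. */
                    hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
                    hash ^= ptr[i];
            }
            return hash;
    }

    /*
     * Illustrative offset selection: the low 8 bits of the entropy pick
     * one of the 64M zones (modulo the zone count), the remaining bits
     * pick a 16K-aligned slot inside that zone.
     */
    static unsigned long pick_kaslr_offset(unsigned long entropy)
    {
            unsigned long zones = RANDOM_RANGE / SZ_64M;   /* 8 zones */
            unsigned long zone  = (entropy & 0xff) % zones;
            unsigned long slots = SZ_64M / SZ_16K;
            unsigned long slot  = (entropy >> 8) % slots;

            return zone * SZ_64M + slot * SZ_16K;
    }

    int main(void)
    {
            /* Fold an example banner string into the seed (illustrative). */
            const char banner[] = "Linux version 5.x (example build)";
            unsigned long seed = rotate_xor(0, banner, sizeof(banner));

            printf("candidate kernel offset: 0x%lx\n",
                   pick_kaslr_offset(seed));
            return 0;
    }

In the real patch the seed is additionally mixed with the timebase and, when present, the /chosen/kaslr-seed value from the device tree; the sketch above only shows the zone/slot arithmetic.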
Diffstat (limited to 'arch/powerpc/mm/mmu_decl.h')
0 files changed, 0 insertions, 0 deletions