author		Amit Daniel Kachhap <amit.kachhap@arm.com>	2020-03-13 14:34:58 +0530
committer	Catalin Marinas <catalin.marinas@arm.com>	2020-03-18 09:50:20 +0000
commit		689eae42afd7a916634146edca38463769969184
tree		a25366240fbe164c1641b13e9d27ee2474581de8 /arch/arm64/include/asm/pointer_auth.h
parent		28321582334c261c13b20d7efe634e610b4c100b
arm64: mask PAC bits of __builtin_return_address
Functions like vmap() record how much memory has been allocated by their callers, and callers are identified using __builtin_return_address(). Once the kernel is using pointer authentication the return address will be signed. This means it will not match any kernel symbol, and will vary between threads even for the same caller.

The output of /proc/vmallocinfo in this case may look like:

0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
0x(____ptrval____)-0x(____ptrval____)   20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4

The above three 64-bit values should all resolve to the same symbol name, not to different signed LR values.

Use the pre-processor to add logic that clears the PAC from the value returned to __builtin_return_address() callers. This patch adds a new file, asm/compiler.h, which is transitively included via include/linux/compiler_types.h on the compiler command line, so it is guaranteed to be loaded and users of this macro will not pick up a wrong version.

Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for this purpose and added in this file. The existing macro ptrauth_user_pac_mask is moved from asm/pointer_auth.h.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
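For reference, here is a minimal sketch of the kind of definitions the new asm/compiler.h introduces, inferred from the macro names mentioned above (ptrauth_kernel_pac_mask, ptrauth_clear_pac, ptrauth_user_pac_mask); it is illustrative only and the merged file may differ in detail:

/* Sketch of arch/arm64/include/asm/compiler.h -- illustrative, not verbatim. */
#if defined(CONFIG_ARM64_PTR_AUTH)

/*
 * The EL0/EL1 pointer bits used by a pointer authentication code.
 * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
 */
#define ptrauth_user_pac_mask()		GENMASK_ULL(54, vabits_actual)
#define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)

/* Bit 55 selects TTBR1 (kernel) vs TTBR0 (user) instruction pointers. */
#define ptrauth_clear_pac(ptr)						\
	((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) :	\
			       (ptr & ~ptrauth_user_pac_mask()))

/*
 * Because this header is pulled in from the compiler command line,
 * every user of __builtin_return_address() gets the PAC-stripped value.
 */
#define __builtin_return_address(val)					\
	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))

#endif /* CONFIG_ARM64_PTR_AUTH */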
Diffstat (limited to 'arch/arm64/include/asm/pointer_auth.h')
-rw-r--r--   arch/arm64/include/asm/pointer_auth.h   9
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 833d3f948de0..70c47156e54b 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -68,16 +68,9 @@ static __always_inline void ptrauth_keys_switch_kernel(struct ptrauth_keys_kerne
 
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
 
-/*
- * The EL0 pointer bits used by a pointer authentication code.
- * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
- */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
-
-/* Only valid for EL0 TTBR0 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptr & ~ptrauth_user_pac_mask();
+	return ptrauth_clear_pac(ptr);
 }
 
 #define ptrauth_thread_init_user(tsk)					\
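Since ptrauth_strip_insn_pac() now delegates to ptrauth_clear_pac(), which uses bit 55 to tell kernel pointers from user pointers, it is no longer limited to EL0 TTBR0 instruction pointers. A hypothetical caller, just to illustrate the intended use (report_caller() and signed_pc are made-up names, not part of this patch):

/* Hypothetical example: strip the PAC before symbolizing a sampled PC. */
static void report_caller(unsigned long signed_pc)
{
	unsigned long pc = ptrauth_strip_insn_pac(signed_pc);

	/* pc now matches a kernel (or user) symbol again. */
	pr_debug("caller: %pS\n", (void *)pc);
}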