author	Mark Rutland <mark.rutland@arm.com>	2020-10-05 17:43:03 +0100
committer	Will Deacon <will@kernel.org>	2020-10-05 18:54:49 +0100
commit	353e228eb355be5a65a3c0996c774a0f46737fda (patch)
tree	1bd5619d6b4765e8dd452f4144e20e0bfe04dc94	/arch/arm64/include/asm/archrandom.h
parent	4dafc08d0ba4768e8540f49ab40c3ea26e40d554 (diff)
arm64: initialize per-cpu offsets earlier
The current initialization of the per-cpu offset register is difficult to follow and this initialization is not always early enough for upcoming instrumentation with KCSAN, where the instrumentation callbacks use the per-cpu offset.

To make it possible to support KCSAN, and to simplify reasoning about early bringup code, let's initialize the per-cpu offset earlier, before we run any C code that may consume it. To do so, this patch adds a new init_this_cpu_offset() helper that's called before the usual primary/secondary start functions. For consistency, this is also used to re-initialize the per-cpu offset after the runtime per-cpu areas have been allocated (which can change CPU0's offset).

So that init_this_cpu_offset() isn't subject to any instrumentation that might consume the per-cpu offset, it is marked with noinstr, preventing instrumentation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201005164303.21389-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
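As a rough sketch of what such a helper could look like (assuming the existing arm64 set_my_cpu_offset() accessor and the generic per_cpu_offset()/task_cpu() helpers; not taken verbatim from the patch):

	#include <linux/percpu.h>
	#include <linux/sched.h>

	/*
	 * Hypothetical sketch: program the per-cpu offset register for the
	 * calling CPU from the (possibly runtime-allocated) per-cpu areas.
	 * Marked noinstr so no instrumentation runs before the offset is set.
	 */
	void noinstr init_this_cpu_offset(void)
	{
		unsigned int cpu = task_cpu(current);

		/* Load this CPU's per-cpu area offset into the offset register. */
		set_my_cpu_offset(per_cpu_offset(cpu));
	}

Calling this once before the primary/secondary start functions, and again after setup_per_cpu_areas() has allocated the runtime per-cpu areas, keeps the register consistent with the offsets C code will actually use.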
Diffstat (limited to 'arch/arm64/include/asm/archrandom.h')
0 files changed, 0 insertions, 0 deletions