path: root/arch/arm64/kernel/perf_callchain.c
author     Mark Rutland <mark.rutland@arm.com>        2021-11-29 14:28:43 +0000
committer  Catalin Marinas <catalin.marinas@arm.com>  2021-12-10 14:06:03 +0000
commit     86bcbafcb726b7b11898d2d6269bd665cb27c1b9 (patch)
tree       8905a84b3a9e76824881bb0a2dd2ef009c042021 /arch/arm64/kernel/perf_callchain.c
parent     1e5428b2b7e8aef6a1d10a33fa15df427f087450 (diff)
arm64: Mark __switch_to() as __sched
Unlike most architectures (and only in keeping with powerpc), arm64 has a function that is not marked __sched on the path to our cpu_switch_to() assembly function. It is expected that for a blocked task, in_sched_functions() can be used to skip all functions between the raw context switch assembly and the scheduler functions that call into __switch_to(). This is the behaviour expected by stack_trace_consume_entry_nosched(), and the behaviour we'd like to have so that we can simplify arm64's __get_wchan() implementation to use arch_stack_walk().

This patch marks arm64's __switch_to() as __sched. This *will not* change the behaviour of arm64's current __get_wchan() implementation, which always performs an initial unwind step that skips __switch_to(). This *will* change the behaviour of stack_trace_consume_entry_nosched() and stack_trace_save_tsk() to match their expected behaviour on blocked tasks, skipping all scheduler-internal functions including __switch_to().

Other than the above, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211129142849.3056714-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
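Note: the diff shown on this page is limited to arch/arm64/kernel/perf_callchain.c, which this commit does not touch; the substantive change lands in arch/arm64/kernel/process.c. As a rough sketch of what the commit describes (not the literal patch body), the __sched annotation places a function in the .sched.text section, which is exactly the address range that in_sched_functions() treats as scheduler-internal; the __sched and in_sched_functions() definitions below are the stock kernel ones, and the __switch_to() body is elided and only approximated here:

    /* include/linux/sched/debug.h: __sched places the function in .sched.text */
    #define __sched		__section(".sched.text")

    /* kernel/sched/core.c: addresses in .sched.text count as scheduler-internal */
    int in_sched_functions(unsigned long addr)
    {
    	return in_lock_functions(addr) ||
    		(addr >= (unsigned long)__sched_text_start
    		&& addr < (unsigned long)__sched_text_end);
    }

    /*
     * Sketch of the arm64 change: add __sched to the definition of
     * __switch_to() in arch/arm64/kernel/process.c. The existing body
     * (saving per-task state and calling the cpu_switch_to() assembly
     * routine) is unchanged and only indicated here.
     */
    __notrace_funcgraph __sched
    struct task_struct *__switch_to(struct task_struct *prev,
    				struct task_struct *next)
    {
    	/* ... existing state switching, then the raw context switch ... */
    	return cpu_switch_to(prev, next);
    }

With __switch_to() inside .sched.text, unwinders that consult in_sched_functions() for blocked tasks (such as stack_trace_consume_entry_nosched()) skip it along with the rest of the scheduler, which is what allows __get_wchan() to be built on arch_stack_walk().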
Diffstat (limited to 'arch/arm64/kernel/perf_callchain.c')
0 files changed, 0 insertions, 0 deletions