author     Peter Zijlstra <peterz@infradead.org>    2023-01-12 20:43:35 +0100
committer  Ingo Molnar <mingo@kernel.org>           2023-01-13 11:48:15 +0100
commit     89b3098703bd2aa3237ef10a704e6a5838e6ea69 (patch)
tree       10d881133134e9ead2c7478b2d353a958f5f7c0f /kernel/sched/idle.c
parent     9b461a6faae7b220c32466261965778b10189e54 (diff)
arch/idle: Change arch_cpu_idle() behavior: always exit with IRQs disabled
Currently, arch_cpu_idle() is called with IRQs disabled, but returns with
IRQs enabled. However, the very first thing the generic code does after
calling arch_cpu_idle() is raw_local_irq_disable(). This means that
architectures that can idle with IRQs disabled end up doing a pointless
'enable-disable' dance.

Therefore, push this IRQ disabling into the idle function, so that those
architectures can avoid the pointless IRQ state flipping.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195540.618076436@infradead.org
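To make the calling-convention change concrete, here is a minimal userspace C
sketch of the old versus the new contract. It is only an illustration of the
'enable-disable' dance described above: fake_irq_enable(), fake_irq_disable()
and wait_for_interrupt() are hypothetical stand-ins, not kernel APIs.

    #include <stdio.h>
    #include <stdbool.h>

    static bool irqs_on;                     /* models the CPU IRQ-enable flag */

    static void fake_irq_enable(void)  { irqs_on = true;  puts("IRQs enabled");  }
    static void fake_irq_disable(void) { irqs_on = false; puts("IRQs disabled"); }
    static void wait_for_interrupt(void) { puts("idling"); }

    /* Old contract: entered with IRQs disabled, returns with IRQs enabled. */
    static void arch_cpu_idle_old(void)
    {
            fake_irq_enable();          /* the arch had to enable IRQs ...      */
            wait_for_interrupt();
    }

    /* New contract: entered and left with IRQs disabled. */
    static void arch_cpu_idle_new(void)
    {
            wait_for_interrupt();       /* the arch may idle with IRQs masked   */
    }

    int main(void)
    {
            /* Old generic idle loop: the pointless enable-disable dance. */
            fake_irq_disable();
            arch_cpu_idle_old();
            fake_irq_disable();         /* ... only for generic code to disable
                                           them again right away                */

            /* New generic idle loop: no IRQ state flipping needed. */
            fake_irq_disable();
            arch_cpu_idle_new();
            return 0;
    }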
Diffstat (limited to 'kernel/sched/idle.c')
-rw-r--r--  kernel/sched/idle.c | 2 --
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index e924602ec43b..e9ef66be2870 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -79,7 +79,6 @@ void __weak arch_cpu_idle_dead(void) { }
 void __weak arch_cpu_idle(void)
 {
        cpu_idle_force_poll = 1;
-       raw_local_irq_enable();
 }
 
 /**
@@ -96,7 +95,6 @@ void __cpuidle default_idle_call(void)
 
                ct_cpuidle_enter();
                arch_cpu_idle();
-               raw_local_irq_disable();
                ct_cpuidle_exit();
 
                start_critical_timings();
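
Under the new convention, an architecture's arch_cpu_idle() override no longer
re-enables IRQs before returning. The following is only an illustrative sketch,
not code taken from this patch; cpu_do_idle() stands in for whatever low-power
wait primitive the architecture actually uses.

    /*
     * Hypothetical arch override under the new contract: entered with IRQs
     * disabled, and must return with IRQs disabled as well.
     */
    void arch_cpu_idle(void)
    {
            /*
             * Wait for an interrupt with IRQs masked; the generic idle loop
             * decides when to unmask them. Note the absence of a trailing
             * raw_local_irq_enable().
             */
            cpu_do_idle();
    }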