path: root/kernel/irq/chip.c
Age  Commit message  Author
2025-05-15  genirq: Retain disable depth for managed interrupts across CPU hotplug  (Brian Norris)
Affinity-managed interrupts can be shut down and restarted during CPU hotunplug/plug, which can leave the interrupt in an unexpected state. Specifically:

  1. Interrupt is affine to CPU N
  2. disable_irq() -> depth is 1
  3. CPU N goes offline
  4. irq_shutdown() -> depth is set to 1 (again)
  5. CPU N goes online
  6. irq_startup() -> depth is set to 0 (BUG! The driver expects that the interrupt is still disabled)
  7. enable_irq() -> depth underflow / unbalanced enable_irq() warning

This is only a problem for managed interrupts and CPU hotplug; all other cases, like request()/free()/request(), truly need to reset a possibly stale disable depth value. Provide a startup function which takes the disable depth into account, and invoke it for managed interrupts in the CPU hotplug path. This requires changing irq_shutdown() to do a depth increment instead of setting the depth to 1, which retains the disable depth but is harmless for the other code paths using irq_startup(), which still reset the disable depth unconditionally to keep the original correct behaviour. A kunit test will be added separately to cover some of these aspects. [ tglx: Massaged changelog ] Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Brian Norris <briannorris@chromium.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20250514201353.3481400-2-briannorris@chromium.org
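As a rough standalone C model of the disable-depth bookkeeping described above (hypothetical names, not the kernel implementation), this shows why incrementing the depth on shutdown and decrementing it in the managed-interrupt startup path preserves a driver's disable_irq() across a hotplug cycle:

    #include <assert.h>
    #include <stdio.h>

    /* Models irq_desc::depth: 0 = enabled, >0 = disabled that many times. */
    struct irq_model { unsigned int depth; };

    static void drv_disable(struct irq_model *m) { m->depth++; }
    static void drv_enable(struct irq_model *m)  { assert(m->depth); m->depth--; }

    /* Depth-aware hotplug path: shutdown increments, managed startup decrements. */
    static void hotplug_shutdown(struct irq_model *m) { m->depth++; }
    static void hotplug_startup(struct irq_model *m)  { if (m->depth) m->depth--; }

    int main(void)
    {
        struct irq_model m = { .depth = 0 };

        drv_disable(&m);       /* driver: disable_irq(), depth = 1 */
        hotplug_shutdown(&m);  /* CPU goes offline,      depth = 2 */
        hotplug_startup(&m);   /* CPU comes back online, depth = 1 */
        printf("depth after hotplug cycle: %u\n", m.depth); /* 1: still disabled */
        drv_enable(&m);        /* driver: enable_irq(),  depth = 0 */
        return 0;
    }

Had the old scheme forced the depth to 1 on shutdown and to 0 on startup, the printed depth would be 0 and the final enable_irq() would underflow.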
2025-05-09  genirq: Fix inverted condition in handle_nested_irq()  (Thomas Gleixner)
Marek reported that the rework of handle_nested_irq() introduced an inverted condition, which prevents handling of interrupts. Fix it up. Fixes: 2ef2e13094c7 ("genirq/chip: Rework handle_nested_irq()") Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Closes: https://lore.kernel.org/all/46ed4040-ca11-4157-8bd7-13c04c113734@samsung.com
2025-05-07  genirq/chip: Rework irq_modify_status()  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.650454052@linutronix.de
2025-05-07  genirq/chip: Rework irq_set_handler() variants  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.590753128@linutronix.de
2025-05-07  genirq/chip: Rework irq_set_chip_data()  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.532308759@linutronix.de
2025-05-07  genirq/chip: Rework irq_set_msi_desc_off()  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.473563978@linutronix.de
2025-05-07  genirq/chip: Rework irq_set_handler_data()  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.415072350@linutronix.de
2025-05-07  genirq/chip: Rework irq_set_irq_type()  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.355673840@linutronix.de
2025-05-07  genirq/chip: Rework irq_set_chip()  (Thomas Gleixner)
Use the new guards to get and lock the interrupt descriptor and tidy up the code. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.295400891@linutronix.de
2025-05-07  genirq/chip: Use lock guards where applicable  (Thomas Gleixner)
Convert all lock/unlock pairs to guards and tidy up the code. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.236248749@linutronix.de
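A minimal before/after sketch of such a conversion, assuming the guard() infrastructure from <linux/cleanup.h>/<linux/spinlock.h> and using an illustrative function rather than the exact chip.c code:

    /* Before: explicit lock/unlock pair on every exit path. */
    static void example_old(struct irq_desc *desc)
    {
        raw_spin_lock(&desc->lock);
        if (!desc->action) {
            raw_spin_unlock(&desc->lock);
            return;
        }
        /* ... do the real work ... */
        raw_spin_unlock(&desc->lock);
    }

    /* After: guard() releases the lock automatically when the scope ends. */
    static void example_new(struct irq_desc *desc)
    {
        guard(raw_spinlock)(&desc->lock);
        if (!desc->action)
            return;
        /* ... do the real work ... */
    }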
2025-05-07  genirq/chip: Rework handle_fasteoi_mask_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Note: The mask_irq() operation in the second condition was redundant as the interrupt is already masked right at the beginning of the function. Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.175652864@linutronix.de
2025-05-07  genirq/chip: Rework handle_fasteoi_ack_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.105015800@linutronix.de
2025-05-07  genirq/chip: Rework handle_edge_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065421.045492336@linutronix.de
2025-05-07  genirq/chip: Rework handle_fasteoi_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065420.986002418@linutronix.de
2025-05-07  genirq/chip: Rework handle_level_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065420.926362488@linutronix.de
2025-05-07  genirq/chip: Rework handle_untracked_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065420.865212916@linutronix.de
2025-05-07  genirq/chip: Rework handle_simple_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065420.804683349@linutronix.de
2025-05-07  genirq/chip: Rework handle_nested_irq()  (Thomas Gleixner)
Use the new helpers to decide whether the interrupt should be handled and switch the descriptor locking to guard(). Fixup the kernel doc comment while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065420.744042890@linutronix.de
2025-05-07  genirq/chip: Prepare for code reduction  (Thomas Gleixner)
The interrupt flow handlers have similar patterns to decide whether to handle an interrupt or not. Provide common helper functions to allow removal of duplicated code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20250429065420.682547546@linutronix.de
2025-03-27  Merge tag 'powerpc-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)
Pull powerpc updates from Madhavan Srinivasan:

  - Remove support for IBM Cell Blades
  - SMP support for microwatt platform
  - Support for inline static calls on PPC32
  - Enable pmu selftests for power11 platform
  - Enable hardware trace macro (HTM) hcall support
  - Support for limited address mode capability
  - Changes to RMA size from 512 MB to 768 MB to handle fadump
  - Misc fixes and cleanups

Thanks to Abhishek Dubey, Amit Machhiwal, Andreas Schwab, Arnd Bergmann, Athira Rajeev, Avnish Chouhan, Christophe Leroy, Disha Goel, Donet Tom, Gaurav Batra, Gautam Menghani, Hari Bathini, Kajol Jain, Kees Cook, Mahesh Salgaonkar, Michael Ellerman, Paul Mackerras, Ritesh Harjani (IBM), Sathvika Vasireddy, Segher Boessenkool, Sourabh Jain, Vaibhav Jain, and Venkat Rao Bagalkote.

* tag 'powerpc-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (61 commits)
  powerpc/kexec: fix physical address calculation in clear_utlb_entry()
  crypto: powerpc: Mark ghashp8-ppc.o as an OBJECT_FILES_NON_STANDARD
  powerpc: Fix 'intra_function_call not a direct call' warning
  powerpc/perf: Fix ref-counting on the PMU 'vpa_pmu'
  KVM: PPC: Enable CAP_SPAPR_TCE_VFIO on pSeries KVM guests
  powerpc/prom_init: Fixup missing #size-cells on PowerBook6,7
  powerpc/microwatt: Add SMP support
  powerpc: Define config option for processors with broadcast TLBIE
  powerpc/microwatt: Define an idle power-save function
  powerpc/microwatt: Device-tree updates
  powerpc/microwatt: Select COMMON_CLK in order to get the clock framework
  net: toshiba: Remove reference to PPC_IBM_CELL_BLADE
  net: spider_net: Remove powerpc Cell driver
  cpufreq: ppc_cbe: Remove powerpc Cell driver
  genirq: Remove IRQ_EDGE_EOI_HANDLER
  docs: Remove reference to removed CBE_CPUFREQ_SPU_GOVERNOR
  powerpc: Remove UDBG_RTAS_CONSOLE
  powerpc/io: Use standard barrier macros in io.c
  powerpc/io: Rename _insw_ns() etc.
  powerpc/io: Use generic raw accessors
  ...
2025-03-10  genirq: Make a few functions static  (Thomas Gleixner)
None of these functions are used outside of their source files. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/878qpe2gnx.ffs@tglx
2025-02-26  genirq: Remove IRQ_EDGE_EOI_HANDLER  (Michael Ellerman)
The powerpc Cell blade support, now removed, was the only user of IRQ_EDGE_EOI_HANDLER, so remove it. Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20241218105523.416573-21-mpe@ellerman.id.au
2025-01-15  genirq: Remove IRQ_MOVE_PCNTXT and related code  (Thomas Gleixner)
Now that x86 is converted over to use the IRQCHIP_MOVE_DEFERRED flag, remove IRQ*_MOVE_PCNTXT and related code. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20241210103335.626707225@linutronix.de
2025-01-15  genirq: Provide IRQCHIP_MOVE_DEFERRED  (Thomas Gleixner)
The logic of GENERIC_PENDING_IRQ is backwards for historical reasons. Most interrupt controllers allow moving the interrupt from arbitrary contexts. If GENERIC_PENDING_IRQ is enabled by an architecture to support a chip which requires the affinity change to happen in interrupt context, all other chips have to be marked with IRQF_MOVE_PCNTXT. That's tedious and there is no real good reason for the extra flags in the irq descriptor and the irq data status fields. In fact, the decision whether interrupts can be moved in arbitrary context or not is a property of the interrupt chip. To simplify adoption for RISC-V, provide a new mechanism which is enabled via a config switch and allows adding a flag to irq_chip::flags to request that interrupt affinity changes are deferred. Setting the top level chip of an interrupt evaluates the flag and maps it into the existing logic. The config switch and the various PCNTXT flags are temporary until x86 is converted over to this scheme. This intermediate step also allows trivial backporting of the mechanism to plug the affinity change race of various RISC-V interrupt controllers. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20241210103335.500314436@linutronix.de
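An illustrative, driver-side sketch (assumed names; only IRQCHIP_MOVE_DEFERRED comes from this change) of a chip opting in to deferred affinity changes:

    #include <linux/irq.h>

    static void demo_mask(struct irq_data *d)   { /* mask the line in hardware */ }
    static void demo_unmask(struct irq_data *d) { /* unmask the line in hardware */ }

    static struct irq_chip demo_chip = {
        .name       = "demo",
        .irq_mask   = demo_mask,
        .irq_unmask = demo_unmask,
        /* Ask the core to defer affinity changes to hard interrupt context. */
        .flags      = IRQCHIP_MOVE_DEFERRED,
    };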
2024-09-08  genirq: Use cpumask_intersects()  (Costa Shulyupin)
Replace `cpumask_any_and(a, b) >= nr_cpu_ids` and `cpumask_any_and(a, b) < nr_cpu_ids` with the more readable `!cpumask_intersects(a, b)` and `cpumask_intersects(a, b)` Signed-off-by: Costa Shulyupin <costa.shul@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/all/20240906170142.1135207-1-costa.shul@redhat.com
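A small sketch of the readability difference (illustrative helper, assuming <linux/cpumask.h>):

    #include <linux/cpumask.h>

    static bool affinity_overlaps(const struct cpumask *a, const struct cpumask *b)
    {
        /* Old spelling: return cpumask_any_and(a, b) < nr_cpu_ids; */

        /* New spelling makes the intent explicit: */
        return cpumask_intersects(a, b);
    }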
2023-07-31  genirq: Prevent nested thread vs synchronize_hardirq() deadlock  (Vincent Whitchurch)
There is a possibility of deadlock if synchronize_hardirq() is called when the nested threaded interrupt is active. The following scenario was observed on a uniprocessor PREEMPT_NONE system:

  Thread 1                      Thread 2
  handle_nested_thread()
    Set INPROGRESS
    Call ->thread_fn()
    thread_fn goes to sleep
                                free_irq()
                                  __synchronize_hardirq()
                                  Busy-loop forever waiting for
                                  INPROGRESS to be cleared

The INPROGRESS flag is only supposed to be used for hard interrupt handlers. Remove the incorrect usage in the nested threaded interrupt case and instead re-use the threads_active / wait_for_threads mechanism to wait for nested threaded interrupts to complete. Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613-genirq-nested-v3-1-ae58221143eb@axis.com
2023-06-26  Merge tag 'irqchip-6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core  (Thomas Gleixner)
Pull irqchip updates from Marc Zyngier:

  - A number of Loongson/LoongArch fixes
  - Allow the core code to retrigger an interrupt that has fired while the same interrupt is being handled on another CPU, papering over a GICv3 architecture issue
  - Work around an integration problem on ASR8601, where the CPU numbering isn't representable in the GIC implementation...
  - Add some missing interrupts to the STM32 irqchip
  - A bunch of warning squashing triggered by W=1 builds

Link: https://lore.kernel.org/r/20230623224345.3577134-1-maz@kernel.org
2023-06-16  genirq: Allow fasteoi handler to resend interrupts on concurrent handling  (James Gowans)
There is a class of interrupt controllers out there that, once they have signalled a given interrupt number, will still signal incoming instances of the *same* interrupt despite the original interrupt not having been EOIed yet. As long as the new interrupt reaches the *same* CPU, nothing bad happens, as that CPU still has its interrupts globally disabled, and we will only take the new interrupt once the interrupt has been EOIed. However, things become more "interesting" if an affinity change comes in while the interrupt is being handled. More specifically, while the per-irq lock is being dropped. This results in the affinity change taking place immediately. At this point, there is nothing that prevents the interrupt from firing on the new target CPU. We end up with the interrupt running concurrently on two CPUs, which isn't a good thing. And that's where things become worse: the new CPU notices that the interrupt handling is in progress (irq_may_run() returns false), and *drops the interrupt on the floor*. The whole race looks like this:

          CPU 0              |          CPU 1
  ---------------------------|-----------------------------
  interrupt start            |
  handle_fasteoi_irq         | set_affinity(CPU 1)
    handler                  |
    ...                      | interrupt start
    ...                      | handle_fasteoi_irq -> early out
  handle_fasteoi_irq return  | interrupt end
  interrupt end              |

If the interrupt was an edge, too bad. The interrupt is lost, and the system will eventually die one way or another. Not great. A way to avoid this situation is to detect this problem at the point we handle the interrupt on the new target. Instead of dropping the interrupt, use the resend mechanism to force it to be replayed. Also, in order to limit the impact of this workaround to the pathetic architectures that require it, gate it behind a new irq flag aptly named IRQD_RESEND_WHEN_IN_PROGRESS. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: James Gowans <jgowans@amazon.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Marc Zyngier <maz@kernel.org> Cc: KarimAllah Raslan <karahmed@amazon.com> Cc: Yipeng Zou <zouyipeng@huawei.com> Cc: Zhang Jianhua <chris.zjh@huawei.com> [maz: reworded commit message] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20230608120021.3273400-3-jgowans@amazon.com
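For the affected irqchips, opting in roughly looks like the sketch below; the domain callback and names are illustrative, and only the flag and its setter follow this change:

    #include <linux/irq.h>
    #include <linux/irqdomain.h>

    static int demo_domain_alloc(struct irq_domain *d, unsigned int virq,
                                 unsigned int nr_irqs, void *arg)
    {
        /* ... usual per-interrupt setup for virq ... */

        /*
         * This hardware re-signals the same interrupt before it is EOIed,
         * so ask the core to resend a concurrent occurrence instead of
         * dropping it on the floor.
         */
        irqd_set_resend_when_in_progress(irq_get_irq_data(virq));
        return 0;
    }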
2023-05-24  genirq: Use hlist for managing resend handlers  (Shanker Donthineni)
The current implementation utilizes a bitmap for managing interrupt resend handlers, which is allocated based on the SPARSE_IRQ/NR_IRQS macros. However, this method may not efficiently utilize memory during runtime, particularly when IRQ_BITMAP_BITS is large. Address this issue by using an hlist to manage interrupt resend handlers instead of relying on a static bitmap memory allocation. Additionally, a new function, clear_irq_resend(), is introduced and called from irq_shutdown to ensure a graceful teardown of the interrupt. Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230519134902.1495562-2-sdonthineni@nvidia.com
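A generic sketch of the bookkeeping pattern (standalone, assumed names rather than the actual irq_desc fields): each descriptor embeds an hlist_node and pending entries are chained on one list, so memory scales with the number of descriptors instead of IRQ_BITMAP_BITS:

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/cleanup.h>

    /* Hypothetical descriptor carrying a list node instead of a bitmap bit. */
    struct demo_desc {
        unsigned int      irq;
        struct hlist_node resend_node;  /* INIT_HLIST_NODE()'d at setup time */
    };

    static HLIST_HEAD(demo_resend_list);
    static DEFINE_RAW_SPINLOCK(demo_resend_lock);

    static void demo_mark_pending(struct demo_desc *desc)
    {
        guard(raw_spinlock_irqsave)(&demo_resend_lock);
        if (hlist_unhashed(&desc->resend_node))
            hlist_add_head(&desc->resend_node, &demo_resend_list);
    }

    static void demo_clear_pending(struct demo_desc *desc)
    {
        guard(raw_spinlock_irqsave)(&demo_resend_lock);
        if (!hlist_unhashed(&desc->resend_node))
            hlist_del_init(&desc->resend_node);
    }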
2022-12-05  genirq/irqdomain: Rename irq_domain::dev to irq_domain::pm_dev  (Thomas Gleixner)
irq_domain::dev is a misnomer as it's usually the rule that a device pointer points to something which is directly related to the instance. irq_domain::dev can point to some other device for power management to ensure that this underlying device is not powered down when an interrupt is allocated. The upcoming per device MSI domains really require a pointer to the device which instantiated the irq domain and not to some random other device which is required for power management down the chain. Rename irq_domain::dev to irq_domain::pm_dev and fixup the few sites which use that pointer. Conversion was done with the help of coccinelle. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20221124230313.574541683@linutronix.de
2022-07-10  Merge branch irq/plic-masking into irq/irqchip-next  (Marc Zyngier)
* irq/plic-masking:

  SiFive PLIC optimisations from Samuel Holland:

  "This series removes the spinlocks and cpumask operations from the PLIC driver's hot path. As far as I know, using the priority to mask interrupts is an intended usage and will work on all existing implementations. [...]"

  irqchip/sifive-plic: Separate the enable and mask operations
  irqchip/sifive-plic: Make better use of the effective affinity mask
  PCI: hv: Take a const cpumask in hv_compose_msi_req_get_cpu()
  genirq: Provide an IRQ affinity mask in non-SMP configs
  genirq: Return a const cpumask from irq_data_get_affinity_mask
  genirq: Add and use an irq_data_update_affinity helper
  genirq: Refactor accessors to use irq_data_get_affinity_mask
  genirq: Drop redundant irq_init_effective_affinity
  genirq: GENERIC_IRQ_EFFECTIVE_AFF_MASK depends on SMP
  genirq: GENERIC_IRQ_IPI depends on SMP
  irqchip/mips-gic: Only register IPI domain when SMP is enabled

Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-07-07  genirq: Return a const cpumask from irq_data_get_affinity_mask  (Samuel Holland)
Now that the irq_data_update_affinity helper exists, enforce its use by returning a const cpumask from irq_data_get_affinity_mask. Since the previous commit already updated places that needed to call irq_data_update_affinity, this commit updates the remaining code that either did not modify the cpumask or immediately passed the modified mask to irq_set_affinity. Signed-off-by: Samuel Holland <samuel@sholland.org> Reviewed-by: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220701200056.46555-8-samuel@sholland.org
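A sketch of the resulting calling convention on the driver side (illustrative functions):

    #include <linux/irq.h>
    #include <linux/printk.h>

    static void demo_show_affinity(struct irq_data *d)
    {
        /* Readers get a const mask they may not modify in place. */
        const struct cpumask *mask = irq_data_get_affinity_mask(d);

        pr_info("first CPU in affinity mask: %u\n", cpumask_first(mask));
    }

    static void demo_store_affinity(struct irq_data *d, const struct cpumask *new)
    {
        /* Writers go through the dedicated update helper instead. */
        irq_data_update_affinity(d, new);
    }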
2022-07-07  genirq: Don't return error on missing optional irq_request_resources()  (Antonio Borneo)
Function irq_chip::irq_request_resources() is reported as optional in the declaration of struct irq_chip. If the parent irq_chip does not implement it, we should ignore it and return. Don't return an error if the function is missing. Signed-off-by: Antonio Borneo <antonio.borneo@foss.st.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220512160544.13561-1-antonio.borneo@foss.st.com
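Conceptually, forwarding resource requests to the parent then treats a missing callback as success rather than an error; a rough sketch of the pattern, assuming hierarchical domains (irq_data::parent_data) and not the exact helper:

    #include <linux/irq.h>

    static int demo_request_parent_resources(struct irq_data *data)
    {
        data = data->parent_data;

        /* The callback is optional: absence is not an error. */
        if (!data->chip->irq_request_resources)
            return 0;

        return data->chip->irq_request_resources(data);
    }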
2022-06-09  genirq: PM: Use runtime PM for chained interrupts  (Marc Zyngier)
When requesting an interrupt, we correctly call into the runtime PM framework to guarantee that the underlying interrupt controller is up and running. However, we fail to do so for chained interrupt controllers, as the mux interrupt is not requested along the same path. Augment __irq_do_set_handler() to call into the runtime PM code in this case, making sure the PM flow is the same for all interrupts. Reported-by: Lucas Stach <l.stach@pengutronix.de> Tested-by: Liu Ying <victor.liu@nxp.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/26973cddee5f527ea17184c0f3fccb70bc8969a0.camel@pengutronix.de
2022-05-02  genirq: Use pm_runtime_resume_and_get() instead of pm_runtime_get_sync()  (Minghao Chi)
pm_runtime_resume_and_get() achieves the same and simplifies the code. [ tglx: Simplify it further by presetting retval ] Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20220418110716.2559453-1-chi.minghao@zte.com.cn
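A before/after sketch of the simplification (illustrative function, assuming a valid struct device):

    #include <linux/pm_runtime.h>

    static int demo_power_up(struct device *dev)
    {
        int ret;

        /*
         * Before:
         *   ret = pm_runtime_get_sync(dev);
         *   if (ret < 0) {
         *       pm_runtime_put_noidle(dev);  // usage count stays elevated otherwise
         *       return ret;
         *   }
         */

        /* After: the combined helper drops the reference itself on failure. */
        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0)
            return ret;

        return 0;
    }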
2022-02-15  genirq: Allow irq_chip registration functions to take a const irq_chip  (Marc Zyngier)
In order to let a const irqchip be fed to the irqchip layer, adjust the various prototypes. An extra cast in irq_set_chip() is required to avoid a warning. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20220209162607.1118325-3-maz@kernel.org
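With const-qualified registration prototypes, a driver can keep its chip in read-only data; a minimal illustrative sketch (assumed names, relying on the const-aware helpers after this change):

    #include <linux/irq.h>

    static void demo_ack(struct irq_data *d)    { /* ack at the hardware */ }
    static void demo_mask(struct irq_data *d)   { /* mask at the hardware */ }
    static void demo_unmask(struct irq_data *d) { /* unmask at the hardware */ }

    static const struct irq_chip demo_chip = {
        .name       = "demo",
        .irq_ack    = demo_ack,
        .irq_mask   = demo_mask,
        .irq_unmask = demo_unmask,
    };

    static void demo_map(unsigned int virq)
    {
        /* Registration helpers now accept a pointer to const. */
        irq_set_chip_and_handler(virq, &demo_chip, handle_level_irq);
    }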
2022-02-10  genirq: Kill irq_chip::parent_device  (Marc Zyngier)
Now that no one is using irq_chip::parent_device in the tree, get rid of it. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Bartosz Golaszewski <brgl@bgdev.pl> Link: https://lore.kernel.org/r/20220201120310.878267-13-maz@kernel.org
2022-02-09  genirq: Allow the PM device to originate from irq domain  (Marc Zyngier)
As a preparation to moving the reference to the device used for runtime power management, add a new 'dev' field to the irqdomain structure for that exact purpose. The irq_chip_pm_{get,put}() helpers are made aware of the dual location via a new private helper. No functional change intended. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Tony Lindgren <tony@atomide.com> Acked-by: Bartosz Golaszewski <brgl@bgdev.pl> Link: https://lore.kernel.org/r/20220201120310.878267-2-maz@kernel.org
2022-01-07  irq: remove unused flags argument from __handle_irq_event_percpu()  (Sebastian Andrzej Siewior)
The __IRQF_TIMER bit from the flags argument was used in add_interrupt_randomness() to distinguish the timer interrupt from other interrupts. This is no longer the case. Remove the flags argument from __handle_irq_event_percpu(). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-10-26  genirq: Hide irq_cpu_{on,off}line() behind a deprecated option  (Marc Zyngier)
irq_cpu_{on,off}line() are now only used by the Octeon platform. Make their use conditional on this platform being enabled, and otherwise hidden away. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Serge Semin <fancer.lancer@gmail.com> Link: https://lore.kernel.org/r/20211021170414.3341522-4-maz@kernel.org
2021-08-10  genirq: Provide IRQCHIP_AFFINITY_PRE_STARTUP  (Thomas Gleixner)
X86 IO/APIC and MSI interrupts (when used without interrupt remapping) require that the affinity setup on startup is done before the interrupt is enabled for the first time, as the non-remapped operation mode cannot safely migrate enabled interrupts from arbitrary contexts. Provide a new irq chip flag which allows affected hardware to request this. This has to be opt-in because there have been reports in the past that some interrupt chips cannot handle affinity setting before startup. Fixes: 18404756765c ("genirq: Expose default irq affinity mask (take 3)") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210729222542.779791738@linutronix.de
2021-05-17  genirq: Add an IRQF_NO_DEBUG flag  (Thomas Gleixner)
The whole call to note_interrupt() can be avoided or return early when interrupts are marked accordingly. For IPI handlers, which always return HANDLED, the whole procedure is pretty pointless to begin with. Add an IRQF_NO_DEBUG flag and mark the interrupt accordingly if supplied when the interrupt is requested. When noirqdebug is set on the kernel command line, the interrupt is marked unconditionally so that there is only one condition in the hotpath to evaluate. [ clg: Add changelog ] Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/7a8ad02f-63a8-c1aa-fdd1-39d973593d02@kaod.org
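Requesting such an interrupt with the debugging machinery disabled might look roughly like this (handler and names are illustrative):

    #include <linux/interrupt.h>

    static irqreturn_t demo_ipi_handler(int irq, void *dev_id)
    {
        /* Always does useful work, so note_interrupt() accounting adds nothing. */
        return IRQ_HANDLED;
    }

    static int demo_setup(unsigned int irq, void *dev_id)
    {
        return request_irq(irq, demo_ipi_handler, IRQF_NO_DEBUG,
                           "demo-ipi", dev_id);
    }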
2021-03-22  irq: Fix typos in comments  (Ingo Molnar)
Fix ~36 single-word typos in the IRQ, irqchip and irqdomain code comments. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Marc Zyngier <maz@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-16  genirq: Fix typos and misspellings in comments  (Krzysztof Kozlowski)
No functional change. Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210316100205.23492-1-krzysztof.kozlowski@canonical.com
2020-12-15  Merge tag 'irqchip-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core  (Thomas Gleixner)
Pull irqchip updates for 5.11 from Marc Zyngier:

  - Preliminary support for managed interrupts on platform devices
  - Correctly identify allocation of MSIs proxied by another device
  - Remove the fasteoi IPI flow which has been proved useless
  - Generalise the Ocelot support to new SoCs
  - Improve GICv4.1 vcpu entry, matching the corresponding KVM optimisation
  - Work around spurious interrupts on Qualcomm PDC
  - Random fixes and cleanups

Link: https://lore.kernel.org/r/20201212135626.1479884-1-maz@kernel.org
2020-12-11  Revert "genirq: Add fasteoi IPI flow"  (Valentin Schneider)
handle_percpu_devid_fasteoi_ipi() has no more users, and handle_percpu_devid_irq() can do all that it was supposed to do. Get rid of it. This reverts commit c5e5ec033c4ab25c53f1fd217849e75deb0bf7bf. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201109094121.29975-6-valentin.schneider@arm.com
2020-11-16  genirq: Fix kernel-doc markups  (Mauro Carvalho Chehab)
Some identifiers have different names between their prototypes and the kernel-doc markup. Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/13a44f4f0c3135e14b16ae8fcce4af1eab27cb5f.1605521731.git.mchehab+huawei@kernel.org
2020-10-12  Merge tag 'x86-irq-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 irq updates from Thomas Gleixner: "Surgery of the MSI interrupt handling to prepare the support of upcoming devices which require non-PCI based MSI handling:

  - Cleanup historical leftovers all over the place
  - Rework the code to utilize more core functionality
  - Wrap XEN PCI/MSI interrupts into an irqdomain to make irqdomain assignment to PCI devices possible.
  - Assign irqdomains to PCI devices at initialization time which allows to utilize the full functionality of hierarchical irqdomains.
  - Remove arch_.*_msi_irq() functions from X86 and utilize the irqdomain which is assigned to the device for interrupt management.
  - Make the arch_.*_msi_irq() support conditional on a config switch and let the last few users select it"

* tag 'x86-irq-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (40 commits)
  PCI: MSI: Fix Kconfig dependencies for PCI_MSI_ARCH_FALLBACKS
  x86/apic/msi: Unbreak DMAR and HPET MSI
  iommu/amd: Remove domain search for PCI/MSI
  iommu/vt-d: Remove domain search for PCI/MSI[X]
  x86/irq: Make most MSI ops XEN private
  x86/irq: Cleanup the arch_*_msi_irqs() leftovers
  PCI/MSI: Make arch_.*_msi_irq[s] fallbacks selectable
  x86/pci: Set default irq domain in pcibios_add_device()
  iommu/amd: Store irq domain in struct device
  iommu/vt-d: Store irq domain in struct device
  x86/xen: Wrap XEN MSI management into irqdomain
  irqdomain/msi: Allow to override msi_domain_alloc/free_irqs()
  x86/xen: Consolidate XEN-MSI init
  x86/xen: Rework MSI teardown
  x86/xen: Make xen_msi_init() static and rename it to xen_hvm_msi_init()
  PCI/MSI: Provide pci_dev_has_special_msi_domain() helper
  PCI: vmd: Mark VMD irqdomain with DOMAIN_BUS_VMD_MSI
  irqdomain/msi: Provide DOMAIN_BUS_VMD_MSI
  x86/irq: Initialize PCI/MSI domain at PCI init time
  x86/pci: Reduce #ifdeffery in PCI init code
  ...
2020-09-16  genirq/chip: Use the first chip in irq_chip_compose_msi_msg()  (Thomas Gleixner)
The documentation of irq_chip_compose_msi_msg() claims that with hierarchical irq domains the first chip in the hierarchy which has an irq_compose_msi_msg() callback is chosen. But the code just keeps iterating after it finds a chip with a compose callback. The x86 HPET MSI implementation relies on that behaviour, but that does not make it more correct. The message should always be composed at the domain which manages the underlying resource (e.g. APIC or remap table) because that domain knows about the required layout of the message. On X86 the following hierarchies exist:

  1) vector -------- PCI/MSI
  2) vector -- IR -- PCI/MSI

The vector domain has a different message format than the IR (remapping) domain. So obviously the PCI/MSI domain can't compose the message without having knowledge about the parent domain, which is exactly the opposite of what hierarchical domains want to achieve. X86 actually has two different PCI/MSI chips where #1 has a compose callback and #2 does not. #2 delegates the composition to the remap domain where it belongs, but #1 does it at the PCI/MSI level. For the upcoming device MSI support it's necessary to change this and just let the first domain which can compose the message take care of it. That way the top level chip does not have to worry about it and the device MSI code does not need special knowledge about topologies. It just sets the compose callback to NULL and lets the hierarchy pick the first chip which has one. Due to that, the attempt to move the compose callback from the direct delivery PCI/MSI domain to the vector domain made the system fail to boot with interrupt remapping enabled, because in the remapping case irq_chip_compose_msi_msg() keeps iterating and chooses the compose callback of the vector domain, which obviously creates the wrong format for the remap table. Break out of the loop when the first irq chip with a compose callback is found and fix up the HPET code temporarily. That workaround will be removed once the direct delivery compose callback is moved to the place where it belongs in the vector domain. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200826112331.047917603@linutronix.de
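In rough outline (simplified sketch, not the verbatim function, and assuming hierarchical domains via irq_data::parent_data), the lookup now stops at the first chip that provides a compose callback:

    #include <linux/irq.h>
    #include <linux/msi.h>

    static int demo_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
    {
        struct irq_data *pos;

        /* Walk from the top of the hierarchy, stop at the first compose callback. */
        for (pos = NULL; !pos && data; data = data->parent_data) {
            if (data->chip && data->chip->irq_compose_msi_msg)
                pos = data;
        }

        if (!pos)
            return -ENOSYS;

        pos->chip->irq_compose_msi_msg(pos, msg);
        return 0;
    }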
2020-09-13  genirq: Add fasteoi IPI flow  (Marc Zyngier)
For irqchips using the fasteoi flow, IPIs are a bit special. They need to be EOI'd early (before calling the handler), as funny things may happen in the handler (they do not necessarily behave like a normal interrupt). Reviewed-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>